Greedy Algorithms 15-211 Fundamental Data Structures and Algorithms Peter Lee March 19, 2004
Announcements • HW6 is due on April 5! • Quiz #2 postponed until March 31 • an online quiz • requires up to one hour of uninterrupted time with a web browser • actually, only a 15-minute quiz • must be completed by April 1, 11:59pm
Example: Counting change • Suppose we want to give out change, using the minimal number of bills and coins.
A change-counting algorithm • An easy algorithm for giving out N cents in change: • Choose the largest bill or coin that is ≤ N. • Subtract the value of the chosen bill/coin from N, to get a new value of N. • Repeat until a total of N cents has been counted. • Does this work? I.e., does this really give out the minimal number of coins and bills?
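The three steps above can be sketched as follows, assuming US denominations in cents; the list and function name are illustrative, not from the lecture:

```python
US_DENOMS = [100, 50, 25, 10, 5, 1]   # bills/coins in cents, largest first

def greedy_change(n, denoms=US_DENOMS):
    """Repeatedly take the largest denomination <= n until n reaches 0."""
    result = []
    for d in denoms:
        while n >= d:
            result.append(d)
            n -= d
    return result

print(greedy_change(67))  # [50, 10, 5, 1, 1]
```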
Our simple algorithm • For US currency, this simple algorithm actually works. • Why do we call this a greedy algorithm?
Greedy algorithms • At every step, a greedy algorithm • makes a locally optimal decision, • with the idea that in the end it all adds up to a globally optimal solution. • Being optimistic like this usually leads to very simple algorithms.
Lu Lu’s Pan Fried Noodle Shop Over on Craig Street… Think Globally Act Locally Eat Noodles How Californian...
But… • What happens if we have a 12-cent coin?
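With a hypothetical 12-cent coin in the mix, the same greedy rule breaks down; a sketch (the denomination set is illustrative):

```python
def greedy_change(n, denoms):
    """Repeatedly take the largest denomination that still fits in n."""
    result = []
    for d in sorted(denoms, reverse=True):
        while n >= d:
            result.append(d)
            n -= d
    return result

# Making 16 cents with a 12-cent coin available:
print(greedy_change(16, [25, 12, 10, 5, 1]))  # [12, 1, 1, 1, 1]: five coins,
                                              # but 10 + 5 + 1 needs only three
```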
Hill-climbing • Greedy algorithms are often visualized as “hill-climbing”. • Suppose you want to reach the summit, but can only see 10 yards ahead and behind (due to thick fog). Which way?
Hill-climbing, cont’d • Making the locally-best guess is efficient and easy, but doesn’t always work.
Where have we seen this before? • Greedy algorithms are common in computer science • In fact, from last week…
Finding shortest airline routes [Figure: graph of nine US airports (BOS, ORD, PVD, JFK, SFO, BWI, LAX, DFW, MIA) with edge labels giving the mileage between connected cities]
Three 2-hop BWI->DFW routes [Figure: the same airport graph, highlighting the three two-hop routes from BWI to DFW: via ORD (621 + 802), via JFK (184 + 1391), and via MIA (946 + 1121)]
A greedy algorithm • Assume that every city is infinitely far away. • I.e., every city is ∞ miles away from BWI (except BWI, which is 0 miles away). • Now perform something similar to breadth-first search, and optimistically guess that we have found the best path to each city as we encounter it. • If we later discover we are wrong and find a better path to a particular city, then update the distance to that city.
Intuition behind Dijkstra’s alg. • For our airline-mileage problem, we can start by guessing that every city is ∞ miles away. • Mark each city with this guess. • Find all cities one hop away from BWI, and check whether the mileage is less than what is currently marked for that city. • If so, then revise the guess. • Continue for 2 hops, 3 hops, etc.
Shortest mileage from BWI [Figure sequence: the airport graph with distance labels revised step by step. BWI starts at 0 and every other city at ∞. After the one-hop cities are processed: JFK = 184, ORD = 621, MIA = 946. Then PVD = 328 and BOS = 371 via JFK, and DFW is first guessed at 1575 via JFK. DFW later improves to 1423 via ORD; SFO is first guessed at 3075 via BOS, then improves to 2467 via ORD; LAX is first guessed at 3288 via MIA, then improves to 2658 via DFW.]
Dijkstra’s algorithm • Algorithm initialization: • Label each node with the distance ∞, except the start node, which is labeled with distance 0. • D[v] is the distance label for v. • Put all nodes into a priority queue Q, using the distance labels as keys.
Dijkstra’s algorithm, cont’d • While Q is not empty do: • u = Q.removeMin() • for each node z one hop away from u do: • if D[u] + miles(u,z) < D[z] then • D[z] = D[u] + miles(u,z) • change key of z in Q to D[z] • Note: the priority queue lets us extract the closest unfinished node quickly (in O(log N) time).
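The pseudocode above can be sketched in Python with the standard-library heap. `heapq` has no change-key operation, so this sketch pushes a fresh entry whenever a distance improves and skips stale entries on removal; the edge mileages are assumed from the airline-route figure:

```python
import heapq

def dijkstra(graph, start):
    """graph maps node -> list of (neighbor, miles).
    Returns the shortest distance from start to every node."""
    dist = {v: float("inf") for v in graph}
    dist[start] = 0
    pq = [(0, start)]                     # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale entry: u already finished
        for z, miles in graph[u]:
            if dist[u] + miles < dist[z]:
                dist[z] = dist[u] + miles
                heapq.heappush(pq, (dist[z], z))
    return dist

# Airline-mileage example; edges assumed from the route diagram.
edges = [("BOS", "SFO", 2704), ("BOS", "ORD", 867), ("BOS", "JFK", 187),
         ("BOS", "MIA", 1258), ("ORD", "SFO", 1846), ("ORD", "PVD", 849),
         ("ORD", "JFK", 740), ("ORD", "BWI", 621), ("ORD", "DFW", 802),
         ("PVD", "JFK", 144), ("JFK", "BWI", 184), ("JFK", "DFW", 1391),
         ("JFK", "MIA", 1090), ("BWI", "MIA", 946), ("DFW", "SFO", 1464),
         ("DFW", "LAX", 1235), ("DFW", "MIA", 1121), ("LAX", "SFO", 337),
         ("LAX", "MIA", 2342)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

dist = dijkstra(graph, "BWI")
print(dist["DFW"], dist["LAX"])  # 1423 2658
```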
Example 2: Fractional knapsack problem (FKP) • You rob a store: find n kinds of items • Gold dust. Wheat. Beer. • The total inventory of the i-th kind of item: • Weight: wi pounds • Value: vi dollars • Knapsack can hold a maximum of W pounds. • Q: how much of each kind of item should you take? (Can take fractional weight)
FKP: solution • Greedy solution: • Fill knapsack with the “most valuable” item until all of it is taken. • Most valuable = vi / wi (dollars per pound) • Then the next “most valuable” item, etc. • Until the knapsack is full.
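The greedy rule above can be sketched as follows; the function name and the (value, weight) inventories are illustrative:

```python
def fractional_knapsack(items, W):
    """items: list of (value $, weight lbs) inventories.
    Take items in decreasing $/lb order; fractions are allowed."""
    total = 0.0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(w, W)            # all of it, or whatever still fits
        total += take * (v / w)
        W -= take
        if W == 0:
            break
    return total

# Hypothetical inventories for gold dust, wheat, beer; capacity 50 lbs:
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```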
Ingredients of a greedy alg. • An optimization problem. • Is iterative / Proceeds in stages. • Has the greedy-choice property: A greedy choice will lead to a globally optimal solution.
FKP is greedy • An optimization problem: • Maximize value of loot, subject to maximum weight W.(constrained optimization) • Proceeds in stages: • Knapsack is filled with one item at a time.
FKP is greedy • Greedy-choice property: A locally greedy choice will lead to a globally optimal solution. • In two steps: • Step 1: Does the optimal solution contain the greedy choice? • Step 2: Can the greedy choice always be made first?
FKP: Greedy-choice: Step 1 • Consider the total value, V, of the knapsack. • The knapsack must contain item h: • Item h is the item with the highest $/lb. • Why? Because if h is not included, we can replace some other item in the knapsack with an equivalent weight of h, and increase V. • This can continue until the knapsack is full, or all of h is taken. • Therefore any optimal solution must include the greedy choice.
More rigorously… Let item h be the item with the highest $/lb. The total inventory of h is wh pounds, and its total value is vh dollars. Let ki be the weight of item i in the knapsack. Then the total value is V = Σi ki (vi / wi). If kh < wh, and kj > 0 for some j ≠ h, then replace j with an equal weight of h. Let the new total value be V’. The difference in total value is V’ − V = kj (vh / wh − vj / wj) ≥ 0, since vh / wh ≥ vj / wj by the definition of h. Therefore all of item h should be taken.
FKP: Greedy-choice: Step 2 • Now we want to show that we can always make the greedy choice first. • If the total inventory of item h is more than the knapsack can hold, then fill the knapsack completely with h. • No other item gives higher total value. • Otherwise, the knapsack contains all of h and some other item. We can always make h the first choice, without changing the total value V. • Therefore the greedy choice can always be made first.
More rigorously… • Case I: wh ≥ W • Fill the knapsack completely with h. • No other item gives higher total value. • Case II: wh < W • Let the 1st choice be item i, and the kth choice be h; then we can always swap our 1st and kth choices, and the total value V remains unchanged. • Therefore the greedy choice can always be made first.
The Binary Knapsack Problem • You win the Supermarket Shopping Spree contest. • You are given a shopping cart with capacity C. • You are allowed to fill it with any items you want from Giant Eagle. • Giant Eagle has items 1, 2, … n, which have values v1, v2, …, vn, and sizes s1, s2, …, sn. • How do you (efficiently) maximize the value of the items in your cart?
BKP is not greedy • The obvious greedy strategy of taking the maximum value item that still fits in the cart does not work. • Consider: • Suppose item i has size si = C and value vi. • It can happen that there are items j and k with combined size sj + sk ≤ C but vj + vk > vi.
BKP: Greedy approach fails • Maximum weight = 50 lbs • item 1: $60, 10 lbs • item 2: $100, 20 lbs • item 3: $120, 30 lbs • Possible ways to fill the knapsack: items 1 + 2 = $160, items 1 + 3 = $180, items 2 + 3 = $220 (optimal). • Greedy by $/lb takes item 1 first, and ends with $160, which is not optimal. • BKP has optimal substructure, but not the greedy-choice property: the optimal solution does not contain the greedy choice.
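The failure can be checked directly; a sketch comparing the greedy fill against a brute-force search over all subsets, using the values and sizes from the example:

```python
from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]   # (value $, size lbs)
C = 50                                     # knapsack capacity in lbs

# Greedy: take whole items in decreasing $/lb order while they fit.
greedy_val, room = 0, C
for v, s in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
    if s <= room:
        greedy_val += v
        room -= s

# Brute force: best value over all subsets that fit.
best = max(sum(v for v, s in sub)
           for r in range(len(items) + 1)
           for sub in combinations(items, r)
           if sum(s for v, s in sub) <= C)

print(greedy_val, best)  # 160 220: greedy falls short of the optimum
```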
A question for a future lecture… • How can we (efficiently) solve the binary knapsack problem? • One possible approach: • Dynamic programming
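As a preview, here is one standard dynamic-programming sketch for the binary knapsack (not from this lecture; the function name and the integer-capacity assumption are illustrative):

```python
def knapsack_01(items, C):
    """Textbook O(n*C) dynamic program for the binary knapsack.
    best[c] holds the best value achievable with capacity c using the
    items processed so far; sizes and capacity must be integers."""
    best = [0] * (C + 1)
    for v, s in items:
        for c in range(C, s - 1, -1):   # descending, so each item is used once
            best[c] = max(best[c], best[c - s] + v)
    return best[C]

# The failing greedy example from the previous slide, solved exactly:
print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220
```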