Advanced Algorithms CS 539/441, or: In Search of Efficient General Solutions
Joe Hoffert (Joseph.W.Hoffert@Boeing.com)
Outline
• General techniques for polynomial-time solutions
  - Greedy algorithms
  - Dynamic programming
  - Linear programming
• Problems with unlikely poly-time solutions (NP-complete)
• Next-best solutions (e.g., approximation algorithms)
• Lower-bound techniques
• On-line/dynamic algorithms
Greedy Algorithms
• A locally optimal ("greedy") choice leads to a globally optimal solution. For many problems it does not, so this must be proven.
• Framework:
  - Prove the greedy-choice property holds: the first step g made by the greedy algorithm is part of some optimal solution S (i.e., choosing g does no worse than an arbitrary optimal solution).
  - Prove the optimal-substructure property holds: the subproblem P' of P left after g is chosen is optimally solved within S. That is, the solution to P' contained within S is optimal for P'.
• Examples: Earliest Deadline First (EDF) scheduling, Huffman coding
Greedy Algorithm Example
[Figure: five jobs being arranged on a single-machine timeline]
• Goal: maximize the number of jobs that meet their deadlines.
• EDF scheduling: sort jobs by deadline, O(n log n), and schedule the job with the earliest deadline first.
• Greedy choice: must prove that scheduling the earliest-deadline job first is at least as good as any other first choice.
• Optimal substructure: the subproblem P' is to schedule the remaining jobs (jobs 2-5); must prove this subproblem is independent of the first choice made.
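A minimal sketch of the example, using the classic Moore-Hodgson greedy (EDF ordering plus eviction of the longest scheduled job whenever a deadline would be missed), which maximizes the number of on-time jobs on one machine; plain EDF ordering alone suffices when all jobs have equal length. The job data below is illustrative, not from the slide.

```python
import heapq

def max_on_time_jobs(jobs):
    """Greedy (Moore-Hodgson) sketch: maximize the number of jobs that
    finish by their deadlines on a single machine.

    jobs: list of (processing_time, deadline) pairs.
    Greedy choice: consider jobs in EDF order; repair by evicting the
    longest scheduled job whenever the current one would be late.
    """
    scheduled = []  # max-heap of processing times (stored negated)
    total = 0       # total processing time of scheduled jobs
    for p, d in sorted(jobs, key=lambda j: j[1]):  # EDF order, O(n log n)
        heapq.heappush(scheduled, -p)
        total += p
        if total > d:
            # Deadline d is missed: drop the longest job scheduled so far
            # (heappop returns the most negative entry, i.e. -max_p).
            total += heapq.heappop(scheduled)
    return len(scheduled)

print(max_on_time_jobs([(1, 2), (2, 4), (1, 5), (4, 6)]))  # -> 3
```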
Dynamic Programming
• A locally optimal choice does not lead to a globally optimal solution, but there is still optimal substructure.
• Framework:
  - Try all possible first choices: one of them must be the correct first choice.
  - Prove the optimal-substructure property holds: the subproblem P' of P left after the first choice is optimally solved within S. That is, the solution S' to P' contained within S is optimal for P'.
  - Use a bottom-up approach to efficiently compute the optimal cost for all subproblems, which recur across different first choices (a.k.a. overlapping subproblems).
• Examples: assembly-line scheduling, longest common subsequence
Dynamic Programming Example
• Assembly-line scheduling: find the minimal-cost route through the stations.
[Figure: two parallel assembly lines, each with n stations; a chassis enters (entry times e1, e2), passes through stations with processing times a_{i,j}, may transfer between lines after each station (transfer times t_{i,j}), and the completed auto exits (exit times x1, x2)]
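The recurrence behind the figure can be written as a short bottom-up DP: each station's fastest finishing time reuses only the previous station's two answers. The numeric data below is illustrative, not the figure's exact values.

```python
def assembly_line_time(e, x, a, t):
    """Bottom-up DP sketch for two-line assembly scheduling.

    e[i]: entry time onto line i;  x[i]: exit time from line i
    a[i][j]: processing time at station j of line i
    t[i][j]: transfer time when leaving line i after station j
    """
    n = len(a[0])
    f1 = e[0] + a[0][0]  # fastest way to finish station 0 on line 1
    f2 = e[1] + a[1][0]  # ... and on line 2
    for j in range(1, n):
        # Stay on the same line, or pay a transfer from the other one;
        # tuple assignment uses the previous f1/f2 simultaneously.
        f1, f2 = (min(f1, f2 + t[1][j - 1]) + a[0][j],
                  min(f2, f1 + t[0][j - 1]) + a[1][j])
    return min(f1 + x[0], f2 + x[1])

# Illustrative 3-station instance (not the figure's numbers):
print(assembly_line_time([1, 2], [1, 1],
                         [[4, 5, 3], [2, 10, 1]],
                         [[2, 2], [3, 1]]))  # -> 14
```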
Linear Programming
• Problem defined by a set of linear constraints (equalities or inequalities) and a linear objective.
• Poly-time solvable, but with a high exponent in the time complexity (e.g., n^8 to n^10). This proves a poly-time solution exists; more efficient solutions might still be found.
• Example 1: two-dimensional constraints. Maximize x1 + x2 subject to
  5x1 - 2x2 >= -2,  4x1 - x2 <= 8,  2x1 + x2 <= 10,  x1 >= 0,  x2 >= 0
  (objective level lines x1 + x2 = 0, 4, 8 sweep across the feasible region).
• Example 2: minimum-cost flow. Minimize the cost of sending 4 units of flow from s to t through a network whose edges have capacities and per-unit costs.
[Figure: the 2-D feasible region; the flow network over nodes s, x, y, t with edge capacities 5, 2, 1, 2, 4 and costs 2, 7, 3, 5, 1, and the resulting flow (2 of 5 at cost 2, 1 of 2 at cost 7, 1 of 1 at cost 3, 2 of 2 at cost 5, 3 of 4 at cost 1)]
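For intuition, the slide's 2-D instance can be solved by brute force: a bounded LP attains its optimum at a vertex of the feasible polygon, so enumerating pairwise intersections of constraint boundaries finds it. This is a sketch for geometry's sake; real solvers use simplex or interior-point methods.

```python
from itertools import combinations

# Constraints written uniformly as a*x1 + b*x2 <= c:
#   4x1 - x2 <= 8,  2x1 + x2 <= 10,  5x1 - 2x2 >= -2  ->  -5x1 + 2x2 <= 2,
#   x1 >= 0  ->  -x1 <= 0,  x2 >= 0  ->  -x2 <= 0
cons = [(4, -1, 8), (2, 1, 10), (-5, 2, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in cons)

best = float("-inf")
# A bounded LP's optimum lies at a vertex: the intersection of two
# constraint boundaries.  Enumerate all pairs, keep the feasible best.
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - b1 * a2
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no unique intersection
    x1 = (c1 * b2 - b1 * c2) / det  # Cramer's rule
    x2 = (a1 * c2 - c1 * a2) / det
    if feasible(x1, x2):
        best = max(best, x1 + x2)   # objective: maximize x1 + x2

print(best)  # -> 8.0, attained at the vertex (2, 6)
```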
NP-Complete Problems
• NP-complete problems are:
  - In complexity class NP (nondeterministic polynomial time), i.e., a proposed solution can be verified in polynomial time.
  - NP-hard, i.e., every other NP problem reduces to them in poly time.
• Unlikely to have poly-time solutions, though this is not (yet) proven.
• Example: the MP-Scheduling (multiprocessor scheduling) problem: given n jobs and m processors, is there a schedule in which all jobs are completed by a specified time?
[Figure: n jobs assigned across m processors]
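Membership in NP is the easy half: a proposed schedule is a certificate that can be checked in polynomial time, as in this sketch (the function name and data are illustrative). Finding such a schedule is the NP-hard part.

```python
def verify_schedule(durations, assignment, m, deadline):
    """Polynomial-time certificate check for MP-Scheduling.

    durations[j]: processing time of job j
    assignment[j]: processor (0..m-1) that job j runs on -- the certificate
    Returns True iff every processor finishes its jobs by the deadline.
    """
    loads = [0] * m
    for job, proc in enumerate(assignment):
        loads[proc] += durations[job]
    return all(load <= deadline for load in loads)

# Four jobs on two processors, deadline 7:
print(verify_schedule([3, 2, 2, 4], [0, 1, 1, 0], 2, 7))  # -> True
print(verify_schedule([3, 2, 2, 4], [0, 0, 1, 0], 2, 7))  # -> False
```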
Next-Best Solutions
• What to do when a problem is shown to be NP-complete:
  - Check whether your instance is a special case. For instance, vertex cover is NP-complete in general, but if the graph is a tree there is a poly-time algorithm.
  - The input size may be small enough that exponential time is not a problem.
  - Use a heuristic and hope it produces something good enough.
  - Use approximation algorithms, whose quality relative to optimal can be proven.
  - Use LP relaxation (integer programming is NP-complete, but the linear relaxation may give a good-enough answer), e.g.:
    · relaxing 0-1 knapsack to fractional knapsack
    · relaxing a non-preemptive schedule to allow preemption
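The first relaxation above is easy to sketch: once fractions of items are allowed, greedy by value density becomes exactly optimal, and its value upper-bounds the 0-1 optimum. The item data is illustrative.

```python
def fractional_knapsack(items, capacity):
    """Relaxation sketch: drop the 0-1 integrality constraint and allow
    fractions of items.  Greedy by value density is then optimal, and
    the result is an upper bound on the 0-1 knapsack optimum.

    items: list of (value, weight) pairs.
    """
    total = 0.0
    # Highest value per unit weight first.
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if capacity >= weight:
            total += value          # item fits whole
            capacity -= weight
        else:
            total += value * capacity / weight  # take a fraction, then stop
            break
    return total

# Same illustrative items, capacity 10: the 0-1 optimum here is 110,
# and the relaxation gives the upper bound 120.
print(fractional_knapsack([(60, 5), (50, 4), (40, 6), (30, 3)], 10))  # -> 120.0
```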
Approximation Algorithms
• Poly-time algorithms with provable bounds relative to the optimal solution.
• If the approximation bound is some constant c, the algorithm is said to be a c-approximation.
• Example: a 2-approximation for the 0-1 knapsack problem. Sort items by value per unit quantity (i.e., total value / amount) and start filling up the knapsack. Take the larger of the first item that won't completely fit and the sum of all the previous items in the knapsack. This must be at least 1/2 of the optimal solution, which makes it a 2-approximation.
[Figure: items sorted by value density being packed into the knapsack]
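The slide's 2-approximation can be sketched directly. As in the standard analysis, the sketch assumes every item individually fits in the knapsack; the item data is illustrative.

```python
def knapsack_2approx(items, capacity):
    """2-approximation sketch for 0-1 knapsack (assumes each item fits
    in the knapsack on its own).

    items: list of (value, weight) pairs.
    Greedy prefix + first overflowing item together cover the fractional
    optimum, so the larger of the two is at least half the 0-1 optimum.
    """
    prefix_value = 0
    remaining = capacity
    # Highest value per unit weight first.
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if weight <= remaining:
            prefix_value += value
            remaining -= weight
        else:
            # First item that won't completely fit: take the larger of it
            # and everything packed so far.
            return max(prefix_value, value)
    return prefix_value  # everything fit: exactly optimal

# Illustrative items (value, weight), capacity 10; 0-1 optimum is 110:
print(knapsack_2approx([(60, 5), (50, 4), (40, 6), (30, 3)], 10))  # -> 110
```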
Lower-Bound Techniques
• Determine the minimum number of operations needed by any solution; this provides guidance for how good an algorithm can possibly be.
• Decision-tree lower bound (when applicable):
  - All possible outcomes are enumerated as leaves of a decision tree.
  - The worst-case cost is the depth of the tree, which is at least log2 of the number of leaves.
  - Example: a lower bound for finding the median of two sorted arrays.
• Adversary strategy:
  - Devise a strategy that makes it "hard" for any algorithm to find a solution (i.e., maximizes the number of steps any algorithm needs).
  - Example: merging two sorted arrays.
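The decision-tree bound is just arithmetic once the outcomes are counted. The slide's median example needs more setup, so this sketch uses the classic sorting bound instead (an assumption of mine, not the slide's example):

```python
from math import ceil, factorial, log2

def decision_tree_lower_bound(num_outcomes):
    """A comparison-based algorithm is a binary decision tree whose
    leaves cover every possible outcome, so its worst-case depth (number
    of comparisons) is at least ceil(log2(number of outcomes))."""
    return ceil(log2(num_outcomes))

# Sorting n items has n! possible outcomes, giving the familiar
# Omega(n log n) comparison lower bound:
print(decision_tree_lower_bound(factorial(5)))   # n = 5  -> 7
print(decision_tree_lower_bound(factorial(10)))  # n = 10 -> 22
```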
Adversary Strategy Example
• Merging two sorted n-element arrays a1 <= ... <= an and b1 <= ... <= bn: how many comparisons must any algorithm make before the merged order is determined?
• Adversary strategy: answer every comparison so that the perfectly interleaved order a1, b1, a2, b2, ..., an, bn remains consistent.
• If fewer than 2n - 1 comparisons are made, some adjacent pair in the interleaving was never compared, so more than one possible answer is left: the adversary can switch the elements that have not been compared. Hence any merging algorithm needs at least 2n - 1 comparisons.
[Figure: the interleaved sequence a1, b1, ..., an, bn, with one uncompared adjacent pair a_i, b_i shown swapped]
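The 2n - 1 adversary bound can be compared numerically with the decision-tree bound for the same problem (merging has C(2n, n) possible interleavings). The adversary argument is at least as strong here, and it is tight, since the standard two-pointer merge uses exactly 2n - 1 comparisons in the worst case.

```python
from math import ceil, comb, log2

def info_theoretic_bound(n):
    """Decision-tree bound: merging two sorted n-arrays has C(2n, n)
    possible interleavings, each a distinct leaf of the tree."""
    return ceil(log2(comb(2 * n, n)))

def adversary_bound(n):
    """The slide's adversary argument: fewer than 2n - 1 comparisons
    leave an uncompared adjacent pair the adversary can swap."""
    return 2 * n - 1

for n in (4, 8, 16):
    print(n, info_theoretic_bound(n), adversary_bound(n))
# -> 4 7 7 / 8 14 15 / 16 30 31: for larger n the adversary bound
# is strictly stronger than the information-theoretic one.
```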
On-Line/Dynamic Algorithms
• Algorithms that have no knowledge of future requests/input, in contrast to off-line algorithms, which have complete knowledge of the input in advance.
• Goal: a provable bound on cost compared to the optimal off-line solution.
• Examples: ski rental vs. purchase; scheduling jobs with deadlines using EDF.
• Ski rental: with B = cost to buy and R = cost to rent, there is a 2-competitive algorithm, i.e., one with a provable bound of roughly 2B on total cost.
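The ski-rental bound can be checked numerically. A minimal sketch, assuming rent costs 1 per day and the break-even strategy "rent for B - 1 days, then buy if still skiing":

```python
def online_cost(ski_days, buy_cost):
    """Break-even online strategy (rent at 1/day): rent for buy_cost - 1
    days, then buy.  ski_days is unknown to the algorithm; it only
    reacts day by day."""
    if ski_days < buy_cost:
        return ski_days                   # season ended before we bought
    return (buy_cost - 1) + buy_cost      # rented, then bought: 2B - 1

def offline_cost(ski_days, buy_cost):
    """Clairvoyant optimum: knows the season length up front."""
    return min(ski_days, buy_cost)

# The strategy is 2-competitive: its cost never exceeds twice the
# off-line optimum, for any season length.
B = 10
assert all(online_cost(d, B) <= 2 * offline_cost(d, B) for d in range(1, 200))
print(online_cost(100, B), offline_cost(100, B))  # -> 19 10
```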
Conclusion
• Techniques exist for:
  - provably poly-time algorithms,
  - identifying problems unlikely to have poly-time algorithms,
  - next-best solutions with provable bounds.
• Knowing when to use which technique is mostly developed through practice and intuition.
• Why does it matter? It gives provable guarantees about your solutions.