GRASP: A Sampling Meta-Heuristic
Topics
• What is GRASP
• The Procedure
• Applications
• Merit
What is GRASP
GRASP: Greedy Randomized Adaptive Search Procedure
Random construction:
• TSP: randomly select the next city to add
• High solution variance
• Low solution quality
Greedy construction:
• TSP: select the nearest city to add
• High solution quality
• Low solution variance
GRASP tries to combine the advantages of random and greedy solution construction, as sketched below.
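A minimal Python sketch contrasting the two construction rules on the TSP; it assumes `dist` is a square distance matrix (list of lists), which is not part of the slides.

```python
import random

def random_tour(dist):
    """Random construction: the next city is chosen uniformly at random."""
    cities = list(range(len(dist)))
    random.shuffle(cities)
    return cities

def greedy_tour(dist, start=0):
    """Greedy (nearest-neighbour) construction: always add the closest unvisited city."""
    tour, unvisited = [start], set(range(len(dist))) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```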
The Knapsack Example
Knapsack problem:
• Backpack: 8 units of space, 4 items to pick from
• Item values (dollars): 2, 5, 7, 9
• Item costs (space units): 1, 3, 5, 7
Construction heuristics:
• Heuristic 1: pick the most valuable item
• Heuristic 2: pick the most valuable item per unit of space
Solution Quality
• Heuristic 1 (most valuable item): items (1, 4), value 11
• Heuristic 2 (most value per unit): items (1, 2), value 7
• Optimal solution: items (2, 3), value 12
• Neither heuristic finds the optimal solution
• This is true for any heuristic: for an NP-hard problem, no polynomial-time algorithm is known (unless P = NP)
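A hedged sketch of the two greedy functions on this instance; the helper `greedy_knapsack` and its `key` callbacks are illustrative names, not from the slides (item indices are 0-based in the code, 1-based on the slides).

```python
def greedy_knapsack(values, costs, capacity, key):
    """Greedy construction: repeatedly add the best-rated item that still fits."""
    chosen, space = [], capacity
    while True:
        feasible = [i for i in range(len(values))
                    if i not in chosen and costs[i] <= space]
        if not feasible:
            return chosen, sum(values[i] for i in chosen)
        best = max(feasible, key=key)
        chosen.append(best)
        space -= costs[best]

values, costs = [2, 5, 7, 9], [1, 3, 5, 7]
# Heuristic 1 (most valuable item): picks items 4 then 1 -> value 11
print(greedy_knapsack(values, costs, 8, key=lambda i: values[i]))
# Heuristic 2 (most value per unit of space): picks items 1 then 2 -> value 7
print(greedy_knapsack(values, costs, 8, key=lambda i: values[i] / costs[i]))
```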
Semi-Greedy Heuristics
At each step, add a highly rated, but not necessarily the highest rated, solution component:
• Put the high-rated (not only the highest) candidate components into a restricted candidate list (RCL)
• Choose one element of the RCL at random and add it to the partial solution
• Adaptive element: the greedy function is re-evaluated against the partial solution constructed so far
• Repeat until a full solution is constructed (see the sketch below)
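A minimal sketch of this construction loop; `fits` and `rate` are hypothetical problem-specific callbacks (feasibility test and greedy rating), introduced here only for illustration.

```python
import random

def semi_greedy_construct(candidates, fits, rate, rcl_size=3):
    """Semi-greedy construction with a restricted candidate list (RCL)."""
    partial, remaining = [], list(candidates)
    while True:
        feasible = [c for c in remaining if fits(partial, c)]
        if not feasible:                    # nothing fits any more: solution is complete
            return partial
        # Adaptive: the greedy rating is recomputed against the current partial solution.
        feasible.sort(key=lambda c: rate(partial, c), reverse=True)
        rcl = feasible[:rcl_size]           # the high-rated (not only the highest) components
        choice = random.choice(rcl)         # random selection from the RCL
        partial.append(choice)
        remaining.remove(choice)
```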
Mechanism of the RCL
Size of the restricted candidate list:
• If the RCL is as large as the full candidate list, the semi-greedy heuristic degenerates into a purely random heuristic
• If the RCL has size 1, the semi-greedy heuristic degenerates into the pure greedy heuristic
• Typically the size is set between 3 and 5
GRASP
Repeat the following until the stopping criterion is satisfied:
• Phase I: construct a solution according to a greedy myopic measure of goodness (GMMOG), selecting at random from a restricted candidate list
• Phase II: apply a local search improvement heuristic to the constructed solution
A minimal sketch of this loop follows.
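The sketch below assumes maximization; `construct`, `local_search`, and `evaluate` are hypothetical callbacks standing in for the problem-specific pieces, and a fixed iteration count stands in for the stopping criterion.

```python
def grasp(construct, local_search, evaluate, iterations=100):
    """GRASP: repeat semi-greedy construction (Phase I) plus local search (Phase II),
    keeping the best solution found over all restarts."""
    best, best_value = None, float("-inf")
    for _ in range(iterations):
        solution = construct()              # Phase I: randomized greedy construction
        solution = local_search(solution)   # Phase II: local improvement
        value = evaluate(solution)
        if value > best_value:
            best, best_value = solution, value
    return best, best_value
```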
GRASP
GRASP combines a semi-greedy construction heuristic with a local search procedure.
Local search from a random construction:
• The best solution found is often better than the greedy one, if the problem is not too large
• Average solution quality is worse than the greedy heuristic
• High variance
Local search from a greedy construction:
• Average solution quality better than random
• Low (or no) variance
The Knapsack Example
Knapsack problem:
• Backpack: 8 units of space, 4 items to pick from
• Item values (dollars): 2, 5, 7, 9
• Item costs (space units): 1, 3, 5, 7
Two greedy functions:
• Pick the most valuable item
• Pick the most valuable item per unit of space
GRASP
The most valuable item, with RCL size 2:
• Items 4 and 3, with values 9 and 7, are in the RCL
• Flip a coin, we select ….
The most value per unit of space, with RCL size 2:
• Items 1 and 2 are in the RCL, with ratios 2/1 = 2 and 5/3 ≈ 1.7
• Flip a coin, we select ….
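The same RCL step written out for this instance (a sketch only; item numbers are 1-based on the slides and 0-based in the code):

```python
values, costs = [2, 5, 7, 9], [1, 3, 5, 7]
items = range(len(values))

# RCL of size 2 under "most valuable item": items 4 and 3 (values 9 and 7)
rcl_value = sorted(items, key=lambda i: values[i], reverse=True)[:2]

# RCL of size 2 under "most value per unit": items 1 and 2 (ratios 2.0 and ~1.7)
rcl_ratio = sorted(items, key=lambda i: values[i] / costs[i], reverse=True)[:2]

print([i + 1 for i in rcl_value], [i + 1 for i in rcl_ratio])  # -> [4, 3] [1, 2]
```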
GRASP Extensions
Merits:
• Fast
• High-quality solutions
• Suited to time-critical decisions
• Few parameters to tune
Extensions:
• Reactive GRASP – adapts the RCL size
• Use of elite solutions found
• Long-term memory, path relinking
Literature
• T.A. Feo and M.G.C. Resende, "A probabilistic heuristic for a computationally difficult set covering problem," Operations Research Letters, 8:67-71, 1989.
• P. Festa and M.G.C. Resende, "GRASP: An annotated bibliography," in P. Hansen and C.C. Ribeiro, editors, Essays and Surveys in Metaheuristics, Kluwer Academic Publishers, 2001.
• M.G.C. Resende and C.C. Ribeiro, "Greedy randomized adaptive search procedures," in F. Glover and G. Kochenberger, editors, Handbook of Metaheuristics, Kluwer Academic Publishers, 219-249, 2002.
Neighbourhood
• For each solution S in the solution space 𝒮, the neighbourhood N(S) ⊆ 𝒮
• Each T ∈ N(S) is in some sense "close" to S
• Defined in terms of some operation on solutions
• Very like the "action" in search
Neighbourhood
Exchange neighbourhood: exchange k things in a sequence or partition.
Examples:
• Knapsack problem: exchange k1 items inside the bag with k2 items not in it (for k1, k2 ∈ {0, 1, 2, 3}), as sketched below
• Matching problem: exchange one marriage for another
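A hedged sketch of the (k1, k2)-exchange neighbourhood for the knapsack case; the function name is an illustrative choice, and the capacity check is deliberately left to the caller.

```python
from itertools import combinations

def exchange_neighbourhood(inside, outside, k1=1, k2=1):
    """Yield every solution obtained by removing k1 items from the bag and
    inserting k2 items currently outside it; feasibility is checked by the caller."""
    for removed in combinations(inside, k1):
        for inserted in combinations(outside, k2):
            yield (set(inside) - set(removed)) | set(inserted)

# Example: 1-for-1 exchanges around the solution {1, 4} with items {2, 3} outside
for neighbour in exchange_neighbourhood({1, 4}, {2, 3}):
    print(sorted(neighbour))
```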
3-opt Exchange
• Select three arcs of the tour
• Replace them with three others
• 2 orientations possible
Neighbourhood
Strongly connected:
• Any solution can be reached from any other (e.g. 2-opt)
Weakly optimally connected:
• The optimum can be reached from any starting solution
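For concreteness, a minimal sketch of the 2-opt neighbourhood cited as an example of strong connectivity (remove two arcs and reconnect with the segment between them reversed); the function name is an illustrative choice, and the first city is kept fixed for simplicity.

```python
def two_opt_neighbourhood(tour):
    """Yield every tour obtained by a 2-opt move: reverse the segment between
    two cut points, which replaces two arcs of the tour with two others."""
    n = len(tour)
    for i in range(1, n - 1):
        for j in range(i + 2, n + 1):
            yield tour[:i] + tour[i:j][::-1] + tour[j:]
```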
Neighbourhood
• Hard constraints create impenetrable "mountain ranges" in the solution landscape
• Soft constraints allow passes through the mountains
Example: map colouring (k-colouring)
• Colour a map (graph) so that no two adjacent countries (nodes) have the same colour
• Use at most k colours
• Minimize the number of colours
Map Colouring
[Figure: a starting solution and two optimal solutions]
• Define the neighbourhood as: change the colour of at most one vertex
• Make the k-colour constraint soft…
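A minimal sketch of this neighbourhood and of the softened constraint, assuming the graph is given as a list of edges and a colouring as a dict from vertex to colour (both representations are my choice, not from the slides).

```python
def conflicts(colouring, edges):
    """Soft k-colouring objective: number of edges whose endpoints share a colour
    (0 means the hard constraint is satisfied)."""
    return sum(colouring[u] == colouring[v] for u, v in edges)

def recolour_neighbourhood(colouring, k):
    """Neighbourhood: change the colour of at most one vertex."""
    yield dict(colouring)                      # "at most one" includes changing nothing
    for v, old in colouring.items():
        for colour in range(k):
            if colour != old:
                yield {**colouring, v: colour}
```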
Variable Neighbourhood Search
• Large neighbourhoods are expensive to search
• Small neighbourhoods are less effective
• Idea: only search a larger neighbourhood when the smaller one is exhausted
Variable Neighbourhood Search
• m neighbourhoods Ni, with |N1| < |N2| < |N3| < … < |Nm|
• Find an initial solution S; best = z(S); k = 1
• Search Nk(S) to find its best solution T
• If z(T) < z(S): S = T, k = 1; else k = k + 1
• Repeat until k > m (or another stopping criterion is met); a sketch follows
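A minimal sketch of the loop above, assuming minimization; `best_of(N_k, S)` is a hypothetical callback that returns the best solution in neighbourhood N_k around S.

```python
def vns(initial, neighbourhoods, z, best_of):
    """Basic variable neighbourhood search: search the smallest neighbourhood first,
    move to a larger one only when no improvement is found, and drop back to the
    smallest neighbourhood after every improving move."""
    s, k = initial, 0
    while k < len(neighbourhoods):
        t = best_of(neighbourhoods[k], s)
        if z(t) < z(s):       # improvement: accept it and restart from N1
            s, k = t, 0
        else:                 # no improvement: try the next, larger neighbourhood
            k += 1
    return s
```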
VNS does not follow a trajectory, unlike SA and tabu search.