
Data Mining Driven Neighborhood Search


Presentation Transcript


  1. Data Mining Driven Neighborhood Search Michele Samorani Manuel Laguna

  2. Problem • PROBLEM: In meta-heuristic methods based on neighborhood search, whenever a local optimum is encountered, the search must escape from it • Tabu Search uses tabu lists and other strategies • Other variations include Path Relinking, Randomization, and Restarting • FACT: Escape directions are set by a priori rules

  3. Goal of this work • A general framework for any class of problems: 1. Take a few instances of a class of problems 2. Consider their local optima 3. Learn 4. Given another instance 5. Use the knowledge to tackle it by using “smart” constraints

  4. Outline • How to learn the constraints • How to apply them • Results • Conclusions

  5. How to learn the constraints

  6. How to learn the constraints • Collect many local optima from instances of the same class of problems • For each local optimum Ai, consider the nearby local optima Bk (k in N(i)), forming pairs (Ai, Bk) • Label each pair ‘-’ if the objective function improves from Ai to Bk, and ‘+’ otherwise (an example sketch follows)
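The labeling step is easy to state in code. Below is a minimal Python sketch, not the authors' implementation: the solution representation, the `value` objective, and the `local_optima_near` neighborhood helper are hypothetical stand-ins for the problem-specific machinery.

```python
def label_pairs(local_optima, local_optima_near, value):
    """Form pairs (A_i, B_k) and label each '-' if the objective
    improves (decreases, for minimization) from A_i to B_k, '+' otherwise."""
    labeled = []
    for a in local_optima:
        for b in local_optima_near(a):  # the B_k near A_i
            labeled.append((a, b, '-' if value(b) < value(a) else '+'))
    return labeled
```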

  7. How to learn the constraints [Figure: local optima A1–A4, each linked to nearby local optima B1–B12; every pair (Ai, Bk) is labeled ‘-’ if improving, ‘+’ otherwise.]

  8. Example: CTAP • Constrained Task Allocation Problem (CTAP): • Assign m tasks to n CPUs minimizing the total cost • Costs: • Fixed cost: if CPU j is used, we pay Sj • Communication cost: if tasks p and q are in different CPUs, we pay c(p,q)
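As an illustration of this cost structure (again a sketch, not the authors' code), the CTAP objective fits in a few lines; `assign`, `fixed`, and `comm` are assumed data structures:

```python
def ctap_cost(assign, fixed, comm):
    """CTAP objective: fixed costs of used CPUs plus communication
    costs of task pairs split across CPUs.
    assign maps task -> CPU, fixed maps CPU j -> S_j,
    comm maps a task pair (p, q) -> c(p, q). All names are assumed."""
    total = sum(fixed[j] for j in set(assign.values()))  # pay S_j per used CPU
    for (p, q), c in comm.items():
        if assign[p] != assign[q]:  # tasks on different CPUs
            total += c              # pay c(p, q)
    return total
```

With this objective, the move on the next slide can be checked directly: emptying a CPU removes its fixed cost, and co-locating two communicating tasks removes their communication cost.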

  9. Example: CTAP • Suppose S2 > S3. Consider this move: Before: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5, T6}, CPU3 = empty → After: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5}, CPU3 = {T6} • This move is unlikely to be performed because: • We would introduce the fixed cost S3 • We would introduce the communication cost c(5,6)

  10. Example: CTAP • Suppose S2 > S3 and consider the same move: Before: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5, T6}, CPU3 = empty → After: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5}, CPU3 = {T6} • But at the next move, we could move T5 too, saving S2 • We want to learn rules like: “if there is an empty CPU y that can accommodate the tasks assigned to CPU x, and y has a smaller fixed cost (a condition on the local optimum), move the tasks from x to y (a condition on the pair of local optima)”

  11. How to learn the constraints • A rule Rt is a pair of conditions (Ft, Gt) • Rt is applied at a local optimum L and has the following form: “If L satisfies condition Ft, then go towards a solution S such that (L, S) satisfies condition Gt”
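In code, a rule is just a pair of predicates. A minimal sketch (the class name and fields are assumptions for illustration, not the paper's notation):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    """A rule R_t = (F_t, G_t): F_t tests a local optimum L,
    G_t tests a pair of solutions (L, S)."""
    F: Callable[[Any], bool]        # condition on the local optimum L
    G: Callable[[Any, Any], bool]   # condition on the pair (L, S)

    def applies_to(self, L: Any) -> bool:
        return self.F(L)  # "if L satisfies condition F_t ..."
```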

  12. How to learn the constraints • Finding a rule Rt means finding a subset of the initial local optima such that: • they can be distinguished from the other local optima through condition Ft • whenever Ft is satisfied, there are ‘-’ pairs satisfying condition Gt [Figure: the local optima A1–A4 from before, with Ft = 1 marked on the selected subset and Gt = 1 marked on the corresponding ‘-’ pairs.]

  13. Mathematical model for one rule • Constraints on Ft and Gt: the values Ft (or Gt) = 1 and Ft (or Gt) = 0 must be the outputs of a binary classifier [Figure: the same diagram of local optima, with Ft = 1 and Gt = 1 marked on the selected subset and pairs.]
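The slide only requires that Ft and Gt be realizable as binary classifiers; how they are trained is an implementation choice. As an assumed, illustrative choice (not necessarily the authors'), a decision tree over hand-crafted solution features could play the role of Ft:

```python
# Illustrative only: F_t as a binary classifier over solution features.
# The feature set and training data below are invented for the sketch.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# One feature vector per local optimum A_i (e.g., for CTAP: number of
# used CPUs, smallest fixed cost among empty CPUs, ...).
X = np.array([[3, 4.0], [2, 7.5], [3, 3.0], [2, 9.0]])
# y_i = 1 if A_i is in the selected subset (a '-' pair satisfying G_t
# exists nearby), 0 otherwise.
y = np.array([1, 0, 1, 0])

F_t = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(F_t.predict([[3, 3.5]]))  # outputs F_t = 1 or 0 for a new local optimum
```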

  14. Use the constraints to enhance the search

  15. Output of learning • “If L satisfies condition Ft, then go towards a solution S such that (L, S) satisfies condition Gt”

  16. Enforcing the constraints • When a local optimum L satisfies Ft: • ESCAPE tabu search: set the objective function to max Gt(L, S); while Gt(L, S) ≤ 0 and step < maxSteps, keep searching • If maxSteps is reached without satisfying Gt: unsuccessful escape • Otherwise, once Gt(L, S) > 0: EXPLORATION tabu search: restore the real objective function and keep the constraint Gt(L, S) > 0; while Value(S) ≥ Value(L), keep searching • When Value(S) < Value(L): successful escape (see the sketch below)
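Read as pseudocode, the flowchart maps to a two-phase loop. A hedged Python sketch: `tabu_step` is a hypothetical helper that performs one tabu-search move minimizing the given objective over neighbors passing `feasible`, and `rule.G_value(L, S)` is an assumed numeric version of Gt (positive exactly when Gt is satisfied).

```python
def smart_escape(L, rule, value, tabu_step, max_steps):
    """Escape from a local optimum L that satisfies F_t."""
    # ESCAPE phase: the objective becomes "maximize G_t(L, S)".
    S, step = L, 0
    while rule.G_value(L, S) <= 0:
        if step == max_steps:
            return None  # can't satisfy G_t after max_steps: unsuccessful escape
        S = tabu_step(S, objective=lambda x: -rule.G_value(L, x))
        step += 1
    # EXPLORATION phase: real objective, with the constraint G_t(L, S) > 0
    # kept active (the sketch omits any cap on this phase, as does the slide).
    while value(S) >= value(L):
        S = tabu_step(S, objective=value,
                      feasible=lambda x: rule.G_value(L, x) > 0)
    return S  # value(S) < value(L): successful escape
```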

  17. Experiments and results

  18. Problem Set • 108 instances of CTAP – A. Lusa and C. N. Potts (2008)

  19. Experiment 1 – better LO? • “Which is more effective at escaping from a local optimum: tabu search or the smart escape?” • For 30 repetitions: • Find 1 rule (F1, G1) using 30% of the local optima • For each local optimum L among the remaining 70%: • Try to escape from L using: • the smart constraints • a simple tabu search • Then see in which “valley” we end up by running a local search

  20. Experiment 1 – Results Accuracy = 81.58%

  21. Experiment 1 – Results [results chart]

  22. Experiment 1 – Results • Which one yields the greatest improvement from the initial local optimum to the final local optimum?

  23. Experiment 2 – better search? • Compare the following: • GRASP + tabu search (max non-improving moves = m) • GRASP + smart search (with 1 or with 2 rules) • Whenever a local optimum is found: • If a suitable rule is available, apply the corresponding constraint with maxSteps = m • Otherwise, run a tabu search (max moves = m) • Run 50 times on 72 instances, and record the best solution found

  24. Experiment 2 – Results [Tables: comparison to tabu search; new best known solutions.]

  25. Additional experiments on the Matrix Bandwidth Minimization Problem • This problem is equivalent to labeling the vertices of an undirected graph so that the maximum difference between the labels of any pair of adjacent vertices is minimized (see the sketch below) • We considered the data set of Martí et al. (2001), which consists of 126 instances • A simple tabu search performs well on 115 instances and poorly on 11 • We used 30 of the easy instances as the training set and the 11 hard ones as the test set • For 50 repetitions, for each test instance: • Generate a random solution • Run a regular Tabu Search • Run a Smart Tabu Search (Data Mining Driven Tabu Search – DMDTS) • Record the number of wins, ties, and losses
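For concreteness, the objective of the graph-labeling formulation fits in a few lines of Python (a sketch, with assumed `labels` and `edges` representations):

```python
def bandwidth(labels, edges):
    """Max |label(u) - label(v)| over adjacent vertices u, v: the quantity
    the Matrix Bandwidth Minimization Problem minimizes."""
    return max(abs(labels[u] - labels[v]) for u, v in edges)

# Example: a path 0-1-2 labeled in order has bandwidth 1;
# swapping the middle and end labels raises it to 2.
assert bandwidth({0: 1, 1: 2, 2: 3}, [(0, 1), (1, 2)]) == 1
assert bandwidth({0: 1, 1: 3, 2: 2}, [(0, 1), (1, 2)]) == 2
```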

  26. Additional experiments on the Matrix Bandwidth Minimization Problem • 7 wins, 1 loss, 3 ties [Chart: wins minus losses per test instance.]

  27. Conclusions

  28. Conclusions • We showed that: • It is possible to learn offline from other instances of the same class of problems • It is possible to effectively exploit this knowledge by dynamically introducing guiding constraints during a tabu search

  29. Research Opportunities • Improve the learning part (heuristic algorithm) • Improve constraint enforcement • Apply this idea to other neighborhood searches • Explore the potential of this idea on other problems

  30. Thank you for your attention
