
Different Local Search Algorithms in STAGE for Solving Bin Packing Problem



Presentation Transcript


  1. Different Local Search Algorithms in STAGE for Solving Bin Packing Problem Gholamreza Haffari Sharif University of Technology haffari@ce.sharif.edu

  2. Overview • Combinatorial Optimization Problems and State Spaces • STAGE Algorithm • Local Search Algorithms • Results • Conclusion and Future works

  3. Optimization Problems • Objective function: F(x1, x2, …, xn). Find the vector X = (x1, x2, …, xn) which minimizes (or maximizes) F • Constraints: g1(X) ≤ 0, g2(X) ≤ 0, …, gm(X) ≤ 0

  4. Combinatorial Optimization Problems (COP) • A special kind of optimization problem in which the variables are discrete • Most COPs are NP-Hard, i.e., no polynomial-time algorithm is known for solving them

  5. Satisfiability • SAT: Given a formula f(x1, x2, …, xn) in propositional calculus, is there an assignment to its variables that makes it true? • The problem is NP-Complete (Cook 1971)

  6. Bin Packing Problem (BPP) • Given a list (a1, a2, …, an) of items, each with a size s(ai) > 0, and a bin capacity C, what is the minimum number of bins needed to pack all items? • The problem is NP-Complete (Garey and Johnson 1979)

  7. An Example of BPP • Object list: a1, a2, …, an; every bin bj has capacity C • Objective: minimize m, the number of bins used, subject to the sum of s(ai) over the items placed in bin bj being at most C, for 1 ≤ j ≤ m [Figure: items a1…a4 packed into bins b1…b4]

  8. Definition of State in BPP • A particular permutation of the items in the object list is called a state; a greedy algorithm then packs the items into bins in that order (a sketch follows this slide). [Figure: items a1…a4 packed greedily into bins b1…b4]
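
The greedy decoding step is not spelled out on the slide; the following Python sketch shows one common choice, a first-fit rule that reads the permutation left to right. The item sizes, capacity, and function names are illustrative assumptions, not taken from the presentation.

    def greedy_pack(permutation, sizes, capacity):
        """Pack items in the given order: each item goes into the first bin
        that still has room; a new bin is opened when none fits."""
        bins, loads = [], []              # bins of item names and their loads
        for item in permutation:
            size = sizes[item]
            for j, load in enumerate(loads):
                if load + size <= capacity:
                    bins[j].append(item)
                    loads[j] += size
                    break
            else:                         # no existing bin fits: open a new one
                bins.append([item])
                loads.append(size)
        return bins

    sizes = {"a1": 4, "a2": 7, "a3": 3, "a4": 5}        # hypothetical item sizes
    state = ["a1", "a2", "a3", "a4"]                    # a permutation, i.e. a state
    print(len(greedy_pack(state, sizes, capacity=10)))  # objective: bins used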

  9. State Space of BPP [Figure: the states are permutations of the items, e.g. (a1, a2, a3, a4), (a1, a2, a4, a3), (a1, a4, a2, a3), (a2, a4, a3, a1), …]

  10. A Local Search Algorithm • 1) s0 : a random start state • 2) for i = 0 to +∞: generate a new solution set S from the current solution si; decide whether si+1 = s′ ∈ S or si+1 = si; if a stopping condition is satisfied, return the best solution found
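
As a rough Python rendering of this loop (not code from the slides), the functions cost, neighbors, and accept are placeholders that the concrete algorithms on the following slides would supply.

    def local_search(s0, cost, neighbors, accept, max_steps=10_000):
        """Generic local search: from the current solution s_i, generate a
        candidate set S, decide whether to move to some s' in S or stay,
        and stop when the step budget (the stopping condition) is used up."""
        current = best = s0
        for _ in range(max_steps):
            S = neighbors(current)                  # candidate solutions
            if not S:
                break
            proposal = min(S, key=cost)             # one possible selection rule
            if accept(cost(proposal), cost(current)):
                current = proposal                  # s_{i+1} = s'
            if cost(current) < cost(best):
                best = current
        return best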

  11. Local Optimum Solutions • The quality of the local optimum resulting from a local search process depends on the starting state.

  12. Multi-Start LSA • Runs the base local search algorithm from different starting states and returns the best result found (see the sketch after this slide). • Is it possible to choose a promising new starting state?
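
A minimal sketch of the multi-start wrapper, assuming the local_search helper from the previous sketch and a caller-supplied random_state generator (both hypothetical names).

    def multi_start(random_state, cost, run_local_search, restarts=20):
        """Run the base local search from several random starting states
        and return the best result found across all restarts."""
        best = None
        for _ in range(restarts):
            result = run_local_search(random_state())   # independent random start
            if best is None or cost(result) < cost(best):
                best = result
        return best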

  13. Other Features of a State • Other features of a state can help the search process. (Boyan 1998)

  14. Previous Experiences • There is a relationship among local optima of a COP, so previously found local optima can help to locate more promising start states.

  15. Core ideas • Using an Evaluation Function to predict the eventual outcome of doing a local search from a state. • The EF is a function of some features of a state. • The EF is retrained gradually.

  16. STAGE Algorithm • Execution phase: uses the Evaluation Function to locate a good start state, then does local search. • Learning phase: retrains the EF on the newly generated search trajectory. (A sketch of both phases follows this slide.)
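
A simplified Python sketch of the alternation between the two phases. The least-squares linear EF over hand-coded state features, and the assumption that hill_climb returns its full trajectory, are illustrative choices rather than the exact formulation in Boyan (1998).

    import numpy as np

    def stage(start, cost, features, hill_climb, iterations=50):
        """Simplified STAGE: alternate local search on the true objective
        (execution phase) with retraining a linear EF on the visited states
        (learning phase), then hill-climb on the EF to pick a new start."""
        X, y = [], []                       # training pairs: features -> outcome
        state, best = start, start
        for _ in range(iterations):
            # Execution phase: local search on the real objective function.
            trajectory = hill_climb(state, cost)        # list of visited states
            outcome = cost(trajectory[-1])              # eventual local optimum
            best = min(best, trajectory[-1], key=cost)

            # Learning phase: label every visited state with that outcome
            # and refit the evaluation function (here: least squares).
            for s in trajectory:
                X.append(features(s))
                y.append(outcome)
            w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

            # Use the learned EF to locate a promising new start state.
            ef = lambda s, w=w: float(np.dot(w, features(s)))
            state = hill_climb(trajectory[-1], ef)[-1]
        return best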

  17. Evaluation Function • Applying the EF to a state maps its features to a prediction of the eventual local-search outcome. • The EF can be used by another local search algorithm to find a good new starting point. [Diagram: state features → EF → prediction]

  18. Diagram of STAGE (Boyan 98)

  19. Analysis of STAGE • What is the effect of using different local search algorithms? • Local search algorithms: • Best Improvement Hill Climbing (BIHC) • First Improvement Hill Climbing (FIHC) • Stochastic Hill Climbing (STHC)

  20. Best Improvement HC • Generates all of the neighboring states, then selects the best one. [Figure: every neighbor is evaluated and the best is chosen]

  21. First Improvement HC • Generates neighboring states systematically, then selects the first improving one. [Figure: neighbors are scanned in order until an improvement is found]

  22. Stochastic HC • Stochastically generates some of the neighboring states, then selects the best one. • The size of the sampled neighbor set is called PATIENCE. (A sketch contrasting the three variants follows this slide.)
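
The three variants differ only in how the next state is chosen. The Python sketch below contrasts them on permutation states, using a hypothetical swap neighborhood and the PATIENCE parameter from this slide.

    import random

    def swap_neighbors(state):
        """All states reachable by swapping two positions of the permutation."""
        for i in range(len(state)):
            for j in range(i + 1, len(state)):
                s = list(state)
                s[i], s[j] = s[j], s[i]
                yield s

    def bihc_step(state, cost):
        """Best Improvement: evaluate every neighbor, move to the best one."""
        return min(swap_neighbors(state), key=cost)

    def fihc_step(state, cost):
        """First Improvement: scan neighbors systematically, take the first
        one that improves on the current state."""
        current = cost(state)
        for s in swap_neighbors(state):
            if cost(s) < current:
                return s
        return state

    def sthc_step(state, cost, patience=350):
        """Stochastic HC: sample PATIENCE random neighbors, keep the best."""
        def random_neighbor():
            i, j = random.sample(range(len(state)), 2)
            s = list(state)
            s[i], s[j] = s[j], s[i]
            return s
        return min((random_neighbor() for _ in range(patience)), key=cost)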

  23. Different LSAs • Different LSAs for solving the U250_00 instance (from http://www.ms.ic.ac.uk/info.html) [Figure: performance comparison]

  24. Different LSAs, bounded steps

  25. Some Results • Comparing STHC1 and STHC2 (PATIENCE1 = 350, PATIENCE2 = 700): the higher the accuracy in choosing the next state, the better the quality of the final solution. • Comparing BIHC with the others: deeper steps yield higher-quality solutions faster.

  26. Different LSAs, bounded moves

  27. Some Results • Comparing STHC with the others: it is better to search the solution space randomly rather than systematically.

  28. Future Works • Using other learning structures in STAGE • Verifying these results on other problems (for example, Graph Coloring) • Using other LSAs, such as Simulated Annealing

  29. Questions
