Presentation Transcript


  1. Iterative Improvement Techniques for Solving Tight Constraint Satisfaction Problems Hui Zou Constraint Systems Laboratory Department of Computer Science and Engineering University of Nebraska-Lincoln November, 2003 Supported by NSF grant #EPS-0091900

  2. Outline • Motivation; related work; questions • Approach: goal, context & model • Solvers designed & evaluated • Local search • Multi-agent-based search • Further improvements • Directions for future research

  3. Approach Use a small but challenging real-world application to • Develop new search techniques • Compare & characterize the behavior of various search strategies • Identify shortcomings specific to a given search strategy & propose improvements Long-term goal: • Provide a robust portfolio of search algorithms

  4. Modeling - GTA Graduate Teaching Assistants (GTA) Assignment problem: in a semester, given • a set of courses, • a set of graduate teaching assistants, and • a set of constraints that specify allowable assignments, find a consistent assignment of GTAs to courses Model the GTA Assignment problem as a Constraint Satisfaction Problem (CSP); a minimal sketch follows below In practice, this problem is tight, even over-constrained The goal: ensure GTA support to as many courses as possible Detailed modeling in [Glaubius & Choueiry ECAI 02 WS on Modeling]
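To make the CSP view concrete, here is a minimal sketch in Python; the course names, GTAs, capacities, and loads are invented for illustration and are not the Lab's actual model.

```python
# Illustrative sketch of the GTA Assignment problem as a CSP (hypothetical data).
courses = ["CSCE101", "CSCE235", "CSCE310"]            # variables: one per course
gtas = {"alice": 1.0, "bob": 0.5, "carol": 1.0}        # GTA -> capacity (in course-load units)
load = {"CSCE101": 0.5, "CSCE235": 0.5, "CSCE310": 1.0}

# Domains: the GTAs allowed to teach each course (after unary constraints such as availability)
domains = {course: set(gtas) for course in courses}

def capacity_ok(assignment):
    """Global capacity constraint: no GTA carries more load than their capacity."""
    used = {}
    for course, gta in assignment.items():
        used[gta] = used.get(gta, 0.0) + load[course]
    return all(used[g] <= gtas[g] for g in used)
```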

  5. About the GTA problem - objectives • Optimization criteria • Maximize the number of courses covered • Maximize the geometric average of the assignments wrt the GTAs' preference values (between 0 and 5) • Problem • Constraints are hard and must be met • Maximal consistent partial-assignment problem • Not a MAX-CSP (which maximizes the number of constraints satisfied) • Context • Ability to handle both solvable & unsolvable problem instances [Freuder '93]
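As a sketch of the second criterion, the geometric average of preference values could be computed as below; `preference` is a hypothetical lookup keyed by (GTA, course) with values in [0, 5], not part of the original model description.

```python
from math import prod

def geometric_preference(assignment, preference):
    """Geometric mean of the GTAs' preference values (0..5) over the assigned courses.
    assignment maps course -> GTA; preference[(gta, course)] is an illustrative lookup."""
    values = [preference[(gta, course)] for course, gta in assignment.items()]
    return prod(values) ** (1.0 / len(values)) if values else 0.0
```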

  6. Types of search • Systematic search: starts with an empty assignment & expands it by instantiating one variable at a time; a move expands a partial solution; sound & complete • Iterative improvement search: starts with a complete assignment & improves it by making local changes; a move goes from state to state; neither sound nor complete Iterative improvement search is particularly effective for large-scale problems (a backtracking skeleton is sketched below)
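For contrast, here is a bare-bones sketch of the systematic side (chronological backtracking); `consistent` is an assumed callback that checks the current partial assignment. The iterative-improvement side is sketched after slide 10.

```python
def backtrack(variables, domains, consistent, assignment=None):
    """Systematic search sketch: extend a partial assignment one variable at a time,
    undoing (backtracking over) choices that lead to inconsistency."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return dict(assignment)                      # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            solution = backtrack(variables, domains, consistent, assignment)
            if solution is not None:
                return solution
        del assignment[var]
    return None                                      # no consistent extension exists
```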

  7. Focus Iterative improvement search strategies • Local Search (LS) • Heuristic hill-climbing • Multi-agent based search (ERA) • Extremely decentralized stochastic search • Systematic search • Backtrack search (BT) • Heuristic backtrack search [Glaubius & Choueiry 02] • (Randomized backtrack search [Guddeti & Choueiry 04])

  8. Questions addressed • Local search • Solve non-binary CSPs • Performance on solvable & unsolvable CSPs • Noise strategies to handle local optima • ERA • Performance comparison of BT, LS & ERA • Local optima in ERA? • Solvable vs. unsolvable problem instances

  9. Heuristic hill-climbing (LS) • Min-conflict heuristic for choosing the move • Adapted to non-binary CSPs • Uses constraint propagation to handle global constraints (i.e., capacity constraints) • Drawback: nugatory moves • Random walk to avoid local optima • Studied the effect of the noise probability on performance • Random restart to recover from local optima • Studied the effect of the number of restarts on performance The resulting strategy operates as a greedy stochastic search

  10. Local search - hill-climbing with min-conflict • Hill-climbing • Starts from a random state_i • Chooses state_j among all neighbors of state_i such that state_j is better than state_i • Min-conflict (MC) [Minton '92] • A heuristic to evaluate and choose the next state • Solved the one-million-queens problem in minutes • Tested only on binary constraints
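A compact sketch of the resulting strategy (min-conflict hill-climbing with random-walk noise, slides 9-13): `conflicts` is an assumed callback returning the number of violated constraints if `var` takes `value`, and p_walk = 0.02 follows the setting cited from [Bartak '98].

```python
import random

def min_conflicts_walk(variables, domains, conflicts, p_walk=0.02, max_moves=100_000):
    """Greedy stochastic search: repair a complete random assignment one variable at a time."""
    state = {v: random.choice(list(domains[v])) for v in variables}
    for _ in range(max_moves):
        in_conflict = [v for v in variables if conflicts(state, v, state[v]) > 0]
        if not in_conflict:
            return state                              # all constraints satisfied
        var = random.choice(in_conflict)
        if random.random() < p_walk:                  # random walk: helps escape local optima
            state[var] = random.choice(list(domains[var]))
        else:                                         # min-conflict move
            state[var] = min(domains[var], key=lambda val: conflicts(state, var, val))
    return state                                      # best-effort partial solution
```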

  11. Local search - Nugatory moves • Problem: when solving global constraints, local search gets stuck in nugatory moves. In the slide's example, the probability p of satisfying the constraint with a blind move is 3/3^3 ≈ 11%; if |D| = 30, p = 3/30^3 ≈ 0.0001 • Method: constraint propagation
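Reading the example as a constraint over three variables with 3 satisfying tuples out of |D|^3 combinations (an assumption about the figure, which is not reproduced here), the quoted probabilities check out:

```python
# Probability that a blind move lands on one of the 3 satisfying tuples (assumed example).
p_small = 3 / 3**3      # |D| = 3  -> 3/27    ~ 0.11 (about 11%)
p_large = 3 / 30**3     # |D| = 30 -> 3/27000 ~ 0.0001
print(f"p_small = {p_small:.3f}, p_large = {p_large:.6f}")
```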

  12. Local search - Empirical study • For short response times, LS finds better partial solutions than BT • Although theoretically complete, BT can thrash indefinitely [Guddeti & Choueiry 04] • LS behaves qualitatively similarly on solvable & unsolvable CSPs

  13. Local search - Random walk • Random walk: avoids local optima • With probability p, make a random-walk move; with probability 1 - p, make a min-conflict (MC) move • p = 0 is pure MC; p = 1 is pure random walk; a typical value is p = 0.02 [Bartak '98]

  14. Local search - Random walk • The noise probability p varies from 1% to 50% in increments of 1% • Evaluation criterion: number of constraint checks (CC) • When p ≤ 5%, MC's influence is predominant and the noise strategy is not effective enough

  15. Local search - Random restart • Restart the search from a new randomly selected state • The number of restarts varies from 50 to 500 in increments of 50 • Evaluated by the percentage of unassigned courses (SD: standard deviation) • The number of restarts does not significantly affect performance • On average, a value of 300 to 400 restarts is good enough (see the wrapper sketch below)
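A sketch of the restart wrapper: rerun the search from fresh random states and keep the best result. `run_search` and `quality` are placeholders (e.g., the min-conflict routine above and the fraction of assigned courses); they are not names from the original work.

```python
def with_restarts(run_search, quality, n_restarts=350):
    """Random-restart wrapper (slide 15): 300-400 restarts was enough on average here."""
    best = None
    for _ in range(n_restarts):
        candidate = run_search()                      # fresh random initial state inside
        if best is None or quality(candidate) > quality(best):
            best = candidate
    return best
```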

  16. Local search - Conclusions • Binary vs. non-binary: constraint propagation to handle global constraints • Local search (LS) vs. systematic search (BT): LS quickly finds a good partial solution; LS shows monotonic improvement, quick stabilization & one-time reparation • Solvable vs. unsolvable: similar behavior in both cases; good performance, but local optima • Handling local optima: noise strategies (random walk and random restart); the setting of the parameters is likely problem-dependent

  17. Background - MAS • Multi-agent system (MAS): several agents interact and work together in order to achieve a set of goals • Agents: autonomous (perceive & act), goal-directed, can communicate • Interaction protocols: govern communications among agents • Environment: where agents live & act, sensing and acting on it

  18. ERA - ERA in general ERA [Liu & al. AIJ 2002] • Environment, Reactive rules, and Agents • A multi-agent formulation for solving a general CSP • Transitions between states when agents move It can be viewed as an extension of local search, but they are different: • Local search: only one evaluation value (state cost) for the whole state; one global goal, central control • ERA: each agent has its own evaluation value (position value) and its own local goal; no central control

  19. ERA - ERA's components ERA = Environment + Reactive rules + Agents Environment: a two-dimensional array • Each agent corresponds to a variable • Each cell stores two values (domain value, violation value) • Each position corresponds to an assignment • An agent in a zero position ⇒ all its constraints are satisfied

  20. ERA - ERA’s components ERA=Environment + Reactive rules + Agents Reactive rules: • Least-move: choose a position with the min. violation value • Better-move: choose a position with a smaller violation value • Random-move: randomly choose a position Combinations of these basic rules form different behaviors.

  21. ERA - ERA's components ERA = Environment + Reactive rules + Agents Agents: each agent moves within its own row At each state, an agent chooses a position to move to following the reactive rules. The agents keep moving until all of them have reached a zero position, or a certain time period has elapsed. Possible outcomes: all agents find a zero position, or only some agents do.

  22. ERA - Example ERA on the 4-queens problem: the trace alternates evaluation, move, and update steps per agent (Init & Update; Eval Q1; Move Q1 & Update; Eval Q2; Eval Q3; Move Q3 & Update; Eval Q4; Move Q4 & Update).
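A minimal ERA-style sketch on n-queens (one agent per row, violation values computed against the other agents' current positions, least-move with an occasional random-move). This is an illustration of the idea on slides 18-22 under assumed parameters, not the FrBLR behavior evaluated later.

```python
import random

def era_queens(n=4, max_sweeps=1000, p_random=0.05):
    """Each agent (row) repeatedly moves within its own row to a position with a low
    violation value, until every agent reaches a zero position or the sweep budget ends."""
    pos = {row: random.randrange(n) for row in range(n)}          # agent -> column

    def violation(row, col):
        """Number of other agents attacking position (row, col)."""
        return sum(1 for r, c in pos.items()
                   if r != row and (c == col or abs(c - col) == abs(r - row)))

    for _ in range(max_sweeps):
        if all(violation(r, c) == 0 for r, c in pos.items()):
            return pos                                            # all agents in zero positions
        for row in range(n):                                      # agents move in their own rows
            if random.random() < p_random:                        # random-move
                pos[row] = random.randrange(n)
            else:                                                 # least-move
                pos[row] = min(range(n), key=lambda col: violation(row, col))
    return pos                                                    # some agents may still be in conflict
```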

  23. ERA - Empirical study • Tests on the GTA problem: testing the behavior of ERA • Performance comparison: • ERA: FrBLR • LS: hill-climbing, min-conflict & random walk • BT: B&B-like, many orderings (heuristic, random) • Observing the behavior of individual agents • The deadlock phenomenon • 8 instances of the GTA problem

  24. ERA - Behavior of ERA • Observations: • BLR and FrBLR oscillate • LR quickly reaches a stable value • FrBLR achieves the largest number of agents in zero position • FrBLR has the best ability to find a partial solution

  25. ERA - Performance comparison [Table per instance (original/boosted): solvable?, # GTAs, # courses, total capacity (C), total load (L), ratio = C/L; measures reported for ERA, LS & BT: unassigned courses, solution quality, unused GTAs, CC (×10^8), available resource] Observations: • Only ERA can find a full solution on all solvable instances. • ERA leaves more unused GTAs than LS and BT.

  26. ERA - Performance comparison • Observation: • When the ratio C/L ≥ 1, ERA outperforms BT & LS • When C/L < 1, ERA gives the worst solution • Observation: on average, LS performs far fewer constraint checks

  27. ERA - Performance comparison • Observation: solvable vs. unsolvable instances • ERA: stable on solvable instances, oscillates on unsolvable ones • LS and BT behave the same way on both [Plots: ERA performance on solvable instances; ERA performance on unsolvable instances]

  28. ERA - Behavior of individual agents • Instances: solvable vs. unsolvable • Motion of agents: variable, stable, or constant

  29. ERA - Deadlock Observation: ERA is not able to avoid deadlocks and yields a degradation of the solution on over-constrained CSPs. • Each circle corresponds to a given GTA (position). • Each square represents an agent. • A blank square indicates that the agent is on a zero position. • Squares of the same color indicate agents involved in a deadlock.

  30. ERA - Conclusions [Table summarizing advantages (+) and shortcomings (–) of ERA]

  31. Improving ERA • Mixing behaviors of agents • Adding global control • New search hybrids: • ERA & h-BT • ERA & LS • Conflict resolution strategies for ERA

  32. Improving ERA - Extensions of ERA • ERA with mixed behaviors • Assign a random behavior to each agent before the search • Assign behaviors randomly during the search • Neither variant solves the deadlock problem • Enhancing ERA with global control • Do not accept a move that deteriorates the global goal (sketched below) • Leads to 'local search'-like behavior [Plots: solvable vs. unsolvable instances]
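One possible reading of the global-control extension, sketched against the n-queens example above (a hypothetical helper, not the Lab's implementation): tentatively apply an agent's move and keep it only if the number of agents in a zero position does not drop.

```python
def guarded_move(pos, violation, row, candidate_col):
    """Reject any move that deteriorates the global goal, measured here as the
    number of agents currently in a zero position (illustrative sketch of slide 32)."""
    def zero_count():
        return sum(1 for r, c in pos.items() if violation(r, c) == 0)
    before = zero_count()
    old_col, pos[row] = pos[row], candidate_col       # tentative move
    if zero_count() < before:                         # global goal got worse
        pos[row] = old_col                            # undo: reject the move
        return False
    return True                                       # accept the move
```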

  33. Improving ERA - Extensions of ERA (cont'd) • ERA with hybridization • Fix a partial assignment according to a previous solution • Combine with another search approach, such as LS or BT • The quality of solutions improves • ERA with conflict resolution • Add dummy resources • Finds a complete solution when LS and BT fail • After removing the dummy assignments, solutions are still better

  34. Directions for future research • Enhance ERA to handle optimization • Conduct thorough & formal empirical evaluations • Include other search techniques in comparisons • BT search: Randomized, credit-based • Other local repair: squeaky-wheel method • Market-based techniques, etc. • Validate conclusions on other CSPs • random instances, real-world problems • Design & evaluate new hybrid search strategies • Relate problem tightness to backbone

  35. Acknowledgements Dr. F. Fred Choobineh Dr. Berthe Y. Choueiry (advisor) Dr. Hong Jiang Dr. Peter Revesz Members of the Constraint Systems Laboratory My friends in Lincoln My parents and my wife

  36. Questions
