Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions
ERA: Environment, Rules, Agents [Liu et al, AIJ 02] • Environment is an n × a board • Each variable is an agent • Each position on the board is a value of a domain • An agent moves within a row of the board • Each position records the number of violations incurred given the positions the other agents occupy • Agents try to occupy positions where no constraints are broken (zero positions) • Agents move according to reactive rules
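The following is a minimal sketch, in Python, of the ERA environment described above. The names (ERAEnvironment, violation_fn, etc.) are illustrative assumptions and not the GTAAP implementation.

```python
import random

class ERAEnvironment:
    """Sketch of an ERA board: n agents (variables), each moving along a row
    of `a` candidate values; each position can be scored by the number of
    constraint violations it would incur given the other agents' positions."""

    def __init__(self, n_agents, domain_size, violation_fn):
        self.n = n_agents
        self.a = domain_size
        # violation_fn(agent, value, positions) -> number of violated constraints
        # (assumed hook supplied by the problem model)
        self.violation_fn = violation_fn
        # every agent starts at a random value in its row
        self.positions = [random.randrange(domain_size) for _ in range(n_agents)]

    def violations(self, agent, value):
        """Violations `agent` would cause by taking `value`, given the others."""
        return self.violation_fn(agent, value, self.positions)

    def row(self, agent):
        """Violation count for every value in the agent's row."""
        return [self.violations(agent, v) for v in range(self.a)]

    def zero_positions(self, agent):
        """Values where the agent breaks no constraints."""
        return [v for v, viol in enumerate(self.row(agent)) if viol == 0]
```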
Reactive rules [Liu et al, AIJ 02] • Least-move: choose a position with the minimum violation value • Better-move: choose a position with a smaller violation value • Random-move: randomly choose a position • Combinations of these basic rules form different behaviors.
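A hedged sketch of the three rules, building on the ERAEnvironment sketch above. Treating better-move as a single random probe that is accepted only if it has fewer violations is an assumption for illustration.

```python
import random

def least_move(env, agent):
    """Move to a position with the minimum violation value in the row."""
    row = env.row(agent)
    best = min(row)
    return random.choice([v for v, viol in enumerate(row) if viol == best])

def better_move(env, agent):
    """Probe one random position; move there only if it has fewer violations."""
    current = env.positions[agent]
    candidate = random.randrange(env.a)
    if env.violations(agent, candidate) < env.violations(agent, current):
        return candidate
    return current

def random_move(env, agent):
    """Move to a uniformly random position, regardless of violations."""
    return random.randrange(env.a)
```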
Big picture • Agents do not communicate but share a common context • Agents keep kicking each other out of their comfortable positions until everyone is happy • Characterization [Hui Zou, 2003]: • Amazingly effective in solving very tight but solvable instances • Unstable in over-constrained cases: agents keep kicking each other out (livelock) • Livelocks may be exploited to identify bottlenecks
Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions
Implementation Details • Use of rBLR as the main behavior (random-move as a supporting behavior) • Random-move occurs about 2% of the time; rBLR is applied the remaining 98% of the time • The “r” in rBLR is set to 3 • Termination: 150 time steps
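A sketch of the main loop under these settings, reusing the rule sketches above. Reading rBLR as “up to r better-move attempts, falling back to a least-move” is an assumption made for illustration; the actual GTAAP behavior may differ in detail.

```python
import random

RANDOM_PROB = 0.02   # ~2% random-move, remaining 98% rBLR
R = 3                # the "r" in rBLR
MAX_STEPS = 150      # termination after 150 time steps

def run_era(env):
    for _ in range(MAX_STEPS):
        for agent in range(env.n):
            if env.violations(agent, env.positions[agent]) == 0:
                continue  # agent already sits on a zero position
            if random.random() < RANDOM_PROB:
                env.positions[agent] = random_move(env, agent)
                continue
            # rBLR (assumed reading): try better-move up to R times,
            # fall back to least-move if none of the probes improved
            moved = False
            for _ in range(R):
                new = better_move(env, agent)
                if new != env.positions[agent]:
                    env.positions[agent] = new
                    moved = True
                    break
            if not moved:
                env.positions[agent] = least_move(env, agent)
        if all(env.violations(a, env.positions[a]) == 0 for a in range(env.n)):
            break  # every agent is on a zero position: full solution found
    return env.positions
```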
Additions Made to the Basic Algorithm • Optimization criterion: the assigned GTA’s preference for the course • Each agent considers a move better if: • The new position has fewer constraint violations than the old one, or • The new position has the same number of constraint violations, but its GTA has a higher preference ranking for this course than the current position’s GTA • This substantially improved results in practice, by forcing more movement and leading to better values being selected overall.
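A sketch of the modified comparison described above: fewer violations wins, and ties are broken by the GTA’s preference for the course. The preference(agent, value) hook is a hypothetical helper standing in for the GTAAP preference data.

```python
def is_better(env, agent, new_value, old_value, preference):
    """Return True if new_value is a better move than old_value for agent.

    `preference(agent, value)` is an assumed hook returning the preference
    ranking of the GTA at `value` for the agent's course (higher is better)."""
    new_viol = env.violations(agent, new_value)
    old_viol = env.violations(agent, old_value)
    if new_viol < old_viol:
        return True                       # strictly fewer violations
    if new_viol == old_viol:
        # same violations: prefer the GTA who ranks this course higher
        return preference(agent, new_value) > preference(agent, old_value)
    return False
```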
Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions
Results • Fall 2007
Results • Spring 2007
Results • Fall 2004
Future Research • Testing on upcoming semesters to see how well this aids the assignment process in the real world • Allowing low-priority courses to remain unassigned • Looking into other local search techniques (genetic algorithms, etc.) • Creating hybrids of local searches • Investigating ways to mimic the human process (greedy, yet still making a few “back changes”)
Conclusions • As the testing confirmed, this approach seems like it will be a great aid in the assignment process • Its results are statistically comparable to or better than the human-generated solutions, though this still needs to be confirmed in real-world use • This approach is well suited to situations where a decent solution is needed in a relatively small amount of time