CC+ Swinburn Miranda – Computer Science William Moseson – Engineering Physics Ningchuan Wan – Computer Science
Introduction • Goal: To develop an AI that plays Chinese checkers at a high level. • Developed an algorithm that makes intelligent moves based on heuristic-based search and learning. • Motivated by the experience to be gained and the desire to win.
Method – Heuristic • Distance Max • Total distance of our pieces from their starting point • Penalizes stragglers • Penalizes pieces outside a certain width threshold
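The "Distance Max" heuristic above can be sketched as a single scoring function. This is an illustrative reconstruction: the board representation (pieces as `(row, col)` tuples advancing toward higher rows), the goal row, the width threshold, and the penalty weights are all assumptions, not the project's actual values.

```python
# Sketch of the distance-max heuristic. All constants below are
# illustrative assumptions, not the values used by CC+.
GOAL_ROW = 16          # assumed target row for our pieces
WIDTH_LIMIT = 2        # assumed column distance defining the width threshold
CENTER_COL = 6         # assumed center lane of the board
STRAGGLER_PENALTY = 3.0
WIDTH_PENALTY = 1.5

def distance_max(pieces):
    """Score a set of pieces: reward total advancement, penalize the
    straggler and pieces that wander outside the central lane."""
    total_advance = sum(row for row, _ in pieces)
    # The straggler is the piece farthest from the goal row.
    straggler_gap = GOAL_ROW - min(row for row, _ in pieces)
    # Count pieces outside the width threshold around the center column.
    wide = sum(1 for _, col in pieces if abs(col - CENTER_COL) > WIDTH_LIMIT)
    return total_advance - STRAGGLER_PENALTY * straggler_gap - WIDTH_PENALTY * wide
```

A higher score means a more advanced, more compact formation; the straggler term keeps the rearmost piece from being left behind.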
Method – Strategy Adjustment • Adjust the search type being used based on the point in the game: Opening, Midgame, Endgame.
Method – Strategy Adjustment • Opening • BFS is applied to advance the maximum number of pieces as far as possible • Midgame • BFS for our pieces as well as the opponent's • Enables blocking and long jump moves • Endgame • Find the shortest path to the goal
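The phase-based strategy selection above can be sketched as a simple dispatcher. The phase boundaries (move counts) and the strategy names are assumptions for illustration; the slides only state that the search type changes per phase.

```python
# Illustrative sketch of phase-based search selection.
# Phase thresholds are assumed, not taken from the project.

def game_phase(move_number):
    """Classify the game phase from the move count (thresholds assumed)."""
    if move_number < 10:
        return "opening"
    if move_number < 40:
        return "midgame"
    return "endgame"

def choose_search(move_number):
    """Pick a search strategy for the current phase."""
    phase = game_phase(move_number)
    if phase == "opening":
        return "bfs_self"        # BFS to push as many pieces forward as possible
    if phase == "midgame":
        return "bfs_both"        # BFS for both players: enables blocking & long jumps
    return "shortest_path"       # endgame: shortest path into the goal area
```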
Method – Gray Piece Placement • Block an opponent’s ladder • Block an opponent’s chain • Block a potentially large jump • Help your pieces at the end of the game
Method – Learning • Game logs are used for learning • A parser converts the game log into a feature vector • Feature vector: <y ∈ {1, -1}> <Current_BoardPositionOfPiece>:<{0,1,2,3}> … <Next_BoardPositionOfPiece>:<{0,1,2,3}> … • The feature vector represents a transition from the current state to the next • y is assigned based on whether the player's transitions led to a final win or not • SVMlight is used for training and prediction
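The parser step above could be sketched as follows: encode a (current, next) board pair into SVMlight's sparse `<label> <index>:<value>` line format. The board size and the exact indexing scheme are assumptions for illustration; the cell values {0, 1, 2, 3} follow the slide.

```python
# Sketch of converting a state transition into one SVMlight training line.
# BOARD_CELLS and the feature indexing are illustrative assumptions.
BOARD_CELLS = 121  # cells on a standard Chinese checkers board (assumed)

def to_svmlight(label, current, nxt):
    """Encode a (current, next) board pair as one SVMlight line.
    label is +1 if the player's transitions led to a win, else -1.
    current/nxt are sequences of cell values in {0, 1, 2, 3}."""
    feats = list(current) + list(nxt)
    # SVMlight uses 1-based sparse indices; zero-valued features are omitted.
    pairs = ["%d:%d" % (i + 1, v) for i, v in enumerate(feats) if v != 0]
    return "%+d %s" % (label, " ".join(pairs))
```

One such line per recorded transition yields a training file that SVMlight's `svm_learn` can consume directly.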
System Architecture (Proposed) • Decision Module • I/O Module • Communication Server • Knowledge Base • Log • Learning Engine
System Architecture • Decision Module • Evaluates a board state • Outputs a move • Learning Engine • Influences the Decision Module based on historical results • Knowledge Base • Contains information obtained from previous games • Data Sources • Training data
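The module responsibilities above could be wired together as in this minimal sketch. Every class and method name here is an illustrative assumption; the real system's interfaces are not given in the slides.

```python
# Minimal sketch of the proposed architecture; all names are hypothetical.

class KnowledgeBase:
    """Holds information obtained from previous games."""
    def __init__(self):
        self.games = []

    def add_game(self, log):
        self.games.append(log)

class LearningEngine:
    """Derives weights from the knowledge base to influence decisions."""
    def __init__(self, kb):
        self.kb = kb

    def weights(self):
        # Placeholder: a real engine would train on kb.games (e.g. via SVMlight).
        return {"advance": 1.0}

class DecisionModule:
    """Evaluates board states and outputs a move."""
    def __init__(self, engine):
        self.engine = engine

    def choose_move(self, board, legal_moves):
        w = self.engine.weights()
        # Placeholder scoring: prefer the move with the largest forward advance.
        return max(legal_moves, key=lambda m: w["advance"] * m["advance"])
```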
Experimental Evaluation: Methodology • Evaluation of our software is based on its performance • Variables: • Heuristic used • Search Depth • Heuristic weighting • Gray piece placement • Search type
Experimental Evaluation: Results • Tournament Results: • Consistently finishes in 5th, 6th, or 7th place. • Wins 100% of games against the greedy strategy.
Future Work • Learning • Training data size • Heuristic • Include more variables • Gray Pieces • More intelligent placement • More analysis of when and where to place
Conclusions • Developed an AI that outperforms a greedy strategy • Gained a better understanding of AI and machine learning
The Team • Swinburn Miranda – Learning Module • Ningchuan Wan – Heuristic Development • Will Moseson – Gray Piece Placement and testing