Intelligent Asteroid Agents combining Pathfinding and Steering Algorithms
Carlos Barboza, Kenny Barron, Kevin Cherry, Tung Le, Daniel Lorio
About The Game
• Avoid enemy ships as they chase after you
• Every second alive adds 10 points
• Killing an enemy adds 100 points
• The player can wrap around the screen; enemies can't
• Enemies can only propel forward; the player can move forward and in reverse
Objective
• Maximize the agent's performance in a fully dynamic, multi-agent environment with limited knowledge of its surroundings
• Compare performance across different AI algorithms and parameter settings
• Performance is gauged by the score at the end of the game
Original Game: "Frenzy Survivor" (http://berfenfeldt.com/)
• User-controlled player ship
• Limited ship-following behavior
• Enemies are disorganized in their pursuit of the player ship
Environment
• Partially Observable – the agent's view is limited to set regions
• Strategic – the agent moves based on the locations of the enemies
• Episodic – each decision is based on the current perception of the enemies
• Dynamic – enemies are constantly moving and regenerating
• Discrete – the agent responds with a set action based on the perception of enemies in the viewed regions
• Multi-Agent – a steering algorithm is applied to the enemies, and pathfinding to the player
• PEAS: P – Score, E – Grid, A – Moves, S – Grid Regions
Adapting the Enemy
• Adapted the open-source OpenSteer project in C#
• An enemy determines its velocity and direction based on the player's location
• Steer-for-seek lets enemies converge on the player
• Steer-for-flee lets enemies move away from the player in any direction
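A minimal sketch of the seek and flee steering used by the enemies, assuming a simple 2D vehicle with a position, velocity, and maximum speed; the class and member names are illustrative, not the actual API of the OpenSteer port:

    using System.Numerics;

    // Illustrative enemy vehicle; member names are assumptions, not the OpenSteer port's API.
    class EnemyShip
    {
        public Vector2 Position;
        public Vector2 Velocity;
        public float MaxSpeed = 4.0f;

        // Seek: steer so the velocity points toward the player's current position.
        public Vector2 SteerForSeek(Vector2 playerPosition)
        {
            Vector2 desired = Vector2.Normalize(playerPosition - Position) * MaxSpeed;
            return desired - Velocity;   // steering force = desired velocity - current velocity
        }

        // Flee: the same idea with the direction reversed, pushing away from the player.
        public Vector2 SteerForFlee(Vector2 playerPosition)
        {
            Vector2 desired = Vector2.Normalize(Position - playerPosition) * MaxSpeed;
            return desired - Velocity;
        }
    }

Each frame the resulting steering force would be clamped and applied to the enemy's velocity, which is what lets the group converge on (or scatter away from) the player.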
Adapting the Player
• Loosely based on A* pathfinding
• Combines a heuristic, an actual cost, and a utility function to quantify each move choice
• The agent chooses the move with the maximum value
Algorithm Evolution
• 1st: a simple reflex agent; the environment was not partially observable, movements and actions were random, and the agent was not rational
• 2nd: the environment became partially observable; the agent was still a simple reflex agent and was partially rational
• 3rd: a goal-based agent in a partially observable environment, partially rational
Heuristic Function (h)
• Creates a grid to discretize the game world
• Each grid cell has a bitmask that holds information on the cell's contents:
  • Whether an enemy is in the cell
  • Whether the cell is part of a region
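One possible shape for that bitmask, assuming a flags enum over the two pieces of information listed above (the type and flag names are illustrative):

    using System;

    // Illustrative per-cell bitmask; flag names are assumptions.
    [Flags]
    enum CellContents : byte
    {
        Empty    = 0,
        HasEnemy = 1 << 0,   // an enemy ship currently occupies this cell
        InRegion = 1 << 1    // the cell belongs to one of the viewed regions
    }

    // e.g. a cell that holds an enemy inside a viewed region:
    // CellContents cell = CellContents.HasEnemy | CellContents.InRegion;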
Heuristic Function (h)
• Creates 5 regions from the grid
• Multiplies enemy presence by proximity to the player in each region, using a cubic scale
• Moves: A – Accelerates Forward, F – Flees Backward, L – Turns Left, R – Turns Right, S – Shoots
[Diagram: the five regions around the player, labelled R, S, L, F, A]
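A rough sketch of the cubic weighting described above, assuming each region's value is the sum of its enemies' contributions and that proximity is measured as (maxDistance - distanceToPlayer); both choices are assumptions:

    using System;

    static class Heuristic
    {
        // Danger value of one region: enemy presence weighted by proximity to the
        // player on a cubic scale, so nearby enemies dominate the score.
        public static double RegionValue(double[] enemyDistances, double maxDistance)
        {
            double value = 0;
            foreach (double d in enemyDistances)        // one entry per enemy in the region
                value += Math.Pow(maxDistance - d, 3);  // closer enemy => much larger weight
            return value;
        }
    }

How these per-region values map onto the five moves (for example, turning away from the most dangerous region) is left to the full heuristic.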
Actual Cost Function (g)
• The concept of look-ahead is implemented as a move tree for the actual score
• 1 point per move simulates survival time
• Prune on dead states (and count them)
• The total number of dead states is counted for each move's subtree
• Final move score: MaxDescendentScore * W1 - TotalDescendentDeadStates * W2
[Diagram: move tree; the root expands into L, A, S, F, R, and each child expands again into L, A, S, F, R]
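A minimal sketch of the look-ahead tree behind g; the IGameState type, its Apply/IsDead members, the look-ahead depth, and the values of W1 and W2 are all assumptions for illustration:

    // Abstract game state that can simulate one move and report death.
    interface IGameState
    {
        bool IsDead { get; }
        IGameState Apply(char move);
    }

    static class ActualCost
    {
        static readonly char[] Moves = { 'A', 'F', 'L', 'R', 'S' };
        const double W1 = 1.0, W2 = 1.0;   // tuning weights (illustrative values)

        // Explore a subtree: 1 point per simulated move; stop expanding (prune) on dead states.
        static (double maxScore, int deadStates) Explore(IGameState s, int depth, int score)
        {
            if (s.IsDead) return (score, 1);
            if (depth == 0) return (score, 0);

            double best = score;
            int dead = 0;
            foreach (char m in Moves)
            {
                var (b, d) = Explore(s.Apply(m), depth - 1, score + 1);
                if (b > best) best = b;
                dead += d;
            }
            return (best, dead);
        }

        // g(m) = MaxDescendentScore * W1 - TotalDescendentDeadStates * W2
        public static double G(IGameState current, char m, int lookAhead)
        {
            var (best, dead) = Explore(current.Apply(m), lookAhead - 1, 1);
            return best * W1 - dead * W2;
        }
    }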
Utility Function (u)
• Acts as a multiplier on move values, tilting the player's behavior toward passive or aggressive play
• A = 1.4
• B = 1.0
• F = 1.2
• L = 0.8
• R = 0.8
• S = 1.2
Formula
• f(m) = (h(m) + g(m)) * u(m), where m = move
• Overview:
  • h(m): evaluated for each move; the move with the largest value is favored
  • g(m): the game state is simulated for each child, dead states are pruned, and the move with the highest score is selected
  • u(m): the utility function adds a custom factor to each move's value
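Putting the pieces together, a sketch of the final move selection under f(m) = (h(m) + g(m)) * u(m), using the multipliers from the utility slide for the five documented moves; the h and g callbacks are stand-ins for the functions sketched above:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class MoveSelector
    {
        // Utility multipliers from the Utility Function slide.
        static readonly Dictionary<char, double> U = new Dictionary<char, double>
        {
            ['A'] = 1.4, ['F'] = 1.2, ['L'] = 0.8, ['R'] = 0.8, ['S'] = 1.2
        };

        // f(m) = (h(m) + g(m)) * u(m); the agent plays the move with the largest f.
        public static char ChooseMove(Func<char, double> h, Func<char, double> g)
        {
            return U.Keys.OrderByDescending(m => (h(m) + g(m)) * U[m]).First();
        }
    }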