Artificial Intelligence in Game Design



Presentation Transcript


  1. Artificial Intelligence in Game Design Probabilistic Finite State Machines and Markov State Machines

  2. Randomness Inside State • Randomness in actions taken by NPC • Randomness inside update method • Can depend on current state
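
A minimal sketch of randomness inside a state's update method, assuming a simple NPC object; the attack/taunt actions and the 70/30 split are illustrative, not from the slides:

```python
import random

class FightState:
    """State whose update method picks an action at random each frame."""
    def update(self, npc):
        # The weights can depend on the current state (tuned per state).
        if random.random() < 0.7:
            npc.attack()   # 70% of updates: attack (hypothetical NPC method)
        else:
            npc.taunt()    # 30% of updates: taunt instead
```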

  3. Randomness Inside State • Randomness in Initial Setup • Randomness in enter method • Example: choice of weapon in Fight state [Diagram: three weapon options with probabilities 60%, 35%, 5%]
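
A sketch of the weighted one-time choice in the enter method using Python's random.choices; the 60/35/5 weights come from the slide, but the weapon names are placeholders since the diagram does not label them here:

```python
import random

class FightState:
    def enter(self, npc):
        # Weighted one-time choice made when the state is entered (60% / 35% / 5%).
        npc.weapon = random.choices(
            ["weapon_a", "weapon_b", "weapon_c"],   # placeholder names
            weights=[60, 35, 5],
        )[0]
```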

  4. Randomness in Transitions • Same current state + same stimuli = one of several possible next states • Possibly including current state • Performing different tasks at random [Diagram: from Patrol in front of Door, on Player visible: 60% → Guard Door and Shout for Help, 40% → Chase Player]
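
A sketch of this 60/40 transition check; the state names follow the diagram, while the FSM plumbing (check_transitions returning the next state's name, npc.player_visible()) is assumed:

```python
import random

class PatrolState:
    def check_transitions(self, npc):
        # Same state + same stimulus, but one of several possible next states.
        if npc.player_visible():
            if random.random() < 0.6:
                return "GuardDoorAndShout"   # 60%
            return "ChasePlayer"             # 40%
        return None   # no stimulus: stay in Patrol
```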

  5. Random Behavior Timeouts • Continue strong emotional behavior for random number of steps [Diagram: Wander → Flee on Predator seen; from Flee, when Predator not seen: 10% → back to Wander, 90% remain in Flee]
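
The random timeout can be implemented as a per-step probability of leaving the emotional state, as in this sketch (10% chance per update of calming down once the predator is gone, matching the diagram):

```python
import random

class FleeState:
    def check_transitions(self, npc):
        if npc.predator_seen():
            return None                  # keep fleeing while the predator is visible
        # Predator not seen: 10% chance per step to calm down, 90% keep fleeing,
        # so the strong behavior persists for a random number of steps.
        if random.random() < 0.1:
            return "Wander"
        return None
```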

  6. Unpredictability of World • Small chance of “unexpected” occurrence • Adds “newness” to game even after multiple plays • Adds to “realism” of world [Diagram: normal case: Aim → Fire on Target in sights (98%), Fire → Reload, Reload → Aim on Finished reloading; unexpected case: Aim → Gun Jam on Target in sights (2%), Gun Jam → Aim on Gun cleared]
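
A sketch of the rare branch added to the normal transition; the 2%/98% split is from the slide, the method names are assumptions:

```python
import random

class AimState:
    def check_transitions(self, npc):
        if npc.target_in_sights():
            # Small chance of the unexpected case, large chance of the normal one.
            if random.random() < 0.02:
                return "GunJam"   # 2%: unexpected case
            return "Fire"         # 98%: normal case
        return None
```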

  7. Randomness in Emotional States • Emotional transitions less predictable • Effect of “delayed reaction” [Diagram: Confident / Angry / Frightened state machine with probabilistic transitions on stimuli such as Small hit by player (75% / 25%), Player HP < 10 (40% / 60%), Heavy hit by player (70% / 30%), My HP < 10 (50% / 50%), Heavy hit by me (30% / 70%)]

  8. Probabilities and Personality • NPCs with probabilities can give illusion of personalities • Differences must be large enough for player to notice in behavior [Diagram: the same state machine for an orc with anger management issues, with shifted probabilities: Small hit by player (10% / 90%), Player HP < 10 (80% / 20%), Heavy hit by player (70% / 30%), My HP < 10 (80% / 20%), Heavy hit by me (30% / 70%)]
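
Personalities can be kept as data: one transition table per NPC type, so the orc with anger management issues simply carries different probabilities. In this sketch the table layout is an assumption and the numbers are illustrative, since the diagram's arrow directions cannot be recovered from the transcript:

```python
import random

# Probability of switching from Confident to Angry for each stimulus (illustrative values).
PERSONALITIES = {
    "default_orc": {"small_hit_by_player": 0.25, "player_hp_low": 0.60},
    "angry_orc":   {"small_hit_by_player": 0.90, "player_hp_low": 0.80},
}

def confident_transition(personality, event):
    """Transition out of Confident, driven by the personality's probability table."""
    p = PERSONALITIES[personality].get(event, 0.0)
    return "Angry" if random.random() < p else "Confident"
```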

  9. Dynamic Probabilities • Likelihood of transition depends on something else • More realistic (but not completely predictable) • Can give player clues about state of NPC [Diagrams: Firing → Reload on Player not firing with probability 1 - (% of bullets left), otherwise keep Firing; Patrol in front of Door, on Player visible: Chase Player with probability Energy %, Guard Door and Shout for Help with probability 1 - Energy %]
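
Dynamic probabilities replace the constant with a value computed from the NPC's current situation, e.g. the fraction of bullets left; a sketch, with the ammo bookkeeping assumed:

```python
import random

class FiringState:
    def check_transitions(self, npc):
        if not npc.player_firing():
            ammo_fraction = npc.bullets_left / npc.clip_size   # in [0, 1]
            # The fewer bullets remain, the more likely the NPC breaks off to reload,
            # which also gives the player a clue about the NPC's state.
            if random.random() < 1.0 - ammo_fraction:
                return "Reload"
        return None   # keep firing
```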

  10. Emergent Group Behavior • Each NPC in group can choose random behavior • Can appear to “cooperate” • Half of group fires immediately giving “cover” to rest • If player shoots firing players, rest will have time to reach cover [Diagram: Patrol, on Player visible: 50% → Fire, 50% → Take Cover; Take Cover → Fire on Cover reached]

  11. Emergent Group Behavior • Potential problem: possibility that all in group choose the same action • All either shoot or take cover • No longer looks intelligent • Can base probabilities on actions others take, as in the sketch below [Diagram: Patrol, on Player visible: Fire with probability 1 - (% of other players firing), Take Cover with probability % of other players firing; Take Cover → Fire on Cover reached]
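
A sketch of basing the choice on what squadmates are already doing; the squad bookkeeping (npc.squad, is_firing) is assumed:

```python
import random

class PatrolState:
    def check_transitions(self, npc):
        if npc.player_visible():
            squad = npc.squad
            firing_fraction = sum(1 for m in squad if m.is_firing) / len(squad)
            # The more squadmates already firing, the more likely this NPC takes cover,
            # so the group tends to split rather than all choosing the same action.
            if random.random() < firing_fraction:
                return "TakeCover"
            return "Fire"
        return None
```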

  12. “Markov” State Machines • Tool for decision making about states • Give states a “measure” describing how good state is • Move to state with best measure • Key: Measure changes as result of events • Possibly returns to original values if no events occur • Based (sort of) on Markov probabilistic process (but not really probabilities)

  13. “Markov” State Machines • Example: Guard choosing cover • Different cover has different “safety” measures • Firing from cover makes it less safe (player will start shooting at that cover) • Represent safety as vector of values [Diagram: safety vector: trees 1.0, wall 1.5, brush 0.5]

  14. “Markov” State Machines • Assign transition “matrix” to each action • Defines how each state affected by action • Multiplier < 1 = worse • Multiplier > 1 = better • Example: fire from trees • Trees less safe • Other positions marginally safer (player not concentrating on them) [Diagram: multipliers for “fire from trees”: trees 0.1, wall 1.2, brush 1.2]

  15. “Markov” State Machines • “Multiply” current vector by matrix to get new values • Note: real matrix multiplication requires 2D transition matrix • Example: (1.0, 1.5, 0.5) multiplied element-wise by (0.1, 1.2, 1.2) = (0.1, 1.8, 0.6), equivalent to multiplying by the diagonal matrix diag(0.1, 1.2, 1.2)
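
The update is an element-wise multiply of the safety vector by the action's multiplier vector (equivalently, multiplication by a diagonal matrix); a sketch reproducing the slide's numbers:

```python
def apply_event(safety, multipliers):
    """Element-wise multiply: same effect as multiplying by diag(multipliers)."""
    return [s * m for s, m in zip(safety, multipliers)]

safety = [1.0, 1.5, 0.5]                        # trees, wall, brush
fire_from_trees = [0.1, 1.2, 1.2]               # trees much less safe, others slightly safer
safety = apply_event(safety, fire_from_trees)   # ≈ [0.1, 1.8, 0.6]
```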

  16. “Markov” State Machines • Further events modify values • Example: Now fire from behind wall: (0.1, 1.8, 0.6) multiplied element-wise by (1.2, 0.5, 1.2) = (0.12, 0.9, 0.72)

  17. “Markov” State Machines • Note “total safety” (as sum of values) decreasing: 3 → 2.5 → 1.74 • May be plausible (all cover becoming less safe) • Can normalize if necessary • Can gradually increase values over time • Usually result of time/turns without event • Example: player leaves area: (0.12, 0.9, 0.72) multiplied element-wise by (1.1, 1.1, 1.1) = (0.132, 0.99, 0.792)
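
Normalization and gradual recovery fit the same element-wise pattern; a sketch, with the 1.1-per-quiet-turn recovery factor taken from the slide:

```python
def normalize(safety, target_total=3.0):
    """Rescale so the values sum to target_total (keeps relative safety)."""
    total = sum(safety)
    return [s * target_total / total for s in safety]

def recover(safety, rate=1.1):
    """No event this turn: all positions drift back toward safe (e.g. player leaves area)."""
    return [s * rate for s in safety]

safety = [0.12, 0.9, 0.72]
safety = recover(safety)   # ≈ [0.132, 0.99, 0.792], as on the slide
```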
