
Latest AI Research in RTS Games



  1. Latest AI Research in RTS Games Omar Khaled Enayet – July 2009

  2. Index
• Why is AI Development Slow in RTS Games
• AI Areas Needing More Research
• Adaptation
   • Introduction
   • Planning
      • Case-Based Planning
      • PDDL
   • Learning
      • Reinforcement Learning
         • Dynamic Scripting
         • Hierarchical Reinforcement Learning
         • Monte-Carlo Methods
         • Temporal Difference Learning
         • Q-learning
      • Evolutionary Algorithms
         • Artificial Neural Networks
         • Genetic Algorithms
      • Hybrid CBR-RL
• Opponent Modeling

  3. Why is AI Development Slow in RTS Games?
• RTS game worlds feature many objects, imperfect information, micro actions, and fast-paced action. By contrast, world-class AI players mostly exist for slow-paced, turn-based, perfect-information games in which the majority of moves have global consequences, so human planning abilities can be outsmarted by mere enumeration.
• Market-dictated AI resource limitations. Up to now, popular RTS games have been released solely by game companies, which naturally want to maximize profit. Because graphics drives game sales and companies strive for large market penetration, only about 15% of CPU time and memory is currently allocated to AI tasks. On the positive side, as graphics hardware gets faster and memory gets cheaper, this percentage is likely to increase, provided game designers stop making RTS game worlds ever more realistic.
• Lack of AI competition. In classic two-player games, tough competition among programmers has driven AI research to unmatched heights. Currently, however, there is no such competition among real-time AI researchers in games other than computer soccer. The considerable manpower needed to design and implement RTS games, and the reluctance of game companies to incorporate AI APIs in their products, are big obstacles to AI competition in RTS games.

  4. AI Areas Needing More Research
• Adversarial real-time planning. In fine-grained, realistic simulations, agents cannot afford to think in terms of micro actions such as "move one step north". Instead, abstractions of the world state have to be found that allow AI programs to conduct forward searches in a manageable abstract space and to translate the solutions found back into action sequences in the original state space. Because the environment is also dynamic, hostile, and smart, adversarial real-time planning approaches need to be investigated.
• Decision making under uncertainty. Initially, players are not aware of the enemy's base locations and intentions. It is necessary to gather intelligence by sending out scouts and to draw conclusions in order to adapt. If no data about opponent locations and actions is available yet, plausible hypotheses have to be formed and acted upon.
• Opponent modeling and learning. One of the biggest shortcomings of current (RTS) game AI systems is their inability to learn quickly. Human players need only a couple of games to spot opponents' weaknesses and exploit them in future games. New, efficient machine learning techniques have to be developed to tackle these important problems.

  5. AI Areas Needing More Research (2)
• Spatial and temporal reasoning. Static and dynamic terrain analysis, as well as understanding the temporal relations of actions, is of utmost importance in RTS games, and yet current game AI programs largely ignore these issues and fall victim to simple common-sense reasoning.
• Resource management. Players start the game by gathering local resources to build up defenses and attack forces, to upgrade weaponry, and to climb up the technology tree. At any given time, players have to balance the resources they spend in each category. For instance, a player who invests too many resources in upgrades will become prone to attacks because of an insufficient number of units. Proper resource management is therefore a vital part of any successful strategy.

  6. AI Areas Needing More Research (3)
• Collaboration. In RTS games, groups of players can join forces and share intelligence. How to coordinate actions effectively through communication among the parties is a challenging research problem. For instance, in mixed human/AI teams, the AI player often behaves awkwardly because it does not monitor the human's actions, cannot infer the human's intentions, and fails to synchronize attacks.
• Pathfinding. Finding high-quality paths quickly in 2D terrain is of great importance in RTS games. In the past, only a small fraction of CPU time could be devoted to AI tasks, of which finding shortest paths was the most time-consuming. Hardware graphics accelerators now allow programs to spend more time on AI tasks. Still, the presence of hundreds of moving objects and the push for more realistic simulations in RTS games make it necessary to improve and generalize pathfinding algorithms. Keeping unit formations and taking terrain properties, minimal turn radii, inertia, enemy influence, and fuel consumption into account greatly complicates the once simple problem of finding shortest paths (a baseline sketch follows below).
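
As a reference point for the pathfinding bullet above, here is a minimal A* sketch on a 4-connected grid, the textbook baseline that RTS pathfinders extend with formations, turn radii, and the other complications listed. The grid layout and unit step costs are illustrative assumptions, not taken from any particular game.

```python
import heapq

def astar(grid, start, goal):
    """grid[y][x] == 0 means passable; returns a list of (x, y) cells or None."""
    def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path so far)
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in seen):
                step = (nx, ny)
                heapq.heappush(frontier, (g + 1 + h(step), g + 1, step, path + [step]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))   # routes around the wall in row 1
```

Real engines replace the Manhattan heuristic and uniform step costs with terrain-aware estimates and hierarchical abstractions; the search skeleton stays the same.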

  7. Introduction to Adaptation in RTS Games
• Current implementations of RTS games make extensive use of finite-state machines (FSMs), which makes them highly predictable.
• Adaptation is achieved through learning, planning, or a mixture of both.
• Planning is beginning to appear in commercial games such as Demigod and the latest Total War games.
• Learning has had limited success so far.
• Developers are experimenting with replacing ordinary decision-making systems (FSMs, FuSMs, scripting, decision trees, and Markov systems) with learning techniques.

  8. Case-Based Planning
• Case-based planning means using case-based reasoning together with planning.
• Plan recognition refers to the act of an agent observing the actions of another agent, whether human or computer-based, with the intent of predicting its future actions, intentions, or goals.
• Several approaches can be used to perform plan recognition, namely deductive, abductive, probabilistic, and case-based. Plan recognition can also be classified as either intended or keyhole.
• An intended case-based plan recognition system assumes that the observed agent or user is actively giving signals or input to the sensing agent to convey plans and intentions.
• In a real-time strategy game, the user or player is focused on playing the game, not on conveying his or her intentions to the sensing agent. This scenario is therefore classified as keyhole plan recognition, wherein predictions are based on indirect observations of the user's actions in a given situation (a sketch follows below).
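
To make the keyhole setting concrete, here is a minimal sketch of case-based plan recognition, assuming a case is simply a named strategy paired with its typical action sequence. The Case class, the prefix-overlap similarity, and all strategy names are hypothetical illustrations, not taken from the papers that follow.

```python
class Case:
    def __init__(self, strategy, actions):
        self.strategy = strategy   # e.g. "rush", "boom"
        self.actions = actions     # ordered list of observable build actions

def similarity(observed, case_actions):
    """Score a case by how much of the observed prefix it shares."""
    matches = sum(1 for a, b in zip(observed, case_actions) if a == b)
    return matches / max(len(observed), 1)

def recognize(observed, case_library):
    """Return the most plausible strategy and its predicted next action."""
    best = max(case_library, key=lambda c: similarity(observed, c.actions))
    next_idx = len(observed)
    prediction = best.actions[next_idx] if next_idx < len(best.actions) else None
    return best.strategy, prediction

library = [
    Case("rush", ["barracks", "barracks", "footman", "footman", "attack"]),
    Case("boom", ["farm", "townhall", "peasant", "peasant", "farm"]),
]
# keyhole observation: the player never tells us their plan, we only watch
print(recognize(["barracks", "barracks", "footman"], library))  # ('rush', 'footman')
```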

  9. Case-Based Planning – Papers (1)
• After 2003: Case-Based Plan Recognition for Real-Time Strategy Games: employs case-based plan recognition for non-player characters so as to minimize their predictability.
• After 2004: On the Role of Explanation for Hierarchical Case-Based Planning in Real-Time Strategy Games: describes an application of hierarchical case-based planning that involves reasoning in the context of real-time strategy games, describes a representation for explanations in this context, and details four types of explanations.

  10. Case-Based Planning – Papers (2)
• 2005: Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game: introduces the Case-Based Tactician (CAT), which removes the assumption of a static opponent made by Ponsen and Spronck (2004), who used a genetic algorithm that searches a plan space and a weighting algorithm (dynamic scripting) that biases sub-plan retrieval. CAT significantly outperforms the best among a set of genetically evolved plans when tested against random opponents.
• 2005: Defeating Novel Opponents in a Real-Time Strategy Game: the Case-Based Tactician (CAT) system (created by the same authors) uses case-based reasoning to learn to win the real-time strategy game Wargus. Previous work has shown CAT's ability to defeat a randomly selected opponent from a set against which it has trained. The authors now focus on the task of defeating a selected opponent while training on others. They describe CAT's algorithm and report its cross-validation performance against a set of Wargus opponents.

  11. Case-Based Planning – Papers (3)
• 2007: Case-Based Planning and Execution for Real-Time Strategy Games: presents a real-time case-based planning and execution approach designed to deal with RTS games. The authors propose to extract behavioral knowledge from expert demonstrations in the form of individual cases. This knowledge can be reused via a case-based behavior generator that proposes behaviors to achieve the specific open goals in the current plan.
• 2007: Mining Replays of Real-Time Strategy Games to Learn Player Strategies: the authors use replays of the commercial RTS game StarCraft to evaluate human player behaviors and to construct an intelligent system that learns human-like decisions and behaviors. A case-based reasoning approach was applied to train their system to learn and predict player strategies. Their analysis indicates that the proposed system is capable of learning and predicting individual player strategies, and that players reveal their personal characteristics through their building construction order.

  12. Case-Based Planning – Papers (4)
• 2008: Learning from Human Demonstrations for Real-Time Case-Based Planning: one of the main bottlenecks in deploying case-based planning systems is authoring the case base of plans. In this paper the authors present a collection of algorithms that can be used to automatically learn plans from human demonstrations. Their algorithms are based on the idea of a plan dependency graph, a graph that captures the dependencies among actions in a plan. These algorithms are implemented in a system called Darmok 2 (D2), a case-based planning system capable of general game playing with a focus on real-time strategy (RTS) games. They evaluate D2 on a collection of three different games with promising results.
• 2008: On-Line Case-Based Plan Adaptation for Real-Time Strategy Games: the authors have developed effective on-line case-based planning techniques. In this paper they extend their earlier work, using ideas from traditional planning to inform the real-time adaptation of plans. In their framework, when a plan is retrieved, a plan dependency graph is inferred to capture the relations between actions in the plan. The plan is then adapted in real time using its plan dependency graph. This allows the system to create and adapt plans efficiently and effectively while performing the task. The approach is evaluated using Wargus, a well-known real-time strategy game. (The plan-dependency-graph idea is sketched below.)
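
A minimal sketch of the plan-dependency-graph idea shared by both papers, assuming each action is annotated with what it consumes and produces; the action format and the three Wargus-flavored actions are invented for illustration.

```python
plan = [
    {"name": "mine_gold",      "consumes": set(),        "produces": {"gold"}},
    {"name": "build_barracks", "consumes": {"gold"},     "produces": {"barracks"}},
    {"name": "train_footman",  "consumes": {"barracks"}, "produces": {"footman"}},
]

def plan_dependency_graph(plan):
    """Map each action to the earlier actions whose effects it consumes."""
    deps = {a["name"]: [] for a in plan}
    for i, later in enumerate(plan):
        for earlier in plan[:i]:
            if earlier["produces"] & later["consumes"]:
                deps[later["name"]].append(earlier["name"])
    return deps

print(plan_dependency_graph(plan))
# {'mine_gold': [], 'build_barracks': ['mine_gold'], 'train_footman': ['build_barracks']}
```

An action then depends on every earlier action whose effects it consumes, which is the structure this style of adaptation exploits when reordering or removing actions without breaking the plan.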

  13. PDDL – Papers
• 2006: A First Look at Build-Order Optimization in Real-Time Strategy Games: approaches the real-time planning problem by considering build-order optimization in real-time strategy games. This problem class can be formulated as a resource accumulation and allocation problem in which an agent has to decide what objects to produce at what time in order to meet one of two goals: either maximizing the number of objects produced in a given time period, or producing a certain number of objects as fast as possible. The authors identify challenges related to this planning problem, namely creating and destroying objects and concurrent action generation and execution, and present ideas on how to address them. They use PDDL in their work. (A toy version of the problem is sketched below.)
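
A toy sketch of the "produce a fixed set of objects as fast as possible" variant, under simplifying assumptions the paper does not make (one production queue, a constant income rate, invented costs and prerequisites): enumerate orderings and simulate each one.

```python
import itertools

COST = {"barracks": 150, "footman": 60}          # gold cost per item (invented)
TIME = {"barracks": 30,  "footman": 20}          # build time in seconds (invented)
REQ  = {"barracks": set(), "footman": {"barracks"}}
GOLD_PER_SEC = 8.0                               # constant income assumption

def finish_time(order):
    gold, t, built = 0.0, 0.0, set()
    for item in order:
        if not REQ[item] <= built:
            return float("inf")                  # prerequisite not yet built
        wait = max(0.0, (COST[item] - gold) / GOLD_PER_SEC)
        gold += wait * GOLD_PER_SEC - COST[item] # earn while waiting, then pay
        t += wait + TIME[item]
        gold += TIME[item] * GOLD_PER_SEC        # income during construction
        built.add(item)
    return t

goal = ("barracks", "footman", "footman")
best = min(set(itertools.permutations(goal)), key=finish_time)
print(best, finish_time(best))
```

Exhaustive permutation search obviously does not scale; the paper's point is precisely that concurrency and object creation and destruction call for real planning machinery such as PDDL-based planners.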

  14. Reinforcement Learning – Papers (1): Hierarchical RL
• After 2006: Hierarchical Reinforcement Learning with Deictic Representation in a Computer Game: without abstraction and generalization techniques, many traditional machine learning techniques, such as reinforcement learning, fail to learn efficiently. In this paper the authors examine extensions of reinforcement learning that scale to the complexity of computer games. In particular, they look at hierarchical reinforcement learning applied to a learning task in a real-time strategy computer game. Moreover, they employ a deictic state representation that reduces complexity compared to a propositional representation and allows the adaptive agent to learn a generalized policy, i.e., it is capable of transferring knowledge to unseen task instances. They found that hierarchical reinforcement learning significantly outperforms flat reinforcement learning on their task.

  15. Reinforcement Learning – Papers (2): Dynamic Scripting
• 2006: Goal-Directed Hierarchical Dynamic Scripting for RTS Games: suggests a goal-directed hierarchical dynamic scripting approach for incorporating learning into real-time strategy games. Two alternatives for shortening the re-adaptation time when using dynamic scripting are also presented. Finally, the paper presents an effective way of throttling the performance of the adaptive AI system. Put together, the approach makes it possible for an AI opponent to be challenging for a human player, but not too challenging.
• 2008: Automatically Acquiring Domain Knowledge for Adaptive Game AI Using Evolutionary Learning: introduces the AKAD system, which uses an evolutionary algorithm to improve dynamic scripting performance in RTS games. The authors conclude that high-quality domain knowledge (i.e., tactics) can be automatically generated for strong adaptive AI opponents in RTS games. This reduces the time and effort required by game developers to create intelligent game AI, freeing them to focus on other important topics (e.g., storytelling, graphics). (The basic dynamic scripting mechanism both papers build on is sketched below.)
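
For context, here is a hedged sketch of the plain dynamic scripting mechanism (after Spronck et al.) that both papers extend: each tactic rule carries a weight, a script is drawn by weighted selection, and weights are adjusted by the encounter's outcome. Rule names, constants, and the fitness signal are placeholders.

```python
import random

rules = {"rush": 100.0, "turtle": 100.0, "harass": 100.0, "expand": 100.0}
W_MIN, W_MAX, LEARN_RATE = 25.0, 400.0, 30.0

def build_script(k=2):
    """Select k distinct rules with probability proportional to weight."""
    pool, script = dict(rules), []
    for _ in range(k):
        total = sum(pool.values())
        r, acc = random.uniform(0, total), 0.0
        for name, w in pool.items():
            acc += w
            if r <= acc:
                script.append(name)
                del pool[name]
                break
    return script

def update_weights(script, fitness):
    """fitness in [-1, 1]: reward the rules used, rebalance the rest."""
    delta = LEARN_RATE * fitness
    for name in script:
        rules[name] = min(W_MAX, max(W_MIN, rules[name] + delta))
    unused = [n for n in rules if n not in script]
    for name in unused:  # keep the total weight roughly constant
        rules[name] = min(W_MAX, max(W_MIN, rules[name] - delta * len(script) / len(unused)))

script = build_script()
update_weights(script, fitness=+1.0)  # e.g. the scripted AI won the encounter
```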

  16. Reinforcement Learning – Papers (3): Monte-Carlo
• 2009: UCT for Tactical Assault Battles in Real-Time Strategy Games: uses a Monte-Carlo planning algorithm called UCT to optimize user-specified objective functions. The authors present an evaluation of their approach on a range of tactical assault problems with different objectives in the RTS game Wargus. The results indicate that their planner generates plans superior to several baselines and to a human player.
• 2005: Monte Carlo Planning in RTS Games: presents a framework, MCPlan, for Monte Carlo planning, identifies its performance parameters, and analyzes the results of an implementation in a capture-the-flag game. (The UCT selection rule at the core of such planners is sketched below.)
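
The selection rule at the core of UCT, sketched under the usual bandit formulation: pick the child maximizing average reward plus an exploration bonus, then back up the rollout result. The simulate() stub stands in for an actual game rollout; move names and the reward model are invented.

```python
import math
import random

C = math.sqrt(2)  # standard exploration constant

def uct_choose(children):
    """children: dicts with 'visits' and 'value' (summed rollout reward)."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # try unvisited moves first
        exploit = ch["value"] / ch["visits"]
        explore = C * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

def simulate():
    return random.random()  # stub: a real rollout plays the battle out

children = [{"move": m, "visits": 0, "value": 0.0} for m in ("north", "south")]
for _ in range(1000):
    ch = uct_choose(children)
    reward = simulate()
    ch["visits"] += 1
    ch["value"] += reward
print(max(children, key=lambda ch: ch["visits"])["move"])  # most-visited move wins
```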

  17. Reinforcement Learning – Papers (4): Monte-Carlo vs. Dynamic Scripting
• 2008: Adaptive Reinforcement Learning Agents in RTS Games: compares dynamic scripting with Monte Carlo methods in detail, discusses the drawbacks of using TD-learning, and suggests much future research on the general topic as well as on specific algorithms.

  18. Reinforcement Learning – Papers (5): Temporal Difference Learning
• After 2005: Establishing an Evaluation Function for RTS Games: because of the high complexity of modern video games, creating a suitable evaluation function for adaptive game AI is quite difficult. The authors aim at fully automatically generating a good evaluation function for adaptive game AI. The paper describes their approach and discusses experiments performed in the RTS game Spring. TD-learning is applied to establish two different evaluation functions, one for a perfect-information environment and one for an imperfect-information environment. From their results they conclude that both evaluation functions are able to predict the outcome of a Spring game reasonably well, i.e., they are correct in about 75% of all games played.
• 2005: Learning Unit Values in Wargus Using Temporal Differences: in order to use a learning method in a computer game to improve the performance of computer-controlled entities, a fitness function is required. In real-time strategy (RTS) games it seems obvious to base the fitness function on the game score, which is usually derived from the number and type of units a player has killed. In practice, the value of a unit type is set manually. This paper proposes to use temporal difference learning (TD-learning) to determine the value of a unit type. An experiment was performed to determine good unit values for use by the learning mechanism 'dynamic scripting' in Wargus. The results demonstrate significantly improved learning performance using the newly determined unit values, compared to the original unit values that were manually set by the Wargus game developers. (The core TD update is sketched below.)
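
Both papers rest on the same TD(0) update, sketched here for a linear evaluation function with one weight per unit type; the unit types, counts, and learning constants are illustrative, not taken from either paper.

```python
ALPHA, GAMMA = 0.05, 1.0
weights = {"footman": 1.0, "archer": 1.0, "knight": 1.0}  # learned unit values

def evaluate(state):
    """state: my unit counts minus the opponent's, per unit type."""
    return sum(weights[u] * n for u, n in state.items())

def td_update(state, next_state, reward):
    """TD(0): move V(state) toward reward + GAMMA * V(next_state)."""
    td_error = reward + GAMMA * evaluate(next_state) - evaluate(state)
    for u, n in state.items():
        weights[u] += ALPHA * td_error * n  # gradient step for a linear V

# one observed transition: we traded two footmen for an enemy knight, no terminal reward yet
td_update({"footman": 2, "knight": -1}, {"footman": 0, "knight": 0}, reward=0.0)
print(weights)
```

Repeated over many recorded games, such updates push the weights toward unit values that predict the final outcome, which is how material values can be learned rather than hand-set.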

  19. Hybrid CBR-RL – Papers
• 2007: Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL: presents a multi-layered architecture named CAse-Based Reinforcement Learner (CARL). It uses a novel combination of case-based reasoning (CBR) and reinforcement learning (RL) to achieve transfer while playing against the game AI across a variety of scenarios in MadRTS™, a commercial real-time strategy game. The experiments demonstrate that CARL not only performs well on individual tasks but also exhibits significant performance gains when allowed to transfer knowledge from previous tasks. (The general CBR/RL combination is sketched below.)
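
CARL's actual architecture is multi-layered; as a heavily simplified illustration of the general CBR/RL combination, the sketch below stores a learned utility inside each case and nudges it toward the observed reward after acting. The feature vector, distance metric, and update rule are assumptions for illustration, not CARL's design.

```python
import math

ALPHA = 0.2  # utility learning rate

class CaseRL:
    def __init__(self, features, action, utility=0.0):
        self.features = features  # e.g. (my_units, enemy_units, territory)
        self.action = action
        self.utility = utility    # RL-learned estimate of the action's worth

def retrieve(case_base, features):
    """CBR step: find the nearest stored case by Euclidean distance."""
    return min(case_base, key=lambda c: math.dist(c.features, features))

def reinforce(case, reward):
    """RL step: nudge the retrieved case's utility toward the reward."""
    case.utility += ALPHA * (reward - case.utility)

base = [CaseRL((5, 2, 0.6), "attack"), CaseRL((2, 6, 0.3), "defend")]
case = retrieve(base, (4, 3, 0.5))
reinforce(case, reward=1.0)  # the chosen action worked out this time
print(case.action, round(case.utility, 2))
```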

  20. Genetic Algorithms – Papers
• Co-Evolving Real-Time Strategy Game Playing Influence Map Trees with Genetic Algorithms: the authors investigate the use of genetic algorithms to play real-time computer strategy games, focusing on the complex spatial reasoning problems found within these games. To overcome the knowledge acquisition bottleneck of traditional expert systems, scripts, and decision trees, as used in most game AI, they use genetic algorithms to evolve game players. The spatial decision makers in these players use influence maps as a basic building block, from which they construct and evolve influence map trees containing complex game-playing strategies. With co-evolution they attain "arms race"-like progress, leading to the evolution of robust players superior to their hand-coded counterparts. (The influence-map building block is sketched after this list.)
• 2003: Human-Like Behavior in RTS Games Using Genetic Algorithms: this thesis presents the results of an experiment aimed at testing strategy game AI. Test persons played against traditional strategy game AI, a genetic algorithm AI, and other humans, to see whether they experienced any differences in the behavior of the opponents.
• 2008: Stochastic Plan Optimization in Real-Time Strategy Games: presents a domain-independent off-line adaptation technique called Stochastic Plan Optimization for finding and improving plans in real-time strategy games. The method is based on ideas from genetic algorithms but uses a different plan representation and an alternate initialization procedure for the search process. The key to the technique is the use of expert plans to initialize the search in the most relevant parts of the plan space. The experiments validate this approach using the authors' existing case-based reasoning system Darmok in the real-time strategy game Wargus, a clone of Warcraft II.
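
A short sketch of the influence-map building block the first paper evolves into trees: every unit spreads influence that decays with distance, and friendly and enemy contributions are summed per cell. The decay function, map size, and unit strengths are invented for illustration.

```python
def influence_map(width, height, units):
    """units: list of (x, y, strength); strength < 0 marks enemy units."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                d = abs(x - ux) + abs(y - uy)        # Manhattan distance
                grid[y][x] += strength / (1 + d)     # linear-decay falloff
    return grid

# two friendly footmen in one corner vs. one strong enemy knight in the other
imap = influence_map(5, 5, [(0, 0, 1.0), (1, 0, 1.0), (4, 4, -2.0)])
for row in imap:
    print(" ".join(f"{v:+.2f}" for v in row))   # sign shows who controls each cell
```

Cells where the sum is positive are under friendly influence; an evolved influence map tree combines many such maps (threat, resources, terrain) to drive spatial decisions.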

  21. Opponent Modeling – Papers
• After 2007: Opponent Modeling in Real-Time Strategy Games: one feature of realistic behavior in game AI is the ability to recognize the strategy of the opponent player, known as opponent modeling. In this paper the authors propose an approach to opponent modeling based on hierarchically structured models. The top level of the hierarchy classifies the general play style of the opponent; the bottom level classifies specific strategies that further define the opponent's behavior. Experiments testing the approach are performed in the RTS game Spring. From their results they conclude that the approach can successfully classify the strategy of an opponent in Spring.
• 2007: Hierarchical Opponent Models for Real-Time Strategy Games: describes an approach to opponent modeling using hierarchically structured models. Two different classifiers were used at the different hierarchical levels in order to test the effectiveness of the approach. The first classifier uses fuzzy models, whereas the second is a modification of the concept of discounted rewards from game theory. The fuzzy classifier shows stable convergence to a correct classification with a high confidence rating. The experiments with the discounted-reward approach revealed that some sub-models converged similarly well to the fuzzy classifier, while others achieved only mediocre results, with late and unstable convergence. The authors conclude that the approach is suitable for real-time strategy games and, with further research, can be made reliable for each sub-model.
• 2009: Design of Autonomous Systems: Learning Adaptive Play in a Real-Time Strategy Game: proposes a method for an adaptive game player that models the opponents' strategy and adjusts its own strategy accordingly. The authors use the Battlecode framework [1] and develop three basic strategies in accordance with three standard strategies in real-time strategy games. They explain how all agents organize their behavior using roles and message passing, then propose a system for adaptive play and show that an adaptive player outperforms a static one. (The two-level classification idea is sketched after this list.)
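
A minimal sketch of the two-level (hierarchical) classification idea shared by these papers: a top level labels the general play style from coarse observations, and a per-style bottom level refines it into a concrete strategy. The thresholds and features are invented stand-ins for the papers' fuzzy and discounted-reward classifiers.

```python
def classify_style(obs):
    """Top level: aggressive vs. economic, from the army/worker balance."""
    return "aggressive" if obs["army"] > obs["workers"] else "economic"

def classify_strategy(style, obs):
    """Bottom level: a separate, simpler classifier per play style."""
    if style == "aggressive":
        return "rush" if obs["game_time"] < 300 else "timing_attack"
    return "fast_expand" if obs["bases"] > 1 else "tech"

# one snapshot of scouted information (all values invented)
obs = {"army": 12, "workers": 8, "game_time": 240, "bases": 1}
style = classify_style(obs)
print(style, classify_strategy(style, obs))  # aggressive rush
```

Splitting the decision this way keeps each classifier small and lets the bottom level be retrained or swapped per style, which is the practical appeal of the hierarchical formulation.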
