9.2. Other notable AI Aspects / HLSL Intro
Common board game AI approaches and Strategic AI
Tactical and Strategic AI
Brief introduction to tactical and strategic AI
Tactical and Strategic AI
Tactical and strategic AI encompasses a wide range of algorithms that try to:
• Derive a tactical assessment of a situation, possibly using incomplete or probabilistic information
• Use tactical assessments to make decisions and to coordinate the behaviour of multiple characters
Aside: Not every genre of game needs tactical and/or strategic forms of AI.
Waypoint tactics
A waypoint is simply a position in the game world. As with path-finding waypoints (which hold path-finding information, e.g. terrain cost), tactical waypoints hold tactical information, e.g.:
• Cover points
• Reconnaissance/sniper locations
• Shadowed locations
• Power-up spawn points
• Exposed locations
Waypoint tactics
Tactical locations can either be set by the designer or derived from game data or analytical algorithms. Tactical nodes can be combined with path-finding nodes to provide tactically aware path-finding, as sketched below.
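As an illustration, a minimal C# sketch of how tactical information might be attached to waypoints. The TacticType values, TacticalWaypoint class and Nearest query are hypothetical names for this sketch, not part of XNA or any particular engine:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;

// Hypothetical enumeration of the tactical properties listed above.
public enum TacticType { Cover, SniperSpot, Shadowed, PowerUpSpawn, Exposed }

// A waypoint carrying both path-finding and tactical information.
public class TacticalWaypoint
{
    public Vector2 Position;     // position in the game world
    public float PathCost;       // path-finding cost for this node
    public HashSet<TacticType> Tactics = new HashSet<TacticType>();

    public bool Offers(TacticType tactic) { return Tactics.Contains(tactic); }

    // Example query: the nearest waypoint offering a given tactic (or null if none exists).
    public static TacticalWaypoint Nearest(IEnumerable<TacticalWaypoint> waypoints,
                                           Vector2 from, TacticType tactic)
    {
        return waypoints
            .Where(w => w.Offers(tactic))
            .OrderBy(w => Vector2.DistanceSquared(w.Position, from))
            .FirstOrDefault();
    }
}

Storing path cost and tactical tags on the same node is what allows a single graph to drive both path-finding and tactically aware movement.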
Influence maps
Influence mapping is widely used in strategy games to map the influence/strength of each side. The game world is split into chunks (a tile-based representation is common). Each chunk is assigned an influence score based on the combined balance of influence 'emitted' by the game objects that can affect that chunk. The influence map can be used to identify points of weakness and strength and, from this, to drive strategic goal selection.
Influence maps
[Figure: example tile-based influence map, with each cell showing the combined influence value exerted upon it.]
The influence exerted on a particular area can depend upon the proximity of game objects (e.g. mobile units or stationary bases), the type of surrounding terrain (e.g. a mountain range may 'prevent' influence passing), side-specific factors (e.g. current financial or happiness state), etc. In most games, the influence emitted by a game object decays over distance (e.g. using a linear drop-off alongside a defined maximum influence range), as sketched below.
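A minimal sketch of the propagation step in C#, assuming a square tile grid, a linear drop-off and a hypothetical Unit type holding tile position, strength, maximum influence radius and side. The names are illustrative only; terrain blocking and side-specific modifiers would be layered on top of this:

using System;
using System.Collections.Generic;

public class Unit
{
    public int TileX, TileY;   // tile the unit occupies
    public float Strength;     // influence emitted at the unit's own tile
    public float MaxRadius;    // tiles beyond this receive no influence
    public int Side;           // +1 or -1, so opposing sides cancel out
}

public static class InfluenceMap
{
    // Returns a per-tile score: positive = side +1 dominates, negative = side -1 dominates.
    public static float[,] Build(int width, int height, IEnumerable<Unit> units)
    {
        var map = new float[width, height];
        foreach (var unit in units)
        {
            for (int x = 0; x < width; x++)
            {
                for (int y = 0; y < height; y++)
                {
                    float distance = (float)Math.Sqrt(
                        (x - unit.TileX) * (x - unit.TileX) +
                        (y - unit.TileY) * (y - unit.TileY));
                    if (distance > unit.MaxRadius) continue;

                    // Linear drop-off: full strength at the unit's tile, zero at MaxRadius.
                    float influence = unit.Strength * (1.0f - distance / unit.MaxRadius);
                    map[x, y] += unit.Side * influence;
                }
            }
        }
        return map;
    }
}

Strategic code can then scan the resulting map for strongly negative regions (enemy strongholds) or near-zero contested regions when selecting goals.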
Jumping
Overview of approaches enabling jumping in games
Jumping
Unlike other forms of steering behaviour, jumps are inherently risky (i.e. they can fail, possibly with 'fatal' consequences). To jump, the character must be moving at the right speed and in the right direction, and must ensure the jump is executed at the right time. Also, steering behaviours typically re-evaluate their decisions several times per second, correcting small mistakes; a jump action is a one-off, single-event, failure-sensitive decision.
Jumping (jump points)
The simplest approach is to place jump points into the game level. If characters can move at different speeds, then the jump point also needs a minimum jump speed. The character can then seek towards the jump point, matching the specified speed, and jump whenever it is on the jump point, as sketched below.
[Figure: jump point annotated with its minimum jump velocity.]
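A minimal sketch of this in C#, assuming XNA's Vector2 and hypothetical JumpPoint/JumpSteering types. The 0.9 direction threshold and the trigger logic are illustrative choices, not a prescribed method:

using Microsoft.Xna.Framework;

// Hypothetical jump point data, matching the description above.
public class JumpPoint
{
    public Vector2 Position;       // where the jump must be taken from
    public float Radius;           // how close the character must be to count as "on" the point
    public float MinJumpSpeed;     // minimum speed needed for the jump to succeed
    public Vector2 JumpDirection;  // normalised direction the jump must be taken in
}

public static class JumpSteering
{
    // Seek towards the jump point and report whether the jump should be triggered this frame.
    public static bool Update(ref Vector2 position, ref Vector2 velocity,
                              JumpPoint jump, float dt, float maxAccel)
    {
        Vector2 toJump = jump.Position - position;
        float distance = toJump.Length();

        if (distance > 0.001f)
        {
            // Seek: aim to arrive at the jump point at (at least) the minimum jump speed.
            Vector2 desiredVelocity = (toJump / distance) * jump.MinJumpSpeed;
            Vector2 steering = desiredVelocity - velocity;
            if (steering.Length() > maxAccel)
                steering = Vector2.Normalize(steering) * maxAccel;
            velocity += steering * dt;
        }
        position += velocity * dt;

        // Trigger the jump only when on the point, fast enough, and heading roughly the right way.
        bool onPoint = distance < jump.Radius;
        bool fastEnough = velocity.Length() >= jump.MinJumpSpeed;
        bool rightDirection = velocity.Length() > 0.0f &&
            Vector2.Dot(Vector2.Normalize(velocity), jump.JumpDirection) > 0.9f;
        return onPoint && fastEnough && rightDirection;
    }
}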
Jumping (difficulties)
Some forms of jump require a defined target speed or a defined direction/angle of approach to the jump point. Additionally, some jumps may carry a higher 'price' of failure (e.g. 'death' vs. a short delay to climb back up). Aside: Such information can be incorporated into the jump point, but it is difficult to test extensively.
[Figure: jumps requiring a precise jump direction or a precise jump speed.]
Jumping (landing pads)
A good approach is to pair each jump pad with a landing pad. Doing this permits the game object to determine the needed speed and direction by solving the trajectory equations, as sketched below. This approach is more flexible (different characters can differ in their movement capabilities) and is less prone to error.
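As an illustration, a minimal C# sketch of solving the trajectory, assuming a fixed vertical take-off speed and constant gravity (Y up). The names are illustrative; a fuller solution would also check that the required horizontal velocity is achievable by the character:

using System;
using Microsoft.Xna.Framework;

public static class JumpTrajectory
{
    // Given a jump pad and landing pad, a fixed vertical take-off speed and gravity,
    // returns the horizontal velocity the character must match when it jumps
    // (or null if the landing pad cannot be reached with that take-off speed).
    public static Vector2? RequiredHorizontalVelocity(
        Vector3 jumpPad, Vector3 landingPad, float verticalTakeOffSpeed, float gravity)
    {
        float dy = landingPad.Y - jumpPad.Y;   // height gained during the jump

        // Solve dy = v*t - 0.5*g*t^2 for the time of flight t; take the later (descending) root.
        float discriminant = verticalTakeOffSpeed * verticalTakeOffSpeed - 2.0f * gravity * dy;
        if (discriminant < 0.0f) return null;  // landing pad is too high to reach

        float time = (verticalTakeOffSpeed + (float)Math.Sqrt(discriminant)) / gravity;
        if (time <= 0.0f) return null;

        // Horizontal velocity needed to cover the horizontal gap in that flight time.
        return new Vector2(landingPad.X - jumpPad.X, landingPad.Z - jumpPad.Z) / time;
    }
}

Because the required velocity is computed per character, characters with different speeds or take-off strengths can share the same jump pad/landing pad pair.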
High Level Shader Language
Introduction to HLSL
A bit of history (fixed function pipelines)
Early versions of the DirectX and OpenGL APIs defined a number of fixed rendering stages. This forced all games to use the same rendering approach, with only a few parameters open to change.
Recent history (shaders)
[Figure: programmable pipeline flow: Application → Vertices → Vertex Shader → Rasterisation/Interpolation → Pixel Shader → Z-buffer test → Frame buffer → to screen.]
As GPUs increased in capability it became possible to inject small programs (called shaders), giving the application greater control. A list of vertices (points) is sent to the vertex shader. In the rasterisation stage, primitives are constructed from the output vertices and then rasterised (i.e. the on-screen pixels are determined), with vertex attributes interpolated across the pixels. The pixel shader then determines the on-screen colour of each pixel.
Shaders
Shaders are small programs that run on the GPU. Different shader languages are available.
Vertex Shader: the vertex shader can set/change rendered vertices, e.g. for object deformation, skeletal animation, particle motion, etc.
Pixel Shader: the pixel shader sets the colour of each pixel, e.g. for per-pixel lighting and texturing. It can also be used to apply effects over an entire scene, e.g. bloom, depth-of-field blur, etc.
Aside: DirectX 10 also supports geometry shaders (not supported by XNA).
HLSL (High Level Shader Language)
HLSL is a shading language developed by Microsoft for the Direct3D API. HLSL offers a number of intrinsic functions (mostly centred around branching control, math functions and texture access).
Aside: See http://msdn2.microsoft.com/en-us/library/bb509638.aspx for a complete HLSL reference.
HLSL (Data types)
HLSL supports a range of scalar data types (e.g. bool, int, half, float, double). Note: vector/matrix forms can also be defined, e.g. float3, int2x2, double4x4, etc.
HLSL also provides a sampler type (used to read, i.e. sample, textures): sampler, sampler1D, sampler2D, and sampler3D. The sampler type is defined using a number of different states, e.g. MinFilter, MagFilter, and MipFilter controlling texture filtering, and AddressU, AddressV, and AddressW controlling addressing. For example:

texture textureName;
sampler2D textureSampler = sampler_state
{
    Texture = <textureName>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};
HLSL (Semantics)
Semantics are used to map input and output data to variables. All varying input data (from the application or between rendering stages) requires a semantic tag, e.g. all outputs from the vertex shader must be semantically tagged:

float4 vertexPosition : POSITION0;

Note: in a semantic name, [n] is an optional integer that provides support for multiple data of the same type, e.g. Texture0, Texture1, Texture2.
Aside: The only valid semantic inputs to the pixel shader are Color[n] and Texture[n]. Often, custom data (i.e. data not used for texture addressing) is passed using a Texture[n] semantic.
HLSL (Functions)
HLSL permits C-like functions to be specified. A shader must define at least one vertex function (operating on vertex information) and at least one pixel function (determining pixel colours). These functions must define their inputs and outputs with semantics. For example:

float2 CalculateParallaxOffset( float3 view, float2 texCoord )
{
    view = normalize(view);
    float height = parallaxScale * (tex2D(HeightSampler, texCoord).r) + parallaxOffset;
    float2 viewOffset = view.xy * (height);
    return viewOffset;
}

Intrinsic Functions: HLSL offers a set of 'built-in' functions, mostly centred around flow control, math operations and texture access.
HLSL (Example)
The following is a simple example shader:

// Define world-view-projection matrix
float4x4 wvpMatrix : WorldViewProjection;

// Define input structure expected by the vertex shader
struct vertexShaderInput
{
    float4 vertexPosition : POSITION0;
};

// Define input structure expected by the pixel shader (and also output by the vertex shader)
struct pixelShaderInput
{
    float4 screenPosition : POSITION;
    float3 colour : COLOR0;
};

// Vertex shader function
pixelShaderInput SimpleVS(vertexShaderInput input)
{
    pixelShaderInput output;
    // Transform from model space to screen space
    output.screenPosition = mul(input.vertexPosition, wvpMatrix);
    output.colour = float3(1.0f, 1.0f, 1.0f);
    return output;
}

// Pixel shader function: output the pixel colour
float4 SimplePS(pixelShaderInput input) : COLOR0
{
    return float4(input.colour.rgb, 1.0f);
}

// Technique definition, specifying the VS and PS functions and compile targets
technique SimpleShader
{
    pass
    {
        VertexShader = compile vs_1_1 SimpleVS();
        PixelShader = compile ps_1_1 SimplePS();
    }
}
Effects in XNA
Effects in XNA are a type of game asset (alongside textures and models). The Effect class represents an effect, permitting effect parameter configuration, technique selection and rendering. An effect can be loaded and configured as shown:

// Load the effect
Effect effect;
effect = content.Load<Effect>("effectName");

// Select the desired technique
effect.CurrentTechnique = effect.Techniques["technique"];

// Define effect parameters
effect.Parameters["colour"].SetValue(Vector3.One);
effect.Parameters["tolerance"].SetValue(0.8f);

// Begin the effect and iterate over each pass
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    // Send vertex information to the effect, e.g.
    // graphicsDevice.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, ... );
    pass.End();
}
// End the effect
effect.End();

Aside: For better performance, effect.Parameters["name"] can be stored as an EffectParameter object (upon effect construction), and the SetValue(...) method called on that parameter, as sketched below. Vertex information can also be sent using other approaches.
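As an illustration of the aside above, a minimal sketch of caching an EffectParameter. The wrapper class and the "colour" parameter name are just illustrative (any parameter defined in the effect file works the same way):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class ColourEffectWrapper
{
    private readonly Effect effect;
    private readonly EffectParameter colourParameter;   // looked up once, reused every frame

    public ColourEffectWrapper(Effect effect)
    {
        this.effect = effect;
        // Perform the (comparatively slow) name lookup once, at construction time.
        colourParameter = effect.Parameters["colour"];
    }

    public void SetColour(Vector3 colour)
    {
        // Setting the value through the cached parameter avoids a per-call lookup by name.
        colourParameter.SetValue(colour);
    }
}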
Summary
Today we explored:
• A brief intro to some types of strategic/tactical AI
• An overview of how jumps can be supported within games
• HLSL and effect usage in XNA
To do:
• Complete the Question Clinic
• Consider if this material is of use within your game
• Complete the section in the project document on the Alpha hand-in
• Round up work for the alpha hand-in at the end of this week