Week 2 - Monday COMP 4290
What did we talk about last time? • C# • MonoGame
Review of MonoGame • Program creates a Game1 (or similar) object and starts it running • Game1 has: • Initialize() • LoadContent() • Update() • Draw() • It runs an update-draw loop continuously until told to exit
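A minimal sketch of that structure (slightly simplified from the MonoGame project template; details vary by version):

// Assumes using Microsoft.Xna.Framework and Microsoft.Xna.Framework.Graphics
public class Game1 : Game
{
    private GraphicsDeviceManager graphics;
    private SpriteBatch spriteBatch;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize() { base.Initialize(); }        // one-time setup; calls LoadContent()
    protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); }
    protected override void Update(GameTime gameTime) { base.Update(gameTime); }  // game logic, once per frame
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);                    // then render the frame
        base.Draw(gameTime);
    }
}

Program.cs just constructs the game and calls Run(), which starts the update-draw loop:

using (var game = new Game1())
    game.Run();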
Console game • We're used to interacting with programs from the command line (console) • MonoGame was not designed with this in mind • It has pretty easy ways to read from the keyboard, the mouse, and Xbox controllers • But you'll need a console for Project 1 so that you can tell it which file to load and what kind of manipulations to perform on it • To make Console.Write() and Console.Read() work: • Go to the Properties page for your project • Go to the Application tab • Change Output Type to Console Application • More information: http://rbwhitaker.wikidot.com/console-windows • You'll need a separate thread to read and write to the console if you don't want your game to freeze up (see the sketch below)
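One way to do that, sketched here with a background thread (the field and method names, and the command handling, are made up for illustration):

// Assumes using System, System.Threading, and System.Collections.Concurrent
private ConcurrentQueue<string> commands = new ConcurrentQueue<string>();

private void StartConsoleThread()
{
    Thread consoleThread = new Thread(() =>
    {
        while (true)
        {
            Console.Write("> ");
            string line = Console.ReadLine();   // blocks this thread only, not the game loop
            if (line == null || line == "quit")
                break;
            commands.Enqueue(line);             // Update() can dequeue and act on these
        }
    });
    consoleThread.IsBackground = true;          // thread dies when the game exits
    consoleThread.Start();
}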
Drawing a picture • To draw a picture on the screen, we need to load it first • Inside a MonoGame project, right-click the Content.mgcb file and choose Open with… • Select MonoGame Pipeline Tool • Click Add and then Existing Item… • Find an image you want on your hard drive • Make sure the Build Action is Build • The Importer should be Texture Importer - MonoGame • Create a Texture2D member variable to hold it • Assume the member variable is called cat and the content is called cat.jpg • In LoadContent(), add the line: cat = Content.Load<Texture2D>("cat"); • Note that the asset name omits the file extension
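Putting those pieces together, loading might look like this (a sketch assuming the standard LoadContent() override in Game1):

private Texture2D cat;   // member variable holding the loaded image

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    cat = Content.Load<Texture2D>("cat");   // asset name: file name without the extension
}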
Drawing a picture continued • Now the variable cat contains a loaded 2D texture • Inside the Draw() method, add the following code: • This will draw cat at location (x, y) • All sprites need to be drawn between spriteBatch.Begin() and spriteBatch.End() calls
spriteBatch.Begin();
spriteBatch.Draw(cat, new Vector2(x, y), Color.White);
spriteBatch.End();
Drawing text • Modern TrueType and OpenType fonts are vector descriptions of the shapes of characters • Vector descriptions are good for quality, but bad for speed • MonoGame allows us to take a vector-based font and turn it into a picture of characters that can be rendered as a texture • Just like everything else
Drawing text continued • Inside a MonoGame project, right-click the Content.mgcb file and choose Open with… • Select MonoGame Pipeline Tool • Right-click on Content in the tool, and select Add -> New Item… • Choose SpriteFont Description and give your new SpriteFont a name • Open the .spritefont file in a text editor like Notepad++ • By default, the font is Arial at size 12 • Edit the XML to pick the font, size, and spacing • You will need multiple SpriteFonts even for different sizes of the same font • Repeat the process to make more fonts • Note: fonts have complex licensing and distribution requirements
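For reference, the generated .spritefont file is XML, and the elements you'd edit look roughly like this (trimmed; the exact template varies by MonoGame version):

<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Arial</FontName>   <!-- system font to rasterize -->
    <Size>12</Size>              <!-- point size -->
    <Spacing>0</Spacing>         <!-- extra space between characters -->
    <Style>Regular</Style>       <!-- Regular, Bold, Italic -->
  </Asset>
</XnaContent>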
Drawing a font continued • Load the font just like texture content • Add a DrawString() call in the Draw() method:
font = Content.Load<SpriteFont>("Text");
spriteBatch.Begin();
spriteBatch.DrawString(font, "Hello, World!", new Vector2(100, 100), Color.Black);
spriteBatch.End();
Why are they called sprites? • They "float" above the background like fairies… • Multiple sprites are often stored on one texture • It's cheaper to store one big image than a lot of small ones • This is an idea borrowed from old video games that rendered characters as sprites
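Drawing one sprite from a shared texture just means passing a source rectangle; here is a sketch assuming a hypothetical sheet of 64×64 frames laid out in a row (sheet, position, and frameIndex are made-up names):

Rectangle source = new Rectangle(frameIndex * 64, 0, 64, 64);  // which 64x64 cell of the sheet to draw
spriteBatch.Begin();
spriteBatch.Draw(sheet, position, source, Color.White);
spriteBatch.End();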
Drawing sprites with rotation • It is possible to apply all kinds of 3D transformations to a sprite • A sprite can be used for billboarding or other image-based techniques in a fully 3D environment • But we can also simply rotate them using an overloaded call to Draw():
spriteBatch.Draw(texture, location, sourceRectangle, Color.White, angle, origin, 1.0f, SpriteEffects.None, 1);
Let's unpack that • texture: the Texture2D to draw • location: where to draw it • sourceRectangle: portion of the image to use (null for the whole image) • Color.White: tint (white means full brightness) • angle: rotation angle in radians • origin: origin of rotation • 1.0f: scaling factor • SpriteEffects.None: no mirroring effects • 1: layer depth (used for sprite sorting)
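For example, a sketch that spins the cat texture about its own center (position and angle are assumed variables; updating angle each frame is up to you):

Vector2 origin = new Vector2(cat.Width / 2f, cat.Height / 2f);  // rotate about the texture's center
spriteBatch.Begin();
spriteBatch.Draw(cat, position, null, Color.White, angle, origin, 1.0f, SpriteEffects.None, 0f);
spriteBatch.End();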
Graphics rendering pipeline • For reasons of API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline • This pipeline contains three conceptual stages: • Application • Geometry • Rasterizer
Geometry stage • The output of the Application Stage is polygons • The Geometry Stage processes these polygons using the following pipeline: • Model and View Transform • Vertex Shading • Projection • Clipping • Screen Mapping
Model Transform • Each 3D model has its own coordinate system called model space • When combining all the models in a scene together, the models must be converted from model space to world space • After that, we still have to account for the position of the camera
Model and View Transform • We transform the models into camera space or eye space with a view transform • Then, the camera will sit at (0,0,0), looking into negative z • The z-axis comes out of the screen in the book's examples and in MonoGame (but not in older DirectX)
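In MonoGame, these transforms are just matrices built with the framework's helpers; a sketch (the specific scale, rotation, and camera values are made up):

Matrix world = Matrix.CreateScale(2f)                       // model -> world:
             * Matrix.CreateRotationY(MathHelper.PiOver4)   // scale, then rotate,
             * Matrix.CreateTranslation(10f, 0f, -5f);      // then translate
Matrix view = Matrix.CreateLookAt(
    new Vector3(0f, 5f, 20f),   // camera position in world space
    Vector3.Zero,               // point the camera looks at
    Vector3.Up);                // which way is up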
Vertex Shading • Figuring out the effect of light on a material is called shading • This involves computing a (sometimes complex) shading equation at different points on an object • Typically, information is computed on a per-vertex basis and may include: • Location • Normals • Colors
Projection • Projection transforms the view volume into a standardized unit cube • Vertices then have a 2D location and a z-value • There are two common forms of projection: • Orthographic: Parallel lines stay parallel, objects do not get smaller in the distance • Perspective: The farther away an object is, the smaller it appears
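MonoGame has helpers for building both kinds of projection matrix; a sketch with assumed field-of-view, aspect ratio, and near/far plane values:

Matrix perspective = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,   // vertical field of view, in radians
    16f / 9f,             // aspect ratio (width / height)
    0.1f,                 // near plane
    1000f);               // far plane
Matrix orthographic = Matrix.CreateOrthographic(16f, 9f, 0.1f, 1000f);  // width, height, near, far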
Clipping • Clipping processes the polygons based on their location relative to the view volume • A polygon completely inside the view volume is unchanged • A polygon completely outside the view volume is ignored (not rendered) • A polygon partially inside is clipped • New vertices on the boundary of the volume are created • Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time
Screen mapping • Screen mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates • A few oddities: • DirectX has historically used unusual pixel coordinate conventions, where a pixel's location refers to its center • DirectX conforms to the Windows standard of pixel (0,0) being in the upper left of the screen • OpenGL conforms to the Cartesian system, with pixel (0,0) in the lower left of the screen
Next time… • Rendering pipeline • Rasterizer stage
Reminders • Keep reading Chapter 2 • Want a Williams-Sonoma internship? • Visit http://wsisupplychain.weebly.com/ • Interested in coaching 7-18 year old kids in programming? • Consider working at theCoderSchool • For more information: • Visit https://www.thecoderschool.com/locations/westerville/ • Contact Kevin Choo at kevin@thecoderschool.com • Ask me!