Learning Windows 8 Game Development

Down the graphics pipeline

Let's step aside for a moment to understand the sequence of stages the graphics card goes through when it renders something to the screen, collectively known as the graphics pipeline.

To do this, we need to understand what exactly is drawn to the screen, even when we're just working in 2D. The base component of anything drawn onto the screen is the vertex. A vertex is a point in space that, when combined with two other points, forms a solid triangle that can be rendered. By combining multiple triangles, you can create anything from a simple square to a detailed 3D model made of thousands of triangles.
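As a concrete sketch, here is one way a single triangle might be described in C++ using DirectXMath types. The Vertex structure here is hypothetical, invented just for illustration (DirectXTK ships a similar DirectX::VertexPositionColor type):

    #include <DirectXMath.h>

    // A hypothetical vertex layout: a position in 3D space plus a color.
    struct Vertex
    {
        DirectX::XMFLOAT3 position;
        DirectX::XMFLOAT4 color;
    };

    // Three vertices are enough to form one solid triangle.
    const Vertex triangle[3] =
    {
        { DirectX::XMFLOAT3( 0.0f,  0.5f, 0.0f), DirectX::XMFLOAT4(1, 0, 0, 1) }, // top
        { DirectX::XMFLOAT3( 0.5f, -0.5f, 0.0f), DirectX::XMFLOAT4(0, 1, 0, 1) }, // bottom right
        { DirectX::XMFLOAT3(-0.5f, -0.5f, 0.0f), DirectX::XMFLOAT4(0, 0, 1, 1) }, // bottom left
    };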

Often, triangles will share the same vertices, so we need a way to reduce repetition and memory use by defining each vertex only once. But then how do we indicate which triangles use that vertex? This is where the index comes in. Just as with arrays, an index refers to a vertex by its position in the vertex buffer, using far less memory than a full copy of the vertex. This way you can store a lot of data per vertex, and still reference that vertex multiple times by defining the triangles with an array of indices.
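For example, a square drawn as two triangles would need six vertices if each were duplicated; with indices it needs only four. A sketch, reusing the hypothetical Vertex type from above:

    #include <cstdint>

    // Four unique corners of a square, each defined only once.
    const Vertex quad[4] =
    {
        { DirectX::XMFLOAT3(-0.5f,  0.5f, 0.0f), DirectX::XMFLOAT4(1, 1, 1, 1) }, // 0: top left
        { DirectX::XMFLOAT3( 0.5f,  0.5f, 0.0f), DirectX::XMFLOAT4(1, 1, 1, 1) }, // 1: top right
        { DirectX::XMFLOAT3( 0.5f, -0.5f, 0.0f), DirectX::XMFLOAT4(1, 1, 1, 1) }, // 2: bottom right
        { DirectX::XMFLOAT3(-0.5f, -0.5f, 0.0f), DirectX::XMFLOAT4(1, 1, 1, 1) }, // 3: bottom left
    };

    // Six indices describe the two triangles; vertices 0 and 2 are each
    // referenced twice instead of being stored twice.
    const uint16_t quadIndices[6] =
    {
        0, 1, 2, // first triangle
        0, 2, 3, // second triangle
    };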

How does all of this get mapped and assembled so that we render the right thing? This is where the Input Assembler (IA) comes into play. The IA takes the vertex and index buffers (arrays) and builds the list of triangles that need to be drawn. These are then passed to the vertex shader.
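In Direct3D 11, that wiring looks roughly like the following sketch, assuming the two ID3D11Buffer objects were created earlier from the arrays above with ID3D11Device::CreateBuffer:

    #include <d3d11.h>

    // A sketch of pointing the Input Assembler at our buffers.
    void SetupInputAssembler(ID3D11DeviceContext* context,
                             ID3D11Buffer* vertexBuffer,
                             ID3D11Buffer* indexBuffer)
    {
        UINT stride = sizeof(Vertex);
        UINT offset = 0;

        // Bind the vertex and index buffers to the IA stage.
        context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
        context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);

        // Tell the IA how to assemble those vertices into primitives.
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    }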

The vertex shader is a piece of code that we use to map the 3D position of a vertex to its 2D position on the screen. We'll be using DirectXTK for this so we can avoid worrying about vertex shaders for now; however, if you want to start adding awesome visual features and taking full advantage of the hardware, you will want to learn about these shaders. Vertex shaders are written in a language called the High Level Shader Language (HLSL).
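As a rough sketch of what letting DirectXTK handle this can look like, its BasicEffect class provides a ready-made vertex (and pixel) shader; the view and projection matrices here are assumed to come from your camera code:

    #include <memory>
    #include <d3d11.h>
    #include <DirectXMath.h>
    #include "Effects.h" // from DirectXTK

    // A sketch, assuming a valid device and context already exist.
    void DrawWithBasicEffect(ID3D11Device* device, ID3D11DeviceContext* context,
                             const DirectX::XMMATRIX& view,
                             const DirectX::XMMATRIX& projection)
    {
        auto effect = std::make_unique<DirectX::BasicEffect>(device);
        effect->SetVertexColorEnabled(true); // use the per-vertex colors defined above

        // These matrices drive the 3D-to-2D mapping the vertex shader performs.
        effect->SetWorld(DirectX::XMMatrixIdentity());
        effect->SetView(view);
        effect->SetProjection(projection);

        effect->Apply(context); // binds BasicEffect's vertex and pixel shaders
    }

In real code you would create the effect once at load time and also build an input layout from its shader bytecode; it is shown inline here only to keep the sketch short.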

Once we have a transformed vertex, we can take advantage of some newer Direct3D features: the tessellation stages, introduced in Direct3D 11.0, and the geometry shader, introduced in Direct3D 10.0.

Tessellation refers to the act of increasing geometric detail by adding more triangles. This is done through two programmable shaders, the hull shader and the domain shader. Tessellation is an advanced topic that is too big for this book; if you're interested, I encourage you to take a look at the many resources available online.

Geometry shaders were introduced in Direct3D 10.0, and provide a way to generate geometry entirely on the GPU. They allow for some interesting tricks; however, they're also well outside the scope of this book, so we'll skip over them for now. The hull, domain, and geometry shaders are all optional features: geometry shaders require Direct3D 10.0-class hardware, and the hull and domain shaders require Direct3D 11.0-class hardware.

Once the vertices have been processed, they are sent to the rasterizer, which is responsible for interpolating the vertex attributes across each triangle and finding the region of pixels it covers on the screen. This happens automatically, and its output becomes the input to the pixel shader.
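Although the rasterizer isn't programmable, it is configurable. A minimal sketch, assuming a valid device and device context:

    #include <d3d11.h>

    // A sketch of configuring the rasterizer stage.
    void SetupRasterizer(ID3D11Device* device, ID3D11DeviceContext* context)
    {
        D3D11_RASTERIZER_DESC desc = {};
        desc.FillMode = D3D11_FILL_SOLID; // fill triangles rather than drawing wireframe
        desc.CullMode = D3D11_CULL_BACK;  // skip triangles facing away from the camera
        desc.DepthClipEnable = TRUE;

        ID3D11RasterizerState* state = nullptr;
        if (SUCCEEDED(device->CreateRasterizerState(&desc, &state)))
        {
            context->RSSetState(state);
            state->Release(); // the context holds its own reference
        }
    }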

The pixel shader is where the final color of each pixel is determined. Here, we can apply lighting effects to generate the photorealistic scenes we see in modern games, or anything else that we want. In the pixel shader stage, the developer has full control over the final look, which can result in some crazy visual effects, as the many demoscene projects created each year demonstrate.
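When you do move past DirectXTK's built-in effects, binding a pixel shader of your own looks roughly like this sketch; psBytecode is assumed to hold HLSL bytecode compiled ahead of time (for example, loaded from a .cso file built by Visual Studio):

    #include <d3d11.h>
    #include <vector>
    #include <cstdint>

    // A sketch of binding a hand-written pixel shader.
    void BindPixelShader(ID3D11Device* device, ID3D11DeviceContext* context,
                         const std::vector<uint8_t>& psBytecode)
    {
        ID3D11PixelShader* pixelShader = nullptr;
        if (SUCCEEDED(device->CreatePixelShader(psBytecode.data(), psBytecode.size(),
                                                nullptr, &pixelShader)))
        {
            // From now on, this shader decides the final color of every pixel drawn.
            context->PSSetShader(pixelShader, nullptr, 0);
            pixelShader->Release(); // the context holds its own reference
        }
    }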

Note

The demoscene is a community of developers who create visual experiences, often set to music, packed into incredibly small executables. It's easy to find demo competitions that cap the executable size at 64 KB or even 4 KB.

Finally, we end up with a lot of pixels, some of them competing for the same spot on the screen. This is where the Output Merger combines everything down into a flat 2D texture that can be displayed. The Output Merger handles resolving those contending pixels down to a single color each and writing them out to our render target and depth buffer.
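In Direct3D 11, pointing the Output Merger at those targets is a single call; a sketch, assuming the render target and depth-stencil views were created during swap chain setup:

    #include <d3d11.h>

    // A sketch of the Output Merger setup.
    void SetupOutputMerger(ID3D11DeviceContext* context,
                           ID3D11RenderTargetView* renderTarget,
                           ID3D11DepthStencilView* depthBuffer)
    {
        // Bind the texture that receives the final colors, plus the depth
        // buffer the Output Merger uses to decide which pixel wins each spot.
        context->OMSetRenderTargets(1, &renderTarget, depthBuffer);
    }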