Rendering primitives
Now that we have our rendering framework ready, we can finally work on the more interesting stuff—actually rendering something!
All rendered objects, in their simplest form, are made up of one or more primitives: points, lines, or triangles, which in turn are made up of one or more vertices. In this recipe, we will render the following primitives:
- A set of colored arrows representing the x, y, and z axes (red, green, and blue) using lines
- A triangle using a triangle list
- A quad (made up of two triangles)
We will also implement our WVP matrix and see how multisampling affects the rendered image. The final result is shown in the following figure:
Getting ready
We'll start by creating a new Windows Form Application project named Ch02_01RenderingPrimitives in the D3DRendering.sln solution:

- Add the SharpDX.dll, SharpDX.DXGI.dll, and SharpDX.Direct3D11.dll references like we did in the previous recipes.
- Next, we will add a reference to .\External\bin\SharpDX.D3DCompiler.dll for dealing with our shader code.
- An important step before we get started is to ensure that we are copying the Direct3D d3dcompiler_46.dll DLL to the build directory, as this will let us compile our shaders at runtime. To do this, open the project properties, select Build Events, and add the following code to the Post-build event command line:

```
copy "$(SolutionDir)\External\Bin\Redist\D3D\x86\d3d*.dll" "$(TargetDir)"
```

- Make sure that you have the Common rendering framework project (that we used in the Using the sample rendering framework recipe earlier in this chapter) added to your D3DRendering.sln solution, and add a reference to it in our new project.
How to do it…
For this recipe, we will first create an HLSL shader to render our primitive shapes. We will then create a D3DApp class that compiles our shaders and creates instances of our line, triangle, and quad renderers.
- The first thing we will do is create our HLSL shader file. Do this by adding a new Text File to the project and calling it Simple.hlsl. Add the following code to the shader file:

```hlsl
// Constant buffer to be updated by application per frame
cbuffer PerObject : register(b0)
{
    // WorldViewProjection matrix
    float4x4 WorldViewProj;
};

// Vertex Shader input structure with position and color
struct VertexShaderInput
{
    float4 Position : SV_Position;
    float4 Color : COLOR;
};

// Vertex Shader output structure consisting of the
// transformed position and original color
// This is also the pixel shader input
struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 Color : COLOR;
};

// Vertex shader main function
VertexShaderOutput VSMain(VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;

    // Transform the position from object space to homogeneous
    // projection space
    output.Position = mul(input.Position, WorldViewProj);

    // Pass through the color of the vertex
    output.Color = input.Color;

    return output;
}

// A simple Pixel Shader that simply passes through the
// interpolated color
float4 PSMain(VertexShaderOutput input) : SV_Target
{
    return input.Color;
}
```
Tip
By default, Visual Studio will create the file using the UTF-8 with signature encoding. The Direct3D 11 shader compiler requires ANSI encoding. To change the encoding in Visual Studio, navigate to FILE | Save Simple.hlsl As... from the menu and select the Western European (Windows) - Codepage 1252 encoding.
Select Yes when you are asked if you want to overwrite the file.
- Select the shader file in the Solution Explorer and select Copy if newer for the Copy to Output Directory setting within the Properties window.
- In our project, let's add a new class called D3DApp, descending from D3DApplicationDesktop (note the additional using directives):

```csharp
using SharpDX;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using SharpDX.D3DCompiler;
using Common;
// Resolve the name clash with System.Buffer
using Buffer = SharpDX.Direct3D11.Buffer;

public class D3DApp : D3DApplicationDesktop
{
    public D3DApp(System.Windows.Forms.Form window)
        : base(window)
    { }
    ...
}
```
- We will include the following private member fields:

```csharp
// The vertex shader
ShaderBytecode vertexShaderBytecode;
VertexShader vertexShader;

// The pixel shader
ShaderBytecode pixelShaderBytecode;
PixelShader pixelShader;

// The vertex layout for the IA
InputLayout vertexLayout;

// A buffer that will be used to update the constant buffer
// used by the vertex shader. This contains our
// worldViewProjection matrix
Buffer worldViewProjectionBuffer;

// Our depth stencil state
DepthStencilState depthStencilState;
```
- Next, we will implement our CreateDeviceDependentResources method override as described in the Creating the device dependent resources recipe. Within this method, we will begin by calling the base implementation, releasing existing references, and retrieving our Direct3D device and immediate context:

```csharp
base.CreateDeviceDependentResources(deviceManager);

// Release all resources
RemoveAndDispose(ref vertexShader);
RemoveAndDispose(ref vertexShaderBytecode);
RemoveAndDispose(ref pixelShader);
RemoveAndDispose(ref pixelShaderBytecode);
RemoveAndDispose(ref vertexLayout);
RemoveAndDispose(ref worldViewProjectionBuffer);
RemoveAndDispose(ref depthStencilState);

// Get a reference to the Device1 instance and context
var device = deviceManager.Direct3DDevice;
var context = deviceManager.Direct3DContext;
```
- Next, we will compile our HLSL source code into a vertex and pixel shader, enabling the debug flag if we are using the Debug build configuration:

```csharp
ShaderFlags shaderFlags = ShaderFlags.None;
#if DEBUG
shaderFlags = ShaderFlags.Debug;
#endif

// Compile and create the vertex shader
vertexShaderBytecode = ToDispose(ShaderBytecode.CompileFromFile(
    "Simple.hlsl", "VSMain", "vs_5_0", shaderFlags));
vertexShader = ToDispose(new VertexShader(device, vertexShaderBytecode));

// Compile and create the pixel shader
pixelShaderBytecode = ToDispose(ShaderBytecode.CompileFromFile(
    "Simple.hlsl", "PSMain", "ps_5_0", shaderFlags));
pixelShader = ToDispose(new PixelShader(device, pixelShaderBytecode));
```
- Next, initialize a vertex layout to match the structure defined in our HLSL vertex shader:

```csharp
// Layout from VertexShader input signature
vertexLayout = ToDispose(new InputLayout(device,
    vertexShaderBytecode.GetPart(ShaderBytecodePart.InputSignatureBlob),
    new[]
    {
        // input semantic SV_Position = vertex coord in object space
        new InputElement("SV_Position", 0, Format.R32G32B32A32_Float, 0, 0),
        // input semantic COLOR = vertex color
        new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)
    }));
```
- Now, we will create a new Buffer used to populate the WVP matrix constant buffer defined within our HLSL code:

```csharp
// Create the buffer that will store our WVP matrix
worldViewProjectionBuffer = ToDispose(new SharpDX.Direct3D11.Buffer(device,
    Utilities.SizeOf<Matrix>(),
    ResourceUsage.Default,
    BindFlags.ConstantBuffer,
    CpuAccessFlags.None,
    ResourceOptionFlags.None,
    0));
```
- Create the depth stencil state to control how the OM stage will handle depth:

```csharp
// Configure the OM to discard pixels that are
// further than the current pixel in the depth buffer.
depthStencilState = ToDispose(new DepthStencilState(device,
    new DepthStencilStateDescription()
    {
        IsDepthEnabled = true, // enable depth?
        DepthComparison = Comparison.Less,
        DepthWriteMask = SharpDX.Direct3D11.DepthWriteMask.All,
        IsStencilEnabled = false, // enable stencil?
        StencilReadMask = 0xff,   // 0xff (no mask)
        StencilWriteMask = 0xff,  // 0xff (no mask)
        // Configure FrontFace depth/stencil operations
        FrontFace = new DepthStencilOperationDescription()
        {
            Comparison = Comparison.Always,
            PassOperation = StencilOperation.Keep,
            FailOperation = StencilOperation.Keep,
            DepthFailOperation = StencilOperation.Increment
        },
        // Configure BackFace depth/stencil operations
        BackFace = new DepthStencilOperationDescription()
        {
            Comparison = Comparison.Always,
            PassOperation = StencilOperation.Keep,
            FailOperation = StencilOperation.Keep,
            DepthFailOperation = StencilOperation.Decrement
        },
    }));
```
- Lastly, we need to assign our input layout, constant buffer, vertex and pixel shaders, and the depth stencil state to the appropriate graphics pipeline stages:

```csharp
// Tell the IA what the vertices will look like
context.InputAssembler.InputLayout = vertexLayout;

// Bind the constant buffer to the vertex shader stage
context.VertexShader.SetConstantBuffer(0, worldViewProjectionBuffer);

// Set the vertex shader to run
context.VertexShader.Set(vertexShader);

// Set the pixel shader to run
context.PixelShader.Set(pixelShader);

// Set our depth stencil state
context.OutputMerger.DepthStencilState = depthStencilState;
```
Now that the resources have been initialized, we can implement the D3DApplicationBase.Run method as described in the Using the sample rendering framework recipe. Here, we will host our rendering loop, initialize the renderers, and call their Render methods.

- First, we will initialize the instances of our renderers (the implementation of these classes will follow shortly):

```csharp
// Create and initialize the axis lines renderer
var axisLines = ToDispose(new AxisLinesRenderer());
axisLines.Initialize(this);

// Create and initialize the triangle renderer
var triangle = ToDispose(new TriangleRenderer());
triangle.Initialize(this);

// Create and initialize the quad renderer
var quad = ToDispose(new QuadRenderer());
quad.Initialize(this);
```
- Next, we will prepare our world, view, and projection matrices. These matrices are multiplied, and the result is used to update the WVP constant buffer within the render loop to perform vertex transformations within the vertex shader:

```csharp
// Initialize the world matrix
var worldMatrix = Matrix.Identity;

// Set the camera position slightly to the right (x), above (y)
// and behind (-z)
var cameraPosition = new Vector3(1, 1, -2);
var cameraTarget = Vector3.Zero; // Looking at origin 0,0,0
var cameraUp = Vector3.UnitY;    // Y+ is Up

// Create the view matrix from our camera pos, target and up
var viewMatrix = Matrix.LookAtLH(cameraPosition, cameraTarget, cameraUp);

// Create the projection matrix
// Field of View 60 degrees = Pi/3 radians
// Aspect ratio (based on window size), Near clip, Far clip
var projectionMatrix = Matrix.PerspectiveFovLH((float)Math.PI / 3f,
    Width / (float)Height, 0.5f, 100f);

// Maintain the correct aspect ratio on resize
Window.Resize += (s, e) =>
{
    projectionMatrix = Matrix.PerspectiveFovLH((float)Math.PI / 3f,
        Width / (float)Height, 0.5f, 100f);
};
```
- We can now implement our render loop. This is done the same way as in the previous chapter, that is, by a call to RenderLoop.Run(Window, () => { ... }). After retrieving our device context, we first clear the depth stencil view and then clear the render target:

```csharp
// Clear depth stencil view
context.ClearDepthStencilView(DepthStencilView,
    DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

// Clear render target view
context.ClearRenderTargetView(RenderTargetView, Color.White);
```
- Next, we will multiply the view and projection matrices, and then create our WVP matrix. This matrix is then assigned to our constant buffer resource:

```csharp
// Create viewProjection matrix
var viewProjection = Matrix.Multiply(viewMatrix, projectionMatrix);

// Create WorldViewProjection matrix
var worldViewProjection = worldMatrix * viewProjection;

// HLSL defaults to "column-major" order matrices so
// transpose first (SharpDX uses row-major matrices).
worldViewProjection.Transpose();

// Write the worldViewProjection to the constant buffer
context.UpdateSubresource(ref worldViewProjection, worldViewProjectionBuffer);
```
- We can now call each of our renderers' Render methods and present the final result:

```csharp
// Render the primitives
axisLines.Render();
triangle.Render();
quad.Render();

// Render FPS
fps.Render();

// Render instructions + position changes
textRenderer.Render();

// Present the frame
Present();
```
- This completes our D3DApp class. Open Program.cs, and replace the Main() method with the following code so that we are now utilizing our D3DApp class:

```csharp
static void Main()
{
#if DEBUG
    // Enable object tracking
    SharpDX.Configuration.EnableObjectTracking = true;
#endif
    // Create the form to render to
    var form = new Form1();
    form.Text = "D3DRendering - Primitives";
    form.ClientSize = new System.Drawing.Size(1024, 768);
    form.Show();

    // Create and initialize the new D3D application,
    // then run the application.
    using (D3DApp app = new D3DApp(form))
    {
        // Only render frames at the maximum rate the
        // display device can handle.
        app.VSync = true;

        // Initialize (create Direct3D device etc.)
        app.Initialize();

        // Run the application
        app.Run();
    }
}
```
- Let's first add stubs for the renderer classes so that we can test whether our project is compiling correctly. We will add these in the following files: AxisLinesRenderer.cs, TriangleRenderer.cs, and QuadRenderer.cs. Go ahead and create these empty classes now and descend them from Common.RendererBase, as demonstrated in the Creating a Direct3D renderer class recipe.
- At this point, we should be able to compile and run (F5) the project. A blank form will appear with the frames per second displayed in the top-left corner.
- We will complete the axis-lines renderer first. Begin by opening the AxisLinesRenderer.cs file and adding the following private member fields:

```csharp
// The vertex buffer for the axis lines
Buffer axisLinesVertices;

// The binding structure of the axis lines vertex buffer
VertexBufferBinding axisLinesBinding;
```
- Now we will add the following resource initialization code to the CreateDeviceDependentResources method. This consists of first ensuring that the resources have been released via RemoveAndDispose and then creating a local reference to the device:

```csharp
// Ensure that, if already set, the device resources
// are correctly disposed of before recreating
RemoveAndDispose(ref axisLinesVertices);

// Retrieve our SharpDX.Direct3D11.Device1 instance
var device = this.DeviceManager.Direct3DDevice;
```
- Next, we create our vertex buffer and binding for use in the IA stage:

```csharp
// Create xyz-axis arrows
// X is Red, Y is Green, Z is Blue
// The arrows point along the + for each axis
axisLinesVertices = ToDispose(Buffer.Create(device, BindFlags.VertexBuffer, new[]
{
    /* Vertex Position                 Vertex Color */
    new Vector4(-1f, 0f, 0f, 1f),      (Vector4)Color.Red,  // - x-axis
    new Vector4(1f, 0f, 0f, 1f),       (Vector4)Color.Red,  // + x-axis
    new Vector4(0.9f, -0.05f, 0f, 1f), (Vector4)Color.Red,  // head start
    new Vector4(1f, 0f, 0f, 1f),       (Vector4)Color.Red,
    new Vector4(0.9f, 0.05f, 0f, 1f),  (Vector4)Color.Red,
    new Vector4(1f, 0f, 0f, 1f),       (Vector4)Color.Red,  // head end

    new Vector4(0f, -1f, 0f, 1f),      (Vector4)Color.Lime, // - y-axis
    new Vector4(0f, 1f, 0f, 1f),       (Vector4)Color.Lime, // + y-axis
    new Vector4(-0.05f, 0.9f, 0f, 1f), (Vector4)Color.Lime, // head start
    new Vector4(0f, 1f, 0f, 1f),       (Vector4)Color.Lime,
    new Vector4(0.05f, 0.9f, 0f, 1f),  (Vector4)Color.Lime,
    new Vector4(0f, 1f, 0f, 1f),       (Vector4)Color.Lime, // head end

    new Vector4(0f, 0f, -1f, 1f),      (Vector4)Color.Blue, // - z-axis
    new Vector4(0f, 0f, 1f, 1f),       (Vector4)Color.Blue, // + z-axis
    new Vector4(0f, -0.05f, 0.9f, 1f), (Vector4)Color.Blue, // head start
    new Vector4(0f, 0f, 1f, 1f),       (Vector4)Color.Blue,
    new Vector4(0f, 0.05f, 0.9f, 1f),  (Vector4)Color.Blue,
    new Vector4(0f, 0f, 1f, 1f),       (Vector4)Color.Blue, // head end
}));
axisLinesBinding = new VertexBufferBinding(axisLinesVertices,
    Utilities.SizeOf<Vector4>() * 2, 0);
```
- The axis lines drawing logic is made up of the following code, which belongs in DoRender. This sets the topology to be used, passes the vertex buffer to the IA stage, and requests the pipeline to draw the 18 vertices we just defined:

```csharp
// Get the context reference
var context = this.DeviceManager.Direct3DContext;

// Render the axis lines
// Tell the IA we are using lines
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.LineList;

// Pass in the line vertices
context.InputAssembler.SetVertexBuffers(0, axisLinesBinding);

// Draw the 18 vertices of our xyz-axis arrows
context.Draw(18, 0);
```
- Next, we will implement the triangle renderer. Open the TriangleRenderer class and add the following private member fields:

```csharp
// The triangle vertex buffer
Buffer triangleVertices;

// The vertex buffer binding structure for the triangle
VertexBufferBinding triangleBinding;
```
- We will initialize the device-dependent resources with this code. As with the axis lines renderer, here we also need to create a vertex buffer and binding:

```csharp
RemoveAndDispose(ref triangleVertices);

// Retrieve our SharpDX.Direct3D11.Device1 instance
var device = this.DeviceManager.Direct3DDevice;

// Create a triangle
triangleVertices = ToDispose(Buffer.Create(device, BindFlags.VertexBuffer, new[]
{
    /* Vertex Position                    Vertex Color */
    new Vector4(0.0f, 0.0f, 0.5f, 1.0f),  new Vector4(0.0f, 0.0f, 1.0f, 1.0f), // Base-right
    new Vector4(-0.5f, 0.0f, 0.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f), // Base-left
    new Vector4(-0.25f, 1f, 0.25f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f), // Apex
}));
triangleBinding = new VertexBufferBinding(triangleVertices,
    Utilities.SizeOf<Vector4>() * 2, 0);
```
- And finally, render our triangle with the following code placed in DoRender(). This time, we use a different topology and only need to draw three vertices:

```csharp
// Get the context reference
var context = this.DeviceManager.Direct3DContext;

// Render the triangle
// Tell the IA we are now using triangles
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleList;

// Pass in the triangle vertices
context.InputAssembler.SetVertexBuffers(0, triangleBinding);

// Draw the 3 vertices of our triangle
context.Draw(3, 0);
```
- Lastly, we will implement our quad renderer. Open the QuadRenderer class, and add the following private member fields:

```csharp
// The quad vertex buffer
Buffer quadVertices;

// The quad index buffer
Buffer quadIndices;

// The vertex buffer binding for the quad
VertexBufferBinding quadBinding;
```
- Initialize the device-dependent resources with the following code. We are initializing our vertex buffer and binding in the same way as the axis lines and triangle renderers. In addition, we are creating an index buffer to reuse the existing vertices:

```csharp
RemoveAndDispose(ref quadVertices);
RemoveAndDispose(ref quadIndices);

// Retrieve our SharpDX.Direct3D11.Device1 instance
var device = this.DeviceManager.Direct3DDevice;

// Create a quad (two triangles)
quadVertices = ToDispose(Buffer.Create(device, BindFlags.VertexBuffer, new[]
{
    /* Vertex Position                     Vertex Color */
    new Vector4(0.25f, 0.5f, -0.5f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f), // Top-left
    new Vector4(0.75f, 0.5f, -0.5f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f), // Top-right
    new Vector4(0.75f, 0.0f, -0.5f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f), // Base-right
    new Vector4(0.25f, 0.0f, -0.5f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f), // Base-left
}));
quadBinding = new VertexBufferBinding(quadVertices,
    Utilities.SizeOf<Vector4>() * 2, 0);

// v0    v1
// |-----|
// | \ A |
// | B \ |
// |-----|
// v3    v2
quadIndices = ToDispose(Buffer.Create(device, BindFlags.IndexBuffer, new ushort[]
{
    0, 1, 2, // A
    2, 3, 0  // B
}));
```
- We will now render the quad using the following code placed in the DoRender override. We will use the same topology as the triangle renderer; however, this time we also need to set the index buffer and use it when drawing the vertices:

```csharp
var context = this.DeviceManager.Direct3DContext;

// Tell the IA we are using a triangle list
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleList;

// Set the index buffer
context.InputAssembler.SetIndexBuffer(quadIndices, Format.R16_UInt, 0);

// Pass in the quad vertices (note: only 4 vertices)
context.InputAssembler.SetVertexBuffers(0, quadBinding);

// Draw the 6 vertices that make up the two triangles in the quad
// using the vertex indices
context.DrawIndexed(6, 0, 0);
// Note: we have called DrawIndexed so that the index buffer is used
```
- Compile and run the project and you will now see a result similar to the figure shown at the beginning of this recipe.
Tip
To add the key down and mouse wheel handlers for rotating the objects, copy the relevant code from the sample code that can be downloaded for this book from Packt's website. With this code in place, the scene can be rotated around the x, y, and z axes using the arrow keys and the mouse wheel. The camera is moved with the W, A, S, and D keys and Shift + mouse wheel. Pressing X will reinitialize the device; this is useful for testing the initialization code.
How it works…
We have started by creating our HLSL shader code. This consists of one constant buffer that stores the WVP matrix, two structures that store the input/output of the vertex shader and also the input for the pixel shader, and our two shader methods for the vertex shader and pixel shader.
```hlsl
cbuffer PerObject : register(b0)
{
    // WorldViewProjection matrix
    float4x4 WorldViewProj;
};
```
The preceding constant buffer declaration is named PerObject and will be loaded using the first constant buffer register, that is, b0 (also known as slot 0). The buffer consists of a single 4 x 4 transform matrix, which is our precalculated WVP matrix. The buffer's name itself is of no consequence; it is the register/slot number and the names of the properties within that are important.
The vertex shader structures hold the position and color components. When initializing the IA stage, we will see how the VertexShaderInput structure and the input layout match up. Ultimately, these structures define our vertex layout.
There are two shader entry points: VSMain represents the vertex shader and PSMain the pixel shader. The vertex shader transforms vertices from local object space into homogeneous projection space based on the world, view, and projection matrices (by applying the WVP matrix). Its return value is the result of this transformation along with the color, and it is the same structure that is passed into the pixel shader. This shader runs once for each vertex.
The pixel shader is provided with an interpolated VertexShaderOutput structure by the rasterizer stage; this pixel shader does nothing but return the unchanged color component.
Next, we implemented our Direct3D application class. This houses the rendering loop, and it initializes the Direct3D pipeline and our individual renderers. We descend from D3DApplicationDesktop, which simply creates the SwapChain1 instance based on a System.Windows.Forms.Form, as demonstrated in Chapter 1, Getting Started with Direct3D. We provided a compatible constructor that passes the form through to the base class.
The CreateDeviceDependentResources implementation that is provided creates our device-dependent resources and initializes the IA and OM stages of the rendering pipeline.
First, we created our shader programs by compiling them. For example, we compile our vertex shader by using ShaderBytecode.CompileFromFile("Simple.hlsl", "VSMain", "vs_5_0", shaderFlags). This compiles Simple.hlsl using the vs_5_0 shader profile and VSMain as the entry point. If we have compiled for Debug, we are telling the shader compiler to include the debug information via the shaderFlags enumeration value. This allows us to step through the shaders when using the graphics debugging tools in Visual Studio.
After the shaders are compiled, we prepare the input layout for the IA. This tells the IA the memory layout in which the vertices can be expected when mapping them to the VertexShaderInput structure in our shader file.
```csharp
new InputLayout(..., new[]
{
    new InputElement("SV_Position", 0, Format.R32G32B32A32_Float, 0, 0),
    new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)
});
```
The previous layout tells the input assembler that the COLOR component will be located after the 16 bytes of the SV_Position component. In the preceding code, we can see that the name and format of the input layout match the type and input semantics (the name) used in the Simple.hlsl shader.

```hlsl
struct VertexShaderInput
{
    float4 Position : SV_Position;
    float4 Color : COLOR;
};
```
Next, we will create our constant buffer to store the WVP matrix. A second buffer for updating the per-frame information, such as light position, direction, and color, is another common resource.
```csharp
new SharpDX.Direct3D11.Buffer(device,
    Utilities.SizeOf<Matrix>(),
    ResourceUsage.Default,
    BindFlags.ConstantBuffer,
    CpuAccessFlags.None,
    ResourceOptionFlags.None,
    0)
```
Here we created a buffer that is the size of a single Matrix structure. The buffer is available for read/write on the GPU (ResourceUsage.Default), is bound as a constant buffer, and will not be accessible directly from the CPU. There are no additional options set, and as it only represents a single object, there is no structure byte stride.
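As a rough sketch of what the second, per-frame buffer mentioned earlier might look like (the perFrameBuffer field and its contents are hypothetical, not part of this recipe's code):

```csharp
// Hypothetical: a second constant buffer holding per-frame lighting data,
// for example a light direction and color packed as two float4 values.
// The matching HLSL would be declared against the next register:
//   cbuffer PerFrame : register(b1) { float4 LightDir; float4 LightColor; };
perFrameBuffer = ToDispose(new SharpDX.Direct3D11.Buffer(device,
    Utilities.SizeOf<Vector4>() * 2, // must be a multiple of 16 bytes
    ResourceUsage.Default,
    BindFlags.ConstantBuffer,
    CpuAccessFlags.None,
    ResourceOptionFlags.None,
    0));

// Bind it to slot 1 so it sits alongside the PerObject buffer in slot 0
context.VertexShader.SetConstantBuffer(1, perFrameBuffer);
```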
Next, we created our DepthStencilState instance, which is used to control how the OM stage will behave when determining whether to keep or discard a fragment based on depth (recall that we created our depth buffer in D3DApplicationBase). The state object created here enables depth testing, disables the stencil, and will choose pixels that are closer to the camera over pixels that are further away. There is little need to change this state, other than to enable the stencil.
```csharp
context.InputAssembler.InputLayout = vertexLayout;
context.VertexShader.SetConstantBuffer(0, worldViewProjectionBuffer);
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);
context.OutputMerger.DepthStencilState = depthStencilState;
```
Finally, we assign the input layout to the IA, add the constant buffer to the vertex shader stage, set the vertex shader and pixel shader programs, and set the OM depth stencil state.
The code within our D3DApp.Run() method first initializes our renderers, then sets up our initial camera position, sets up our view and projection matrices, and finally starts our rendering loop.
We have initialized a world matrix using an identity matrix, which effectively means that we are not translating, rotating, or scaling any of the objects in this scene.
In Direct3D, the traditional coordinate system is left-handed, with the camera view looking down the Z+ axis, X+ to the right, and Y+ as up. The view matrix is created from the camera's position, the point it is looking at, and the direction that is up. Here, we are placing the camera slightly up, to the right, and behind the origin to give us a better view of the scene.
```csharp
var cameraPosition = new Vector3(1, 1, -2);
var cameraTarget = Vector3.Zero; // Looking at the origin 0,0,0
var cameraUp = Vector3.UnitY;    // Y+ is Up
var viewMatrix = Matrix.LookAtLH(cameraPosition, cameraTarget, cameraUp);
```
The projection matrix is created using the desired field-of-view angle, aspect ratio, and the near and far Z-planes. This matrix gives us our perspective.
```csharp
// FoV 60 degrees = Pi/3 radians
var projectionMatrix = Matrix.PerspectiveFovLH((float)Math.PI / 3f,
    Width / (float)Height, 0.5f, 100f);
```
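For reference, assuming the left-handed helpers follow the documented D3DXMatrixPerspectiveFovLH convention (which SharpDX is modeled on), the matrix produced has the following layout, where zn and zf are the near and far clip distances:

```
yScale = cot(fovY / 2)        xScale = yScale / aspectRatio

| xScale    0           0               0 |
|   0     yScale        0               0 |
|   0       0      zf / (zf - zn)       1 |
|   0       0     -zn * zf / (zf - zn)  0 |
```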
Combining the view and projection matrices, we get the view frustum: a region of space that defines what is visible through the camera. The process of excluding objects that do not lie within this space is called frustum culling. This region is roughly the shape of a pyramid on its side, with its top cut off, as shown in the following figure. In this figure, everything between the Z-planes 1 (our 0.5f near plane) and 2 (our 100f far plane), and within the bounds of the pyramid, will appear on the screen.
Other than our rendering commands, the render loop performs two operations in addition to those of the render loops of Chapter 1, Getting Started with Direct3D. First, we cleared the depth/stencil view, which resets our depth buffer. This is important to do, or we will have depth bleeding between frames.
Then, after creating the WVP matrix, we updated the WVP matrix constant buffer with a call to DeviceContext.UpdateSubresource. HLSL, by default, expects the matrix to be in column-major order. Therefore, we must first transpose our row-major SharpDX WVP matrix (writing the rows of the matrix as the columns).
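If you prefer to avoid the CPU-side transpose, one alternative (a sketch, assuming the same shaderFlags variable used when compiling the shaders earlier) is to ask the HLSL compiler to pack matrices in row-major order; the row_major keyword on an individual cbuffer field achieves the same thing per variable:

```csharp
// Ask the HLSL compiler to treat matrices as row-major so the
// worldViewProjection.Transpose() call is no longer needed.
shaderFlags |= ShaderFlags.PackMatrixRowMajor;
```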
The IA input layout that we defined requires that the vertices are two 4-component floats making up 32 bytes. The first 16 bytes represent the object-space position, and the second 16 bytes represent the vertex color.
When creating the vertex buffer, any structure can be used to represent the vertex as long as its memory layout matches the input layout. We have used an array of Vector4 instances in this example; however, this could just as easily have been a new structure with two Vector4 members or an array of floats. In this code, every second Vector4 represents the color of the vertex (RGBA).
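As a minimal sketch of that idea (VertexPositionColor is a hypothetical name, not a type from the sample framework), such a structure only has to reproduce the same 32-byte layout:

```csharp
using System.Runtime.InteropServices;
using SharpDX;

// Hypothetical vertex structure matching our input layout:
// 16 bytes of position followed by 16 bytes of color.
[StructLayout(LayoutKind.Sequential)]
public struct VertexPositionColor
{
    public Vector4 Position; // maps to the SV_Position input semantic
    public Vector4 Color;    // maps to the COLOR input semantic

    public VertexPositionColor(Vector4 position, Vector4 color)
    {
        Position = position;
        Color = color;
    }
}
```

The vertex buffer would then be created from an array of VertexPositionColor elements, with Utilities.SizeOf<VertexPositionColor>() as the binding stride instead of Utilities.SizeOf<Vector4>() * 2.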
The axis lines in the AxisLinesRenderer class are made up of 18 vertices to draw the lines and arrow shapes for the axes. We create the buffer like any other Buffer object but with a binding of BindFlags.VertexBuffer. Then, we create a VertexBufferBinding, passing the size of a single element as the stride and 0 as the offset.

```csharp
buffer = Buffer.Create(device, BindFlags.VertexBuffer, new[] { ... });
binding = new VertexBufferBinding(buffer, Utilities.SizeOf<Vector4>() * 2, 0);
```
Before issuing the draw command for the axis lines, we must first tell the IA that the vertex buffer represents a list of lines by setting context.InputAssembler.PrimitiveTopology to PrimitiveTopology.LineList. Then, we set the vertex buffers of the IA and issue the DeviceContext draw command for 18 vertices, starting at the first vertex, with a call to context.Draw(18, 0).
The TriangleRenderer class is made in exactly the same way, except that we need three vertices to render the object, and when drawing, we must set PrimitiveTopology to PrimitiveTopology.TriangleList. The context.Draw command passes three vertices.
The Input Assembler will automatically ignore malformed primitives; for example, a line requires both a start and an end point, so if an odd number of vertices is provided in the vertex buffer, the last vertex will be discarded. This applies to all vertex buffers.
The QuadRenderer class does not represent a quad primitive type (there is no such primitive). Instead, we create two triangles for the two halves of the quad. This is done exactly the same as the triangle example, but with two triangles. All complex shapes are made up of multiple triangles in a similar fashion.
Rather than creating duplicate vertices for the two points where the triangles align along their hypotenuse, we will use an index buffer. This allows us to reduce the number of vertices sent to the IA by reusing the same vertices by index; if we used an index buffer for the axis lines, we would have used 12 vertices instead of 18. Although this isn't entirely necessary for our examples, larger meshes will quickly see a reduction in the memory bandwidth used.
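To put rough numbers on that, each of our vertices occupies 32 bytes (two 4-component floats), so the 18 axis-line vertices take 18 x 32 = 576 bytes; with indexing, 12 vertices x 32 bytes plus 18 two-byte indices comes to 384 + 36 = 420 bytes. The saving grows with mesh size, because shared vertices are stored once but referenced many times.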
```csharp
// v0    v1
// |-----|
// | \ A |
// | B \ |
// |-----|
// v3    v2
quadIndices = ToDispose(Buffer.Create(device, BindFlags.IndexBuffer, new ushort[]
{
    0, 1, 2, // A
    2, 3, 0  // B
}));
```
Here, we have four vertices in the vertex buffer. To build triangle A, we use the indexes 0, 1, and 2; and to build triangle B, we use the indexes 2, 3, and 0.
By default, a clockwise vertex direction defines the front face of a primitive; for example, from our camera position, drawing a triangle from top-left, to bottom-right, and then back to top-left means that the back face is away from the camera. Rotating the scene by 180 degrees using the arrow keys will show that the shape no longer renders because of back-face culling. Try reversing the vertex direction to see what happens. We can override the default direction by assigning a RasterizerState object to the context.Rasterizer.State property, where the RasterizerStateDescription.IsFrontCounterClockwise property can be set to true or false, as appropriate.
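A minimal sketch of that override, assuming the same device and context variables used in CreateDeviceDependentResources:

```csharp
// Treat counter-clockwise winding as the front face instead of the default.
var rasterizerState = ToDispose(new RasterizerState(device,
    new RasterizerStateDescription
    {
        FillMode = FillMode.Solid,
        CullMode = CullMode.Back,       // still cull whatever is the back face
        IsFrontCounterClockwise = true, // CCW is now front-facing
        IsDepthClipEnabled = true,
    }));
context.Rasterizer.State = rasterizerState;
```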
When we draw the triangle list for the quad, we must set an index buffer and a vertex buffer. Indices can be defined using 16 or 32 bits. In this example, we have used an unsigned short (16-bit), so we must use a format of Format.R16_UInt when setting the index buffer.
To let the pipeline know that we are using the index buffer, we must call the DrawIndexed method rather than Draw. Notice that although we only defined four vertices, we have asked it to draw six (the number of indices).

```csharp
context.DrawIndexed(6, 0, 0);
```
There's more…
You may have noticed that the actual pixel colors have been interpolated between the vertices; that is, if vertex (A) of a line is red (1, 0, 0, 1) and vertex (B) is green (0, 1, 0, 1), all the pixels in between have been linearly interpolated between those two values by the rasterizer stage (remember that the rasterizer stage is immediately before the pixel shader). Half-way along the line, the value of the pixel will be (0.5, 0.5, 0, 1). This applies to all the per-vertex attributes except the vertex position.
It is possible to control the interpolation method for individual vertex struct fields within HLSL by prefixing them with one of the following interpolation modifiers:

- linear: This is the default mode of interpolation if no modifier is provided.
- centroid: This method may improve antialiasing if a pixel is partially covered, and it must be combined with linear or noperspective.
- nointerpolation: This method tells the rasterizer to perform no interpolation, instead using the value of the closest vertex. This is the only valid option for the int/uint types in a vertex.
- noperspective: This method causes the rasterizer to not perform perspective correction during interpolation.
- sample: This method interpolates per sample location rather than per pixel center (which is generally used in conjunction with antialiasing to change the behavior). It may be useful to combine this with the SV_SampleIndex pixel shader input semantic.
An example of this in HLSL is as follows:

```hlsl
struct VertexShaderOutput
{
    ...
    nointerpolation float4 Color : COLOR;
};
```
There are a number of additional primitive types that we didn't use. These include PointList, LineListWithAdjacency, LineStrip, TriangleListWithAdjacency, TriangleStrip, and PatchListWith1ControlPoints through to PatchListWith32ControlPoints. The patch list topologies are used with the tessellation stages.
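As a quick illustration of one of these, a triangle strip could draw our quad from only four vertices without an index buffer. This sketch assumes a hypothetical stripBinding whose vertices are reordered into a zig-zag (v0, v1, v3, v2), since each triangle in a strip shares its last two vertices with the next:

```csharp
// Draw the quad as a 4-vertex triangle strip (no index buffer needed)
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleStrip;
context.InputAssembler.SetVertexBuffers(0, stripBinding); // hypothetical binding
context.Draw(4, 0);
```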
See also
- We will be rendering more complex objects in the recipes Rendering a cube and sphere and Loading a static mesh from a file in Chapter 3, Rendering Meshes
- We will cover how to use the patch topologies in the recipe Tessellation of primitives in Chapter 5, Applying Hardware Tessellation
- We use a triangle-strip in Chapter 10, Implementing Deferred Rendering, for implementing a screen-aligned quad renderer