Adding a Frames Per Second counter

This is a very short and quick post. In your “Game1” class, locate the “Update” method. Before the base.Update(gameTime); call, add the following:

// Output Frames Per Second
double fps = (1000 / gameTime.ElapsedGameTime.TotalMilliseconds);
fps = Math.Round(fps, 0);
Window.Title = "Project Vanquish " + fps.ToString() + " FPS";

This will update the Game Window’s title with the current Frames Per Second. It should sit at around 60, depending on the power of your PC. If you want to see the value change in real-time, then in your “Game1” constructor, add the following:

this.IsFixedTimeStep = false;

This stops the Update method running at a fixed time step, so the frame rate is no longer capped at 60. (Depending on your setup, you may also need to disable V-Sync with graphics.SynchronizeWithVerticalRetrace = false to see rates above your monitor’s refresh rate.)
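If you’d rather have a steadier read-out than the raw per-frame value, a common alternative is to count frames over a one-second window. Here’s a minimal sketch — the FpsCounter class is my own illustration, not part of Project Vanquish:

```csharp
// Counts frames and reports an averaged FPS once per second.
public class FpsCounter
{
    int frameCount;
    double elapsedMs;

    public int Fps { get; private set; }

    // Call once per frame with the milliseconds elapsed since the last frame.
    // Returns true when the Fps value has just been refreshed.
    public bool Update(double elapsedMilliseconds)
    {
        frameCount++;
        elapsedMs += elapsedMilliseconds;
        if (elapsedMs >= 1000.0)
        {
            Fps = frameCount;
            frameCount = 0;
            elapsedMs -= 1000.0;
            return true;
        }
        return false;
    }
}
```

You would call fpsCounter.Update(gameTime.ElapsedGameTime.TotalMilliseconds) once per frame and refresh Window.Title only when it returns true.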

In the next post we’ll be implementing shadows. It will be a multi-part post, as there is a lot to do in order to get them working.

Adding a basic Light Manager – Part 2

We’ll start by creating a new folder in our “ProjectVanquish” project. Locate the “Core” folder and create a new folder called “Lights”. Create a class called “PointLight” in this folder and once created, add the following namespace:

using Microsoft.Xna.Framework;

Now, alter the class declaration so that it is public.

public class PointLight

And declare the following variables:

Vector3 position;
Color color;
float lightRadius, lightIntensity;

Pretty simple in what we are trying to achieve here. Just a simple class to store the position, color, radius and intensity of the light. Let’s add a constructor:

public PointLight(Vector3 position, Color color, float radius, float intensity)
{
    this.position = position;
    this.color = color;
    lightRadius = radius;
    lightIntensity = intensity;
}

And to finish this class off, we just need to declare some properties that the “LightManager” class can use:

public Color Color { get { return color; } }

public float Radius { get { return lightRadius; } }

public float Intensity { get { return lightIntensity; } }

public Vector3 Position { get { return position; } }

We need to make some big modifications to our “LightManager” class now. In our “DrawLights” method, where we have our “DrawPointLight” call, we need to change this to the following:

foreach (PointLight light in pointLights)
    DrawPointLight(light, colorRT, normalRT, depthRT, camera);

There is a new variable there, called “pointLights”. This is a list of all of the instantiated Point lights. Let’s add this variable:

IList<PointLight> pointLights = new List<PointLight>();

We will also need to include the new namespace, else we can’t use the “PointLight” class (along with System.Collections.Generic for the IList and List types, if it isn’t already included):

using System.Collections.Generic;
using ProjectVanquish.Core.Lights;

Going back to our “DrawPointLight” method, its declaration has changed somewhat: we are now passing in a “PointLight” object. Amend the “DrawPointLight” method to:

void DrawPointLight(PointLight light, RenderTarget2D colorRT, RenderTarget2D normalRT, RenderTarget2D depthRT, FreeCamera camera)

The last thing to change in this code is all of the places where the individual parameters used to be passed in; these are now read from the “PointLight” object. I’ve included the final version of “DrawPointLight” to save time:

void DrawPointLight(PointLight light, RenderTarget2D colorRT, RenderTarget2D normalRT, RenderTarget2D depthRT, FreeCamera camera)
{
    // Set the G-Buffer parameters
    pointLightEffect.Parameters["colorMap"].SetValue(colorRT);
    pointLightEffect.Parameters["normalMap"].SetValue(normalRT);
    pointLightEffect.Parameters["depthMap"].SetValue(depthRT);

    // Compute the Light World matrix
    // Scale according to Light radius and translate it to Light position
    Matrix sphereWorldMatrix = Matrix.CreateScale(light.Radius) * Matrix.CreateTranslation(light.Position);
    pointLightEffect.Parameters["World"].SetValue(sphereWorldMatrix);
    pointLightEffect.Parameters["View"].SetValue(camera.View);
    pointLightEffect.Parameters["Projection"].SetValue(camera.Projection);
    // Light position
    pointLightEffect.Parameters["lightPosition"].SetValue(light.Position);

    // Set the color, radius and Intensity
    pointLightEffect.Parameters["Color"].SetValue(light.Color.ToVector3());
    pointLightEffect.Parameters["lightRadius"].SetValue(light.Radius);
    pointLightEffect.Parameters["lightIntensity"].SetValue(light.Intensity);

    // Parameters for specular computations
    pointLightEffect.Parameters["cameraPosition"].SetValue(camera.Position);
    pointLightEffect.Parameters["InvertViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
    // Size of a halfpixel, for texture coordinates alignment
    pointLightEffect.Parameters["halfPixel"].SetValue(halfPixel);
    // Calculate the distance between the camera and light center
    float cameraToCenter = Vector3.Distance(camera.Position, light.Position);
    // If we are inside the light volume, draw the sphere's inside face
    if (cameraToCenter < light.Radius)
        device.RasterizerState = RasterizerState.CullClockwise;
    else
        device.RasterizerState = RasterizerState.CullCounterClockwise;

    device.DepthStencilState = DepthStencilState.None;

    pointLightEffect.Techniques[0].Passes[0].Apply();
    foreach (ModelMesh mesh in sphere.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            device.Indices = meshPart.IndexBuffer;
            device.SetVertexBuffer(meshPart.VertexBuffer);

            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount);
        }
    }

    device.RasterizerState = RasterizerState.CullCounterClockwise;
    device.DepthStencilState = DepthStencilState.Default;
}

All being well, the project will compile without any issues. Let’s add the final method to the “LightManager” class:

public void AddLight(PointLight light)
{
    pointLights.Add(light);
}

This method allows new Point lights to be added to the manager, but the “LightManager” instance is private to the “DeferredRenderer” class, so we can’t call it from outside. To expose it, we add the following method to our “DeferredRenderer” class:

public void AddLight(PointLight light)
{
    lightManager.AddLight(light);
}

Build the solution and once it’s finished, we’ll be able to add Point lights from our “Game1” class. To do so, in the “Game1” class, we’ll need to add the new “Lights” namespace:

using ProjectVanquish.Core.Lights;

Locate the “LoadContent” method. Under the renderer.AddModel() line, add:

renderer.AddLight(new PointLight(new Vector3(-30, 1, -70), Color.Red, 30, 5));
renderer.AddLight(new PointLight(new Vector3(0, 1, -70), Color.Green, 30, 5));
renderer.AddLight(new PointLight(new Vector3(30, 1, -70), Color.Blue, 30, 5));

Here we are creating 3 Point lights (Red, Green and Blue), positioned in a line, which should produce the following output:
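Because lights are now plain objects, nothing stops you creating them in a loop. As a quick experiment (the count and spacing here are arbitrary, not from the original scene), you could line up a row of alternating colours:

```csharp
// Add a row of alternating coloured Point lights along the X axis
Color[] lightColors = { Color.Red, Color.Green, Color.Blue };
for (int i = 0; i < 6; i++)
{
    Vector3 position = new Vector3(-75 + (i * 30), 1, -70);
    renderer.AddLight(new PointLight(position, lightColors[i % lightColors.Length], 30, 5));
}
```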

In the next post, we’ll add a simple Frames Per Second counter so we can see how well the engine is performing.

Adding a basic Light Manager – Part 1

In the last post we implemented Point lights, but it was hard coded into the “DeferredRenderer” class. In this post, we’ll implement a very basic Light Manager that will store the lights and render them. This is a very simple implementation. I’d really like you, the community, to help implement a decent Light Manager, but for the time being we’ll use this.

In the “ProjectVanquish” project, add a new Class under the “Core” folder called “LightManager”. Add the following namespaces:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using ProjectVanquish.Cameras;
using ProjectVanquish.Renderers;

Add the following variables:

ContentManager content;
GraphicsDevice device;
Effect directionalLightEffect, pointLightEffect;
QuadRenderer fullscreenQuad;
Vector2 halfPixel;
Model sphere;

Now we can create our constructor:

public LightManager(GraphicsDevice device, ContentManager content)
{
    this.content = content;
    this.device = device;
    directionalLightEffect = content.Load<Effect>("Shaders/Lights/DirectionalLight");
    pointLightEffect = content.Load<Effect>("Shaders/Lights/PointLight");
    fullscreenQuad = new QuadRenderer(device);
    halfPixel = new Vector2()
    {
        X = 0.5f / (float)device.PresentationParameters.BackBufferWidth,
        Y = 0.5f / (float)device.PresentationParameters.BackBufferHeight
    };
    sphere = content.Load<Model>("Models/sphere");
}

As you can see, we are now loading the “Sphere” Model in our “LightManager” constructor, so we should remove this from the “DeferredRenderer” class as well.
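Concretely, the line to delete from the “DeferredRenderer” constructor is the one we added back in the “Implementing Point lights” post:

```csharp
// Now loaded by the LightManager - remove this from the DeferredRenderer constructor
sphere = content.Load<Model>("Models/sphere");
```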

We now have duplicated code as we are loading the light Effects in the “DeferredRenderer” class and the new “LightManager” class. Remove the instantiation code from the “DeferredRenderer” class, but not the rendering methods yet. We need to move these methods into the “LightManager” class, so we’ll start with the “DrawLights” method. Cut the method and paste it into the “LightManager” class. We’ll get some errors now, because the RenderTargets don’t exist in this class. Alter the “DrawLights” method declaration to the following:

public void DrawLights(RenderTarget2D colorRT, RenderTarget2D normalRT, RenderTarget2D depthRT, RenderTarget2D lightRT, FreeCamera camera)

Find the “DrawDirectionalLight” and “DrawPointLight” methods in the “DeferredRenderer” class and cut and paste into the “LightManager” class. Both method declarations will now need to change in order to work. We’ll start with the new declaration of the “DrawDirectionalLight” method:

void DrawDirectionalLight(RenderTarget2D colorRT, RenderTarget2D normalRT, RenderTarget2D depthRT, FreeCamera camera, Vector3 lightDirection, Color color)

And the new “DrawPointLight” method:

void DrawPointLight(RenderTarget2D colorRT, RenderTarget2D normalRT, RenderTarget2D depthRT, FreeCamera camera, Vector3 lightPosition, Color color, float lightRadius, float lightIntensity)

The last thing left to do is to find sceneManager.Camera instances and change them to camera.
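For example, the lines that set the View and Projection parameters change from reading the Scene Manager’s camera to using the camera argument:

```csharp
// Before (in DeferredRenderer):
// pointLightEffect.Parameters["View"].SetValue(sceneManager.Camera.View);
// After (in LightManager, using the camera parameter):
pointLightEffect.Parameters["View"].SetValue(camera.View);
pointLightEffect.Parameters["Projection"].SetValue(camera.Projection);
```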

Back in our “DeferredRenderer” class, let’s instantiate this new “LightManager” class. Add a variable:

private LightManager lightManager;

In the constructor, instantiate it:

lightManager = new LightManager(device, content);

The last thing left to do is to alter the “DrawLights” method in the “Draw” method. This now becomes:

lightManager.DrawLights(colorRT, normalRT, depthRT, lightRT, sceneManager.Camera);

Build the solution and you should get no errors. If you run the code, you should still see our scene from the last post. In the next part, we’ll extend this “LightManager” class by creating a “PointLight” class which we can instantiate and control its position, colour etc. from the “DeferredRenderer” class.

Implementing Point lights

So, I thought it would be a good point to release the source code that we have been building throughout the project. Here is the Source Code.

If you haven’t been typing the code yourself (or copying/pasting), download the code and then let’s move on with implementing Point lights 🙂

Roy’s source code contains a “Sphere” model. Extract this model to our “Models” directory in the “ProjectVanquishTestContent” project. We don’t need to change the “Content Processor” for this Model, so we’ll just keep the default values. Add a new “Effect” file to the “Lights” folder under the “Shaders” directory and call it “PointLight”. Clear out the default content and add in the following parameter declarations:

float4x4 World;
float4x4 View;
float4x4 Projection;
// Color of the light 
float3 Color; 
// Position of the camera, for specular light
float3 cameraPosition; 
// This is used to compute the world-position
float4x4 InvertViewProjection; 
// This is the position of the light
float3 lightPosition;
// How far does this light reach
float lightRadius;
// Control the brightness of the light
float lightIntensity = 1.0f;
float2 halfPixel;

Next we’ll add our Sampler States:

// Diffuse color, and SpecularIntensity in the alpha channel
texture colorMap; 
sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

// Depth
texture depthMap;
sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

// Normals, and SpecularPower in the alpha channel
texture normalMap;
sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

Create our Vertex Shader Input and Output structs, plus the Vertex Shader function:

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 ScreenPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    // Processing geometry coordinates
    float4 worldPosition = mul(float4(input.Position,1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}

We’ll add our Pixel Shader and Technique:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Obtain screen position
    input.ScreenPosition.xy /= input.ScreenPosition.w;

    // Obtain textureCoordinates corresponding to the current pixel
    // The screen coordinates are in [-1,1]*[1,-1]
    // The texture coordinates need to be in [0,1]*[0,1]
    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
    // Align texels to pixels
    texCoord -=halfPixel;

    // Get normal data from the normalMap
    float4 normalData = tex2D(normalSampler,texCoord);
    // Transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    // Get specular power
    float specularPower = normalData.a * 255;
    // Get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, texCoord).a;

    // Read depth
    float depthVal = tex2D(depthSampler,texCoord).r;

    // Compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    // Transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;

    // Surface-to-light vector
    float3 lightVector = lightPosition - position.xyz;

    // Compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - length(lightVector)/lightRadius); 

    // Normalize light vector
    lightVector = normalize(lightVector); 

    // Compute diffuse light
    float NdL = max(0,dot(normal,lightVector));
    float3 diffuseLight = NdL * Color.rgb;

    // Reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    // Camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    // Compute specular light
    float specularLight = specularIntensity * pow( saturate(dot(reflectionVector, directionToCamera)), specularPower);

    // Take into account attenuation and lightIntensity.
    return attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight);
}

technique PointLight
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Build the project and we should receive no errors. Now we need to implement this new Shader into our “DeferredRenderer” class. Open the “DeferredRenderer” class and add a new variable:

private Effect pointLightEffect;

In the constructor, we’ll instantiate it:

pointLightEffect = content.Load<Effect>("Shaders/Lights/PointLight");

Excellent. We now have our Effect initialized; next we need to add the “Sphere” Model. Create a new variable:

private Model sphere;

And back in the constructor, we’ll instantiate it:

sphere = content.Load<Model>("Models/sphere");

We now have the Effect and Model ready to go. The last step is to create a method that will render them. In our “DrawLights” method, after the “DrawDirectionalLight” method call, add:

DrawPointLight(new Vector3(0, 1, -70), Color.Red, 30, 5);

The above creates a Point light at position (0, 1, -70) and sets its colour to Red, with a radius of 30 and an intensity of 5. Comment out the “DrawDirectionalLight” method call.
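After those two changes, the light-drawing section of the “DrawLights” method should look like this:

```csharp
// Draw lights
//DrawDirectionalLight(new Vector3(0, -1, 0), Color.Blue);
DrawPointLight(new Vector3(0, 1, -70), Color.Red, 30, 5);
```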

Let’s create the method and populate our PointLight Effect parameters:

void DrawPointLight(Vector3 lightPosition, Color color, float lightRadius, float lightIntensity)
{
    // Set the G-Buffer parameters
    pointLightEffect.Parameters["colorMap"].SetValue(colorRT);
    pointLightEffect.Parameters["normalMap"].SetValue(normalRT);
    pointLightEffect.Parameters["depthMap"].SetValue(depthRT);

    // Compute the Light World matrix
    // Scale according to Light radius and translate it to Light position
    Matrix sphereWorldMatrix = Matrix.CreateScale(lightRadius) * Matrix.CreateTranslation(lightPosition);
    pointLightEffect.Parameters["World"].SetValue(sphereWorldMatrix);
    pointLightEffect.Parameters["View"].SetValue(sceneManager.Camera.View);
    pointLightEffect.Parameters["Projection"].SetValue(sceneManager.Camera.Projection);
    // Light position
    pointLightEffect.Parameters["lightPosition"].SetValue(lightPosition);

    // Set the color, radius and Intensity
    pointLightEffect.Parameters["Color"].SetValue(color.ToVector3());
    pointLightEffect.Parameters["lightRadius"].SetValue(lightRadius);
    pointLightEffect.Parameters["lightIntensity"].SetValue(lightIntensity);

    // Parameters for specular computations
    pointLightEffect.Parameters["cameraPosition"].SetValue(sceneManager.Camera.Position);
    pointLightEffect.Parameters["InvertViewProjection"].SetValue(
                                                        Matrix.Invert(
                                                           sceneManager.Camera.View * 
                                                           sceneManager.Camera.Projection));
    // Size of a halfpixel, for texture coordinates alignment
    pointLightEffect.Parameters["halfPixel"].SetValue(halfPixel);
    // Calculate the distance between the camera and light center
    float cameraToCenter = Vector3.Distance(sceneManager.Camera.Position, lightPosition);
    // If we are inside the light volume, draw the sphere's inside face
    if (cameraToCenter < lightRadius)
        device.RasterizerState = RasterizerState.CullClockwise;                
    else
        device.RasterizerState = RasterizerState.CullCounterClockwise;

    device.DepthStencilState = DepthStencilState.None;

    pointLightEffect.Techniques[0].Passes[0].Apply();
    foreach (ModelMesh mesh in sphere.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            device.Indices = meshPart.IndexBuffer;
            device.SetVertexBuffer(meshPart.VertexBuffer);
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount);
        }
    }            
            
    device.RasterizerState = RasterizerState.CullCounterClockwise;
    device.DepthStencilState = DepthStencilState.Default;
}

Build the solution and then run the project. Fingers crossed, you should be seeing:

In the next post we’ll look at creating a basic Light Manager so we can add new lights from our “ProjectVanquishTest” project, rather than hardcoding values in the “DeferredRenderer” class.

Directional lighting

In this post we’ll add a Directional light to our engine. We’ll start by creating a new folder in the “ProjectVanquishTestContent” project under the “Shaders” folder called “Lights”; we’ll add all of our lighting shaders in here. Create a new Effect file called “DirectionalLight” and remove all of the file’s default content. We’ll start the shader off with some parameter declarations:

// Direction of the light
float3 lightDirection;
// Color of the light 
float3 Color; 
// Position of the camera, for specular light
float3 cameraPosition; 
// This is used to compute the world-position
float4x4 InvertViewProjection; 
// Diffuse color, and SpecularIntensity in the alpha channel
texture colorMap; 
// Normals, and SpecularPower in the alpha channel
texture normalMap;
// Depth
texture depthMap;
float2 halfPixel;

Next we’ll define our Sampler States:

sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

We’ll create our Vertex Shader Input and Output structs, plus our VertexShaderFunction:

struct VertexShaderInput
{
    float3 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position,1);
    // Align texture coordinates
    output.TexCoord = input.TexCoord - halfPixel;
    return output;
}

The only things left to do are to create our Pixel Shader function and add our technique:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Get normal data from the normalMap
    float4 normalData = tex2D(normalSampler,input.TexCoord);
    // Transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    // Get specular power, and get it into [0,255] range
    float specularPower = normalData.a * 255;
    // Get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, input.TexCoord).a;
    
    // Read depth
    float depthVal = tex2D(depthSampler,input.TexCoord).r;

    // Compute screen-space position
    float4 position;
    position.x = input.TexCoord.x * 2.0f - 1.0f;
    position.y = -(input.TexCoord.y * 2.0f - 1.0f);
    position.z = depthVal;
    position.w = 1.0f;
    // Transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;
    
    // Surface-to-light vector
    float3 lightVector = -normalize(lightDirection);

    // Compute diffuse light
    float NdL = max(0,dot(normal,lightVector));
    float3 diffuseLight = NdL * Color.rgb;

    // Reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    // Camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    // Compute specular light
    float specularLight = specularIntensity * pow( saturate(dot(reflectionVector, directionToCamera)), specularPower);

    // Output the two lights
    return float4(diffuseLight.rgb, specularLight) ;
}

technique DirectionalLight
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Great! Build the solution to make sure that you don’t have any errors. Back in our “DeferredRenderer” class, let’s define a new variable:

private Effect directionalLightEffect;

In the constructor, let’s instantiate it:

directionalLightEffect = content.Load<Effect>("Shaders/Lights/DirectionalLight");

In the “Draw” method, we have one comment left, and that is to do with the lights. Change the comment to:

DrawLights();

Now, let us create this new method:

void DrawLights()
{
}

In this method, we’ll set the Light RenderTarget, the BlendState and the DepthStencilState, draw the lights and reset the BlendState, DepthStencilState and the Light RenderTarget.

void DrawLights()
{
    device.SetRenderTarget(lightRT);
    device.Clear(Color.Transparent);
    device.BlendState = BlendState.AlphaBlend;
    device.DepthStencilState = DepthStencilState.None;

    // Draw lights
    DrawDirectionalLight(new Vector3(0, -1, 0), Color.Blue);

    device.BlendState = BlendState.Opaque;
    device.DepthStencilState = DepthStencilState.None;
    device.RasterizerState = RasterizerState.CullCounterClockwise;
    device.SetRenderTarget(null);
}

There is a new method call in there which we’ll need to create:

void DrawDirectionalLight(Vector3 lightDirection, Color color)
{
}

This is really a nasty hack in order to create a Directional light for testing purposes: outside of the “DeferredRenderer” class, you cannot modify the Colour or Direction of this light. This is where a Light Manager will come in handy, and all of the Light rendering will move in there, much like the Scene Manager. Anyway, let’s continue with the code. In this method we’ll assign the parameter values of the “DirectionalLight” effect:

void DrawDirectionalLight(Vector3 lightDirection, Color color)
{
    directionalLightEffect.Parameters["colorMap"].SetValue(colorRT);
    directionalLightEffect.Parameters["normalMap"].SetValue(normalRT);
    directionalLightEffect.Parameters["depthMap"].SetValue(depthRT);
    directionalLightEffect.Parameters["lightDirection"].SetValue(lightDirection);
    directionalLightEffect.Parameters["Color"].SetValue(color.ToVector3());
    directionalLightEffect.Parameters["cameraPosition"].SetValue(sceneManager.Camera.Position);
    directionalLightEffect.Parameters["InvertViewProjection"].SetValue(
                                           Matrix.Invert(sceneManager.Camera.View * 
                                                         sceneManager.Camera.Projection));
    directionalLightEffect.Parameters["halfPixel"].SetValue(halfPixel);
    directionalLightEffect.Techniques[0].Passes[0].Apply();
    fullscreenQuad.Draw();
}

Now, if I haven’t missed anything out, you’ll be able to build this without any errors. If you run the application, you should see the following:

If you feel like changing the colour of the light, change the line in the “DrawLights” method:

DrawDirectionalLight(new Vector3(0, -1, 0), Color.Blue);

In the next part, we’ll look at implementing Point lights.

Rendering a model

We’ll start this post off by creating a very simple Scene manager. This is one of the aspects that I’d like the community to get involved in, as there are so many different ways of managing a Scene. We’ll just be using a simple list of models and rendering each one.

In the “Core” folder of the “ProjectVanquish” project, add a new class called “SceneManager”. Add the following namespaces:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using ProjectVanquish.Cameras;

Create a few variables:

ContentManager content;
GraphicsDevice device;
FreeCamera camera;
IList<Model> models;

The list will hold all of our models so we can iterate through them. If we were to create our own Model class, we could then add in things like visibility tests, so we only render those models that are visible.

Create a constructor:

public SceneManager(GraphicsDevice device, ContentManager content)
{
    this.content = content;
    this.device = device;
    camera = new FreeCamera(device, new Vector3(0, 10, 0), Vector3.Zero, device.Viewport.AspectRatio, 0.1f, 1000f);
    models = new List<Model>();
}

We now have the start of a simple Scene manager, but we need to be able to add models to it. Add a new method called “AddModel”:

public void AddModel(Model model)
{
    models.Add(model);
}

This method adds a Model to our list. We aren’t testing whether the list already contains the Model, as this is only a simple Scene manager.
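If you later wanted to guard against the same Model being added twice, a trivial variation would be:

```csharp
public void AddModel(Model model)
{
    // Only add the Model if it isn't already in the list
    if (!models.Contains(model))
        models.Add(model);
}
```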

Next, we’ll add the rendering methods:

public void Draw() 
{ 
    device.DepthStencilState = DepthStencilState.Default;
    device.RasterizerState = RasterizerState.CullCounterClockwise;
    device.BlendState = BlendState.Opaque;

    foreach (Model model in models)
        DrawModel(model, Matrix.Identity, camera);
}

void DrawModel(Model model, Matrix world, FreeCamera camera) 
{ 
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (Effect effect in mesh.Effects)
        {
            effect.Parameters["World"].SetValue(world);
            effect.Parameters["View"].SetValue(camera.View);
            effect.Parameters["Projection"].SetValue(camera.Projection);
        }

        mesh.Draw();
    }
}

We have sorted out our Model rendering, but we haven’t updated our Camera. Create a new method called “Update”:

public void Update(GameTime gameTime)
{
    camera.Update(gameTime);
}

In this method we call the Camera’s “Update” method, which in turn updates its view. That’s all for the Scene manager. As I said at the start, it’s a very simplistic approach, but I’ve included it so we can actually start rendering some models.

Go back to the “DeferredRenderer” class, add a new namespace:

using ProjectVanquish.Core;

Now we can declare our new “SceneManager” class:

private SceneManager sceneManager;

In the constructor, we’ll instantiate our new object:

sceneManager = new SceneManager(device, content);

Remember early on in the project when we were creating the skeleton structure of the project? We added a comment into the “Draw” method of the “DeferredRenderer” class. We can now replace this comment with the following:

sceneManager.Draw();

We also added a comment in the “Update” method, so we can also replace that with the following:

sceneManager.Update(gameTime);

Great! We’ve now included our Scene manager in the Deferred engine. But wait! We have an “AddModel” method in the “SceneManager” class, but we won’t be able to access it from our “ProjectVanquishTest” project due to the protection level of the “sceneManager” field. This is easy to fix. We’ll create an “AddModel” method in the “DeferredRenderer” class which calls the “SceneManager”’s method:

public void AddModel(Model model)
{
    sceneManager.AddModel(model);
}

We’ll also add a property for the Camera to the “SceneManager” class so that the “DeferredRenderer” class can access it:

public FreeCamera Camera 
{ 
    get 
    { 
        return camera; 
    }
}

Build the solution to make sure that there are no errors. Head over to the “ProjectVanquishTest” project and open the “Game1” class. Locate the “LoadContent” method and under the TODO comment, add the following:

renderer.AddModel(Content.Load<Model>("Models/Ground"));

Building the solution will show no errors, but we’ll need to add the “Ground” Model to our “ProjectVanquishTestContent” project. Roy’s source code contains this Model, in the “DeferredLightingContent\Models” folder. You will need 4 files: firstly, the “Ground.X” file; then 3 JPG files, all starting with “ground_”. Extract these 4 files to your “ProjectVanquishTestContent\Models” directory. In Visual Studio, right-click the “Models” folder in the “ProjectVanquishTestContent” project, click “Add”, then “Existing Item”. Open the “Models” folder, select “Ground.x” and click “Add”.

Once the Model is added to the project, we need to change its “Content Processor” to our “ProjectVanquishContentPipeline.ContentProcessor”. To do this, select the “Ground.x” item and view its “Properties”. The “Content Processor” is the fourth item in the list of properties. Click the drop-down arrow, locate “ProjectVanquishContentPipeline.ContentProcessor” and select it. Expand the “Content Processor” entry so you can see all of its settings. Locate “Normal Map Texture” and type in:

ground_normal.jpg

Then, locate “Specular Map Texture” and type in:

ground_specular.jpg

We’ve now configured our Model to use our “Content Processor”. If you build and run this code, you should see a large black screen, but the Debug textures are now appearing. This is because we have not implemented any lighting yet.

Well done! In the next part, we’ll look at implementing Directional lighting.

Rendering a model – Free Camera

Building on the last post, we’ll create a free moving camera. Add a new class to the “Cameras” folder in the “ProjectVanquish” project. Name the class “FreeCamera”. We need to change the class declaration to inherit our “BaseCamera” class.

public class FreeCamera : BaseCamera
{
}

Add the following to the namespaces:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

You’ll see that we are referencing the “Input” namespace. This is so we can use Input devices like the Keyboard, Mouse and even GamePads. We’ll need to declare a “MouseState” object for us to store the original position of the Mouse before it’s moved. Add in the following variable declaration:

MouseState originalMouse;

Let’s create the constructor, remembering that we’ll need to pass in variables to the base camera class:

public FreeCamera(GraphicsDevice device, Vector3 position, Vector3 target, float aspectRatio, float near, float far)
    : base(device, position, target, aspectRatio, near, far)
{
    originalMouse = Mouse.GetState();
}

We don’t need to do anything else with the constructor as all of the values are set within the “BaseCamera” class. Let’s look at overriding the “Move” and “Update” methods, starting with the “Move” method:

public override void Move(Vector3 vector)
{
    // Move the camera's position based on the way it's facing
    Vector3 rotatedVector = Vector3.Transform(vector, rotationMatrix);
    Position += speed * rotatedVector;
}

This code moves the camera’s position based on the direction that it’s facing. Let’s do the “Update” method:

public override void Update(GameTime gameTime)
{
    // Free movement
    float dt = (float)gameTime.ElapsedGameTime.Milliseconds / 1000f;
    // Rotation
    MouseState currentMouseState = Mouse.GetState();
    if (currentMouseState != originalMouse && Keyboard.GetState().IsKeyDown(Keys.Space))
    {
        Vector3 rot = Rotation;
        float xDifference = currentMouseState.X - originalMouse.X;
        float yDifference = currentMouseState.Y - originalMouse.Y;
        rot.Y -= 0.3f * xDifference * dt;
        rot.X += 0.3f * yDifference * dt;
        Mouse.SetPosition(device.Viewport.Width / 2, device.Viewport.Height / 2);

        Rotation = rot;
    }

    originalMouse = Mouse.GetState();

    // Key press movement
    KeyboardState keyboard = Keyboard.GetState();
    if (keyboard.IsKeyDown(Keys.W))
        Move(new Vector3(0, 0, -1) * dt);

    if (keyboard.IsKeyDown(Keys.S))
        Move(new Vector3(0, 0, 1) * dt);

    if (keyboard.IsKeyDown(Keys.A))
        Move(new Vector3(-1, 0, 0) * dt);

    if (keyboard.IsKeyDown(Keys.D))
        Move(new Vector3(1, 0, 0) * dt);
}
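To get a feel for the rotation maths without spinning up an XNA project, here is a small, self-contained sketch. It uses the standard library’s System.Numerics types as stand-ins for XNA’s Vector3 and Matrix (both share the same row-vector conventions), and the mouse delta and frame time are made-up values:

```csharp
using System;
using System.Numerics;

class MouseLookSketch
{
    static void Main()
    {
        // Hypothetical input: the mouse moved 100 pixels right during a 16ms frame.
        float dt = 16f / 1000f;
        float xDifference = 100f;

        // Same formula as in the Update method: rot.Y -= 0.3f * xDifference * dt
        float yaw = -0.3f * xDifference * dt;

        // Move() then rotates the requested direction by the rotation matrix,
        // so "forward" (0, 0, -1) follows wherever the camera is facing.
        Matrix4x4 rotation = Matrix4x4.CreateRotationY(yaw);
        Vector3 forward = Vector3.Transform(new Vector3(0, 0, -1), rotation);

        Console.WriteLine(forward);
    }
}
```

Note how the 0.3f sensitivity constant and the frame time dt are multiplied together, which keeps the rotation speed independent of frame rate.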

That’s all we need to do for this class. We have added input handling in the “Update” method for the Keyboard and Mouse devices. All we need to do now is to tie this in with our “ProjectVanquishTest” project.

The code below is for reference purposes only. It won’t actually be used in the next part, so if you add it in, be prepared to remove it for the next post.

Open up the “Game1” file within the “ProjectVanquishTest” project and add the following namespace:

using ProjectVanquish.Cameras;

Declare a new variable:

FreeCamera camera;

We’ll instantiate this variable in the “Initialize” method:

camera = new FreeCamera(GraphicsDevice, 
                        new Vector3(0, 10, 0), 
                        Vector3.Zero, 
                        GraphicsDevice.Viewport.AspectRatio, 
                        0.1f, 
                        1000f);

We are positioning the camera at X:0, Y:10, Z:0 looking at X:0, Y:0, Z:0. Find the “Update” method and then add the following:

camera.Update(gameTime);

So, we have now incorporated our new Camera into the test project.

However, we will still see nothing on the screen at the moment, and this camera isn’t being managed in the right place. I believe that camera management should be done in the Scene Manager, so in the next part, we’ll create a simple Scene Manager to handle this.

Rendering a model – Creating a Camera

The engine is finally beginning to take shape. Before we can render a model, we’ll need to create a camera. We’ll create two new classes, a base camera class and a free camera. We’ll inherit from the base camera class to create our free camera. Let’s start off with the base camera class. In the “ProjectVanquish” project, add a new Class file to the Cameras folder called “BaseCamera”. Declare the class as:

public abstract class BaseCamera
{
}

Add the following namespaces:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

Add in the following variables:

protected Matrix viewMatrix;
protected Matrix projectionMatrix;
protected Matrix rotationMatrix;
protected GraphicsDevice device;
protected Vector3 target;
protected Vector3 position;
protected Vector3 rotation;
protected Vector3 up;
protected float speed;
float near;
float far;

Add the following properties:

public Matrix View { get { return viewMatrix; } }
public Matrix Projection { get { return projectionMatrix; } }
public float NearClip { get { return near; } }
public float FarClip { get { return far; } }
public Vector3 Position
{
    get { return position; }
    set
    {
        position = value;
        UpdateView();
    }
}

public Vector3 Target
{
    get { return target; }
    set
    {
        target = value;
        viewMatrix = Matrix.CreateLookAt(Position, Target, new Vector3(0, 1, 0));
    }
}

public Vector3 Rotation
{
    get { return rotation; }
    set
    {
        rotation = value;
        rotationMatrix = Matrix.CreateRotationX(rotation.X) 
                       * Matrix.CreateRotationY(rotation.Y);
        UpdateView();
    }
}

public Matrix World
{
    get
    {
        return Matrix.CreateTranslation(Position.X, Position.Y, Position.Z)
             * Matrix.CreateRotationX(rotation.X)
             * Matrix.CreateRotationY(rotation.Y)
             * Matrix.CreateRotationZ(rotation.Z);
    }
}
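One thing worth being aware of with this “World” property: XNA uses row vectors, so in a product A * B, A is applied first. The composition above therefore translates first and rotates second, which swings the position around the world origin rather than rotating in place. The standalone sketch below (using System.Numerics, whose matrix conventions match XNA’s) shows the difference the order makes:

```csharp
using System;
using System.Numerics;

class MatrixOrderSketch
{
    static void Main()
    {
        Matrix4x4 translate = Matrix4x4.CreateTranslation(10, 0, 0);
        Matrix4x4 rotate = Matrix4x4.CreateRotationY((float)Math.PI / 2);

        Vector3 origin = Vector3.Zero;

        // Translation first, then rotation: the translated point is
        // swung around the world origin by the rotation.
        Vector3 a = Vector3.Transform(origin, translate * rotate);

        // Rotation first, then translation: the origin is unaffected
        // by the rotation and is then simply moved.
        Vector3 b = Vector3.Transform(origin, rotate * translate);

        Console.WriteLine(a); // roughly (0, 0, -10)
        Console.WriteLine(b); // (10, 0, 0)
    }
}
```

If you ever need rotate-then-translate semantics from this property, swapping the multiplication order is all it takes.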

Now we need a constructor for the class:

public BaseCamera(GraphicsDevice device, Vector3 position, Vector3 target, float aspectRatio, float near, float far)
{
    Position = position;
    Rotation = Vector3.Zero;
    Target = target;
    this.speed = 30;
    this.near = near;
    this.far = far;
    this.device = device;
    this.up = Vector3.Up;

    // Setup Field of View, Aspect Ratio and Clipping Planes
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                         MathHelper.PiOver4, 
                         aspectRatio, 
                         near, 
                         far
                       );

    // Update the View
    UpdateView();
}

The last line in the constructor is calling an “UpdateView” method. Let’s add this in:

public virtual void UpdateView()
{
    Vector3 cameraOriginalTarget = new Vector3(0, 0, -1);
    Vector3 cameraRotatedTarget = Vector3.Transform(cameraOriginalTarget, rotationMatrix);
    Vector3 cameraFinalTarget = position + cameraRotatedTarget;
    Vector3 cameraRotatedUpVector = Vector3.Transform(up, rotationMatrix);

    viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector);
}
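The idea in “UpdateView” is that the view matrix always looks one unit along the camera’s rotated forward axis. Here is a self-contained sketch of the same steps, using System.Numerics (whose CreateLookAt mirrors XNA’s) and the identity rotation for simplicity:

```csharp
using System;
using System.Numerics;

class UpdateViewSketch
{
    static void Main()
    {
        // With no rotation applied, a camera at (0, 10, 0) ends up
        // looking one unit down the negative Z axis, as in UpdateView().
        Vector3 position = new Vector3(0, 10, 0);
        Matrix4x4 rotationMatrix = Matrix4x4.Identity;

        Vector3 rotatedTarget = Vector3.Transform(new Vector3(0, 0, -1), rotationMatrix);
        Vector3 finalTarget = position + rotatedTarget;
        Vector3 rotatedUp = Vector3.Transform(Vector3.UnitY, rotationMatrix);

        Matrix4x4 view = Matrix4x4.CreateLookAt(position, finalTarget, rotatedUp);

        // Transforming the target by the view matrix lands it on the
        // negative Z axis in view space, one unit in front of the camera.
        Console.WriteLine(Vector3.Transform(finalTarget, view));
    }
}
```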

So, that is the camera initialisation code out of the way. Let’s add a few more methods so our other classes can inherit the functionality, but override them if the developer chooses to:

public virtual void Move(Vector3 vector)
{
}

public virtual void Update(GameTime gameTime)
{
}

Excellent. We now have a good base class that we can create new Cameras from. In the next part, we’ll create a “Free” camera for our first camera and to use in testing.

Content Pipeline – Part 2

In part 1 we created our Content Pipeline Extension Library, so we can now focus on the code. Let’s start off by removing the following method:

public override TOutput Process(TInput input, ContentProcessorContext context)

Also, we need to change the inherited type to “ModelProcessor” rather than “ContentProcessor”:

public class ProjectVanquishContentProcessor : ModelProcessor

Ok, the last thing before we can start coding is to add the following namespaces:

using System.Collections;
using System.ComponentModel;
using System.IO;

And declare some new variables:

string directory;
// Normal and Specular Map textures
string normalMapTexture, specularMapTexture;

// These Keys are used to search the Normal and Specular map in the opaque data of the model
// Normal Map Key
string normalMapKey = "NormalMap";
// Specular Map Key
string specularMapKey = "SpecularMap";

// Create a List of Acceptable Vertex Channel Names
static IList acceptableVertexChannelNames = new string[]
{
    VertexChannelNames.TextureCoordinate(0),
    VertexChannelNames.Normal(0),
    VertexChannelNames.Binormal(0),
    VertexChannelNames.Tangent(0),
};

We will expand on the Vertex Channel Names when we come to look at adding Skinned models. Continuing with the code, we’ll look at properties next. These properties will appear in the Visual Studio “Properties” window:

The first property we need to create is to override “GenerateTangentFrames”:

[Browsable(false)]
public override bool GenerateTangentFrames
{
    get { return true; }
    set { }
}

Next, we’ll create the properties for the Normal and Specular Keys.

[DisplayName("Normal Map Key")]
[Description("This will be the key that will be used to search the Normal Map in the Opaque data of the model")]
[DefaultValue("NormalMap")]
public string NormalMapKey
{
    get { return normalMapKey; }
    set { normalMapKey = value; }
}

[DisplayName("Specular Map Key")]
[Description("This will be the key that will be used to search the Specular Map in the Opaque data of the model")]
[DefaultValue("SpecularMap")]
public string SpecularMapKey
{
    get { return specularMapKey; }
    set { specularMapKey = value; }
}

Let’s look over that code. We are defining three attributes for each property, and they are largely self-explanatory. The “DisplayName” relates to the left-hand column in the “Properties” window, whilst the “Description” is what is displayed at the base of the “Properties” window. The third attribute defines a default value for the property.

Next, we’ll create the Normal and Specular Map Texture properties:

[DisplayName("Normal Map Texture")]
[Description("If set, this file will be used as the Normal Map on the model, overriding anything found in the Opaque data.")]
[DefaultValue("")]
public string NormalMapTexture
{
    get { return normalMapTexture; }
    set { normalMapTexture = value; }
}

[DisplayName("Specular Map Texture")]
[Description("If set, this file will be used as the Specular Map on the model, overriding anything found in the Opaque data.")]
[DefaultValue("")]
public string SpecularMapTexture
{
    get { return specularMapTexture; }
    set { specularMapTexture = value; }
}

Again, the same principles apply as for the previously defined properties: we define a display name and description, but this time we don’t set a default value, as this will be done in the code itself. We need three methods in total: one new method, plus overrides of two methods in the “ModelProcessor” class. We’ll start with the overrides first:

protected override void ProcessVertexChannel(GeometryContent geometry, int vertexChannelIndex, ContentProcessorContext context)
{
    string vertexChannelName = geometry.Vertices.Channels[vertexChannelIndex].Name;

    // If this vertex channel has an acceptable names, process it as normal.
    if (acceptableVertexChannelNames.Contains(vertexChannelName))
        base.ProcessVertexChannel(geometry, vertexChannelIndex, context);
    // Otherwise, remove it from the vertex channels; it's just extra data
    // we don't need.
    else
        geometry.Vertices.Channels.Remove(vertexChannelName);
}

In this override, we compare each Vertex Channel Name against our list of acceptable names. If the name exists in the list, we process the channel as normal; otherwise we remove it, as it’s just extra data we don’t need.
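Stripped of the content-pipeline types, the decision is plain list membership. A runnable sketch, with made-up channel names standing in for the VertexChannelNames values:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ChannelFilterSketch
{
    static void Main()
    {
        // Stand-ins for VertexChannelNames.TextureCoordinate(0) and friends.
        var acceptable = new List<string> { "TextureCoordinate0", "Normal0", "Binormal0", "Tangent0" };

        // A model might arrive with extra channels, e.g. vertex colours.
        var incoming = new List<string> { "TextureCoordinate0", "Normal0", "Color0" };

        // Keep acceptable channels, discard the rest; the same decision
        // ProcessVertexChannel makes for each channel.
        var kept = incoming.Where(acceptable.Contains).ToList();

        Console.WriteLine(string.Join(", ", kept)); // TextureCoordinate0, Normal0
    }
}
```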

protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
{
    EffectMaterialContent deferredShadingMaterial = new EffectMaterialContent();
    deferredShadingMaterial.Effect = new ExternalReference<EffectContent>("Shaders/RenderGBuffer.fx");

    // Copy the textures in the original material to the new normal mapping
    // material, if they are relevant to our renderer. The
    // LookUpTextures function has added the normal map and specular map
    // textures to the Textures collection, so that will be copied as well.
    foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
    {
        if ((texture.Key == "Texture") ||
            (texture.Key == "NormalMap") ||
            (texture.Key == "SpecularMap"))
            deferredShadingMaterial.Textures.Add(texture.Key, texture.Value);
    }

    return context.Convert<MaterialContent, MaterialContent>(deferredShadingMaterial, typeof(MaterialProcessor).Name);
}

public override ModelContent Process(NodeContent input, ContentProcessorContext context)
{
    if (input == null)
        throw new ArgumentNullException("input");

    directory = Path.GetDirectoryName(input.Identity.SourceFilename);
    LookUpTextures(input);
    return base.Process(input, context);
}

In the last snippet of code, you’ll notice that we have referenced a new Shader. This Shader is required in order to render our Color, Normal and Specular textures, combining them together to give us our final output. We will create this Shader in the Shaders folder of the “ProjectVanquishContent” project. Add a new Effect file called “RenderGBuffer” and delete the contents. Add the following:

float4x4 World;
float4x4 View;
float4x4 Projection;
float SpecularIntensity = 0.8f;
float SpecularPower = 0.5f;

These are our parameters for the shader. We’ll need to pass in the World, View and Projection matrices plus we can override the Specular Intensity or Power if we fancy changing them. We’ll define some Sampler States now:

// Define the Color RenderTarget
texture Texture;
sampler diffuseSampler = sampler_state
{
    Texture = (Texture);
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

// Define the Normal Map Texture
texture NormalMap;
sampler normalSampler = sampler_state
{
    Texture = (NormalMap);
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

// Define the Specular Map Texture
texture SpecularMap;
sampler specularSampler = sampler_state
{
    Texture = (SpecularMap);
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

Here we have our 3 Sampler States for the Color, Normal and Specular textures and we configure them to use Linear and Wrap. Let’s carry on with the Shader code and add in our Vertex Shader structs and define our Vertex Shader function:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TexCoord : TEXCOORD0;
    float3 Binormal : BINORMAL0;
    float3 Tangent : TANGENT0;
    // We will add additional variables when we add Skinned models
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float2 Depth : TEXCOORD1;
    float3x3 tangentToWorld : TEXCOORD2;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(float4(input.Position.xyz,1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.TexCoord = input.TexCoord;
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;

    // Calculate tangent space to world space matrix using the world space tangent,
    // binormal, and normal as basis vectors
    output.tangentToWorld[0] = mul(input.Tangent, World);
    output.tangentToWorld[1] = mul(input.Binormal, World);
    output.tangentToWorld[2] = mul(input.Normal, World);

    return output;
}

The last thing left to add to the Shader is the Pixel Shader:

struct PixelShaderOutput
{
    half4 Color : COLOR0;
    half4 Normal : COLOR1;
    half4 Depth : COLOR2;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = tex2D(diffuseSampler, input.TexCoord);
    
    float4 specularAttributes = tex2D(specularSampler, input.TexCoord);
    // Specular Intensity
    output.Color.a = specularAttributes.r;
    
    // Read the normal from the normal map
    float3 normalFromMap = tex2D(normalSampler, input.TexCoord);
    // Transform to [-1,1]
    normalFromMap = 2.0f * normalFromMap - 1.0f;
    // Transform into world space
    normalFromMap = mul(normalFromMap, input.tangentToWorld);
    // Normalize the result
    normalFromMap = normalize(normalFromMap);
    // Output the normal, in [0,1] space
    output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    // Specular Power
    output.Normal.a = specularAttributes.a;
    output.Depth = input.Depth.x / input.Depth.y;

    return output;
}
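The remapping between [0,1] texture values and [-1,1] normals used here (and its inverse on output) is easy to verify in isolation. A small C# sketch of the same arithmetic, with a made-up texel value:

```csharp
using System;
using System.Numerics;

class NormalPackingSketch
{
    // Same formulas as the pixel shader:
    static Vector3 Decode(Vector3 stored) => 2.0f * stored - Vector3.One;   // [0,1] -> [-1,1]
    static Vector3 Encode(Vector3 normal) => 0.5f * (normal + Vector3.One); // [-1,1] -> [0,1]

    static void Main()
    {
        // A texel from a normal map: (0.5, 0.5, 1.0) stores "straight out" +Z,
        // which is why flat normal maps look pale blue.
        Vector3 texel = new Vector3(0.5f, 0.5f, 1.0f);

        Vector3 normal = Vector3.Normalize(Decode(texel));
        Console.WriteLine(normal);          // the unit +Z vector (0, 0, 1)
        Console.WriteLine(Encode(normal));  // back to (0.5, 0.5, 1)
    }
}
```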

Excellent. Now we have our Vertex and Pixel Shader functions. We need a Technique in order to call these:

technique RenderGBuffer
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

That’s it done. We are nearing the end. We need to create our final method back in our “ProjectVanquishContentProcessor” file. The code is lengthy, but commented very well:

void LookUpTextures(NodeContent node)
{
    MeshContent mesh = node as MeshContent;
    if (mesh != null)
    {
        // This will contain the path to the normal map texture
        string normalMapPath;

        // If the NormalMapTexture property is set, we use that normal map for all meshes in the model.
        // This overrides anything else
        if (!String.IsNullOrEmpty(NormalMapTexture))
            normalMapPath = NormalMapTexture;
        else
            // If NormalMapTexture is not set, we look into the opaque data of the model, 
            // and search for a texture with the key equal to NormalMapKey
            normalMapPath = mesh.OpaqueData.GetValue<string>(NormalMapKey, null);

        // If the NormalMapTexture Property was not used, and the key was not found in the model, then normalMapPath would have the value null.
        if (normalMapPath == null)
        {
            // If a key with the required name is not found, we make a final attempt, 
            // and search, in the same directory as the model, for a texture named 
            // meshname_n.tga, where meshname is the name of a mesh inside the model.
            normalMapPath = Path.Combine(directory, mesh.Name + "_n.tga");
            if (!File.Exists(normalMapPath))
                // If this fails also (that texture does not exist), 
                // then we use a default texture, named null_normal.tga
                normalMapPath = "null_normal.tga";
        }
        else
            normalMapPath = Path.Combine(directory, normalMapPath);

        string specularMapPath;

        // If the SpecularMapTexture property is set, we use it
        if (!String.IsNullOrEmpty(SpecularMapTexture))
            specularMapPath = SpecularMapTexture;
        else
            // If SpecularMapTexture is not set, we look into the opaque data of the model, 
            // and search for a texture with the key equal to specularMapKey
            specularMapPath = mesh.OpaqueData.GetValue<string>(SpecularMapKey, null);

        if (specularMapPath == null)
        {
            // We search, in the same directory as the model, for a texture named 
            // meshname_s.tga
            specularMapPath = Path.Combine(directory, mesh.Name + "_s.tga");
            if (!File.Exists(specularMapPath))
                // If this fails also (that texture does not exist), 
                // then we use a default texture, named null_specular.tga
                specularMapPath = "null_specular.tga";
        }
        else
            specularMapPath = Path.Combine(directory, specularMapPath);

        // Add the keys to the material, so they can be used by the shader
        foreach (GeometryContent geometry in mesh.Geometry)
        {
            // In some .fbx files, the key might be found in the textures collection, but not
            // in the mesh, as we checked above. If this is the case, we need to get it out, and
            // add it with the "NormalMap" key
            if (geometry.Material.Textures.ContainsKey(normalMapKey))
            {
                ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
                geometry.Material.Textures.Remove(normalMapKey);
                geometry.Material.Textures.Add("NormalMap", texRef);
            }
            else
                geometry.Material.Textures.Add("NormalMap", new ExternalReference<TextureContent>(normalMapPath));

            if (geometry.Material.Textures.ContainsKey(specularMapKey))
            {
                ExternalReference<TextureContent> texRef = geometry.Material.Textures[specularMapKey];
                geometry.Material.Textures.Remove(specularMapKey);
                geometry.Material.Textures.Add("SpecularMap", texRef);
            }
            else
                geometry.Material.Textures.Add("SpecularMap", new ExternalReference<TextureContent>(specularMapPath));
        }
    }

        // go through all children and apply LookUpTextures recursively
        foreach (NodeContent child in node.Children)
            LookUpTextures(child);
}

That’s it! Firstly, thanks for sticking with it, but we now have a Content Processor for our models. You will notice that we are referencing two files, “null_normal.tga” and “null_specular.tga”. You’ll need to extract these from Roy’s source code and add them to your “ProjectVanquishContent” project.
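To summarise the fallback chain that “LookUpTextures” implements for each map (explicit processor property, then the opaque-data key, then the meshname_n.tga convention, then the null texture), here is a standalone sketch. The names are hypothetical and file existence is faked with a set, so it runs anywhere:

```csharp
using System;
using System.Collections.Generic;

class TextureLookupSketch
{
    // Mirrors the fallback chain in LookUpTextures for the normal map,
    // with File.Exists stubbed out by a set of known file names.
    static string Resolve(string property, string opaqueValue, string meshName, HashSet<string> files)
    {
        if (!string.IsNullOrEmpty(property))
            return property;                 // 1. explicit processor property
        if (opaqueValue != null)
            return opaqueValue;              // 2. key found in the opaque data
        string conventional = meshName + "_n.tga";
        if (files.Contains(conventional))
            return conventional;             // 3. meshname_n.tga beside the model
        return "null_normal.tga";            // 4. default null texture
    }

    static void Main()
    {
        var files = new HashSet<string> { "Ground_n.tga" };
        Console.WriteLine(Resolve(null, null, "Ground", files)); // Ground_n.tga
        Console.WriteLine(Resolve(null, null, "Wall", files));   // null_normal.tga
    }
}
```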

In the next part, we’ll start looking at rendering a model.

Content Pipeline – Part 1

I’ve decided to break this up into two posts. The first being the creation of the Content Pipeline project. So, let’s get going. Right mouse click the solution and click Add -> New Project. Select the Content Pipeline Extension Library (4.0) and give it a name. I’ve called mine “ProjectVanquishContentPipeline”:

Once the project has been created, you’ll see a class file called “ContentProcessor1”. Rename this to “ProjectVanquishContentProcessor”. You may see the following window:

If you do, click “Yes”. If you have the new class open, your code should look something like:

[ContentProcessor(DisplayName = "ProjectVanquishContentPipeline.ContentProcessor1")]
public class ProjectVanquishContentProcessor : ContentProcessor<TInput, TOutput>
{
    public override TOutput Process(TInput input, ContentProcessorContext context)
    {
        // TODO: process the input object, and return the modified data.
        throw new NotImplementedException();
    }
}

There is one more “ContentProcessor1” to rename in the “DisplayName” property. Just remove the 1.

[ContentProcessor(DisplayName = "ProjectVanquishContentPipeline.ContentProcessor")]

The next step we need to do is add a reference to our new project in our Content project. Right mouse click “ProjectVanquishTestContent” project and click “Add Reference”. If “Projects” isn’t selected, select it and locate your Content Pipeline Extension Library project and click OK.

Build your solution and you should have no errors. In the next part, we’ll add the code. The code won’t differ much from Roy’s version, but we’ll be looking at extending it to allow for Skinned models later on in the project.