Final Dissertation Post

This is the final post I will write on the development of my rendering engine as part of my dissertation. This project has been a gigantic learning curve, not only in C++ and DirectX but also in maintaining a blog and documenting a project's progress.

The repository for my dissertation artefact can be found on Github Here:

When I set out my milestones I had very little knowledge of graphics programming, so some of the milestones I set myself were trivial. They could, and should, have been grouped together, such as the individual lighting tasks, which I completed all at the same time anyway. Knowing what I know now, when I continue development on the project I will be able to judge far more accurately what functionality and tasks constitute an achievable milestone.

An issue I had over the course of this project was finding time, between the other projects I was part of, to write up the progress I was making. Writing is not my strong point, and although the posts themselves came together with relative ease, I had to teach myself to write them immediately after completing the work. There were many points where I spent all of my time on other work and neglected this project, only to then spend all my time on this one, powering through posts while neglecting the other projects. I am now far more aware of how I need to manage my time so that I can work effectively on multiple projects concurrently.

Furthermore, getting posts proof-read proved quite an issue due to the complexity of the project; on a few occasions I had posts backed up while I was trying to find proof-readers. Unfortunately, this is something that comes with working on more niche subjects.

One task had me wasting many hours chasing solutions to many problems: retrieving material data from my FBX files. I found the FBX SDK documentation very unhelpful at times; demonstrations were often focused on creating and exporting data rather than importing meshes and the like. I regret spending so long trying to get anywhere with that task instead of treating it as something to come back to later, once I had completed more approachable and urgent tasks such as adding specular and normal maps to my shader, or the scene graph, which is currently only half implemented.

Overall I am happy with how far I have got through the milestones I set out: I completed all of the minimum tasks and two of the advanced ones. I had also started implementing the scene graph, only to stop working on it in favour of tidying and organizing my code.

Specular Maps

Specular maps are used in 3D rendering and games to specify which parts of a material are shiny or more reflective. The games industry has moved towards Physically Based Rendering (PBR), which uses both roughness and reflectivity maps for more physically accurate results; however, as a beginner I have decided to stick with what I know and implement the more basic model, leaving PBR as a feature to be added far in the future.

Adding a specular map to the shader was very simple. Similarly to how the diffuse texture is multiplied with the diffuse value of the light, we can control the specular intensity with a texture in the same way. To add the specular map to my shader, I added another texture to the pixel shader named gTexSpec, assigned it to the second texture register, and updated its sub-resource in the same way as the diffuse map, using 1 as the first parameter since I am using slot 1 for the specular map.
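Since the shader itself is not shown here, a scalar sketch of the idea (illustrative names, not the engine's actual code): the sampled specular-map texel simply scales the specular term, so a black texel suppresses the highlight entirely.

```cpp
// Scalar sketch of how a specular map modulates the lighting equation.
// In the real shader this is per-channel HLSL, with specMapTexel coming
// from a sample of gTexSpec bound to texture slot 1.
float specularTerm(float lightSpecular, float materialSpecular, float specMapTexel)
{
    return lightSpecular * materialSpecular * specMapTexel;
}
```

A texel of 0 (black in the map) kills the highlight completely, while 1 (white) leaves it at full strength.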

To demonstrate the effect of the specular map, I have rendered a plane with a charcoal material both with and without the spec map.


This milestone took far less time to complete than I had anticipated back in October when I wrote my proposal, as I had already learned everything I needed when adding support for diffuse textures. Had I realized this, I could have completed this task at the same time, much as I did with my lighting milestones, saving time and freeing up more space in the advanced milestones for extra tasks.

Reference (n.d.). Charred Wood. [online] Available at: [Accessed 5 May 2017].

Tidying Things Up

The last few commits started out as the beginnings of the scene manager; however, as I began to move code around I realised that my D3D11App class was far too complicated. Its purpose was originally only to deal with DirectX matters, but as time passed it slowly turned into what can only be described as spaghetti code. Therefore, I decided it would be far better to tidy up the code, keep each class handling only what it should, and have them make sense from an OOP perspective.

An important change is that I have renamed my D3D11App class to D3D11Graphics, and this class now deals only with initializing D3D and drawing. All the remaining functionality that D3D11App had has been moved to the new NoobieEngine class (Noobie being the name I have settled on for the engine), and the Win32 code, such as window creation and message handling, is now dealt with in the Source.cpp file alongside the program's entry point.

My plan with this change is not only to make it easier for the programmer to add their own content to render, but also to let me implement different rendering APIs such as DirectX 12 or OpenGL, as per my advanced features list. I would only have to add a new Graphics class and slightly modify the NoobieEngine class to accommodate the variations different APIs bring, without changing the workflow of programming the engine. In its current state, if the user wishes to add meshes, move objects or modify the scene in any way, all of this can be done within this new class without going anywhere near unrelated code.

Changes to Materials

At one point each mesh had a pointer to a material; however, for some reason I had changed this and had been using a single material pointer in D3D11App. I have now reached the point where the meshes I am using may well require different materials, and eventually these will be imported with the FBX SDK, so I added the pointer back into the mesh class and removed it from D3D11App.

Additionally, materials now have a pointer to their diffuse texture, and I am no longer manually swapping the texture from mTextures. This means that my UpdateMaterial method now only requires the device context and a material. This is a far more sensible way of dealing with materials and textures, as the imported textures are always used with a material anyway.

When I start using bigger scenes that contain many materials, I will be able to sort my renderables by the material they use. Updating the GPU's resources is a costly procedure, so by grouping all meshes by material I will only need to update that data the minimum number of times necessary: once per material, after everything using it has been rendered. Reducing the number of GPU updates should, in theory, lead to increased performance.
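The grouping idea can be sketched with hypothetical Renderable and Material types (not the engine's actual classes): sorting the draw list by material pointer collapses the number of material switches per frame.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical types for illustration.
struct Material { int id; };
struct Renderable { const Material* material; };

// Count how many times the bound material would change for this draw order.
int CountMaterialSwitches(const std::vector<Renderable>& draws)
{
    int switches = 0;
    const Material* bound = nullptr;
    for (const Renderable& r : draws)
    {
        if (r.material != bound) { ++switches; bound = r.material; }
    }
    return switches;
}

// Group renderables so each material only needs to be bound once.
void SortByMaterial(std::vector<Renderable>& draws)
{
    std::sort(draws.begin(), draws.end(),
              [](const Renderable& a, const Renderable& b)
              { return a.material < b.material; });
}
```

With two materials drawn interleaved, sorting halves the number of resource updates.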

In the future, as I add support for more texture types in the material, I would like each map slot to hold both a texture reference and an alternative colour, much like how materials in Unity 3D work. This will allow me to set a colour for a slot when I do not intend to use a texture there, choosing which to use depending on whether the texture is set.
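That texture-or-colour slot could look something like this (illustrative types, not the engine's current code):

```cpp
struct Colour { float r, g, b, a; };
struct Texel  { unsigned char r, g, b, a; }; // stand-in for a sampled texture

// A material slot holds an optional texture and a flat fallback colour,
// similar to how a material slot in Unity 3D falls back to its colour field.
struct MaterialSlot
{
    const Texel* texture = nullptr;
    Colour fallback { 1.0f, 1.0f, 1.0f, 1.0f };
};

// Use the texture when one is set, otherwise the flat colour.
Colour ResolveSlot(const MaterialSlot& slot)
{
    if (slot.texture != nullptr)
    {
        const Texel& t = *slot.texture;
        return { t.r / 255.0f, t.g / 255.0f, t.b / 255.0f, t.a / 255.0f };
    }
    return slot.fallback;
}
```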

Textured Sponza Render

In its current state, the rendering engine can take in an FBX file and use its mesh data, and can load either 24- or 32-bit TARGA files to be used as textures. At this point I want to finally get a textured render of the Sponza scene, and the only thing stopping that is not being able to get texture data from my FBX file, so for this instalment I decided to take the long and tedious route of separating the meshes and importing all the parts one by one.

Originally, I had intended to work with the FBX SDK to figure out how its materials and textures work; however, after a few long wasted hours of going around in circles I had got nowhere.

The solution I eventually landed on involves the following process. (Sadly, I did not take any screenshots along the way.)

Firstly, I matched up the meshes in the Max file with the textures they were using. I did this by dragging on the texture I thought was correct, using trial and error, then adding the texture name to the end of the object name. To my knowledge there is no way in 3DS Max to populate the Material Editor with the materials already in the scene, so this was the only way I knew. It was an arduous task.

Secondly, I attached together all the objects that used the same texture. This was easy to do using Select by Name in Max, selecting all the objects with the same texture suffix set in the first step, then attaching them in the Editable Poly Modify tab.

Finally, I exported all the meshes that had been joined by texture as FBX files, with the texture name as the suffix.



After all of this, all that was left was adding the various lines of code to the start function of the program to load the meshes and their textures into their arrays (in the same order), then adjusting the rendering code to change the texture inside the for loop. And of course, not forgetting to place the all-important camera at a cool and jaunty angle, we are finally ready to get a render (or two).



Important Lessons

A major discovery I have made is why people check that a pointer is not NULL before deleting it: if for some reason a destructor is called before an object has been fully initialized, there is a chance that some pointers are still uninitialized, causing a crash on delete. I will at some point go through and add these checks to all my classes, just in case.

if (pObject != nullptr)
{
    delete pObject;
    pObject = nullptr; // avoid a dangling pointer / double delete
}

Additionally, I changed mTexture to be an array of textures. In the future I may use either a list or a dictionary and have the texture index referenced in the materials, but for this render I only needed the simple array.

Finally, I removed the need for my axis-conversion function and the backwards polygon reading inside the LoadFBX method by using FBX's FbxAxisSystem::DirectX.ConvertScene(scene) function. This tells FBX to convert the whole scene to DirectX's y-up, left-handed axis system, regardless of whatever system it was exported with. (This does make me wonder why importing an FBX into Unity3D does not perform the same conversion, as I have always had to rotate my meshes' pivots.)

Targa: 24-bit Texture Support

At this point in the project I just want a render of the Sponza scene with all its textures applied; it is now close to the end of the project and I am getting excited to see the whole thing come together.

I ran into a problem with the provided textures: they were all in JPEG format, which the engine cannot import, and I could only get GIMP to export TARGA files at 24 bits, which was not supported either. So I made the sensible choice and added support for 24-bit Targas.

The method

Initially I had to convert the textures to .tga in GIMP, which took far longer than I had anticipated. In Photoshop I could have set up a simple macro, but I rarely use GIMP and did not know how to do the equivalent there, so I had to do it the old-fashioned way and manually export each file.

Then I started on modifying the importer in my content manager. The first port of call was removing the if statement that threw an error if the texture's bit depth was not 32; I replaced this with a switch block and placed the existing colour-reversing loop under case 32.

To make the data usable as a texture in DirectX I had to add alpha data to the texture, inserting a new byte after every three existing ones. To do this I created a new array of chars of size width * height * 4, which I fill using the data from the file, simultaneously swapping the blue and red channels as I do with the 32-bit files.

This ended up being very simple to do, as I just iterate two indices at the same time, one for the 24-bit data and one for the 32-bit data, advancing them by 3 and 4 respectively. Realizing I could do this in a single for loop was helpful and I am sure I will be able to use the trick elsewhere in similar situations. I then fill in the new, larger array using the three existing bytes and set the fourth byte to 255: completely opaque.

The loop is as follows:

for (size_t tex24 = 0, tex32 = 0;
    tex24 < width * height * 3;
    tex24 += 3, tex32 += 4)
{
    imgData[tex32 + 0] = rawData[tex24 + 2]; // swap red and blue
    imgData[tex32 + 1] = rawData[tex24 + 1];
    imgData[tex32 + 2] = rawData[tex24 + 0];
    imgData[tex32 + 3] = (char)255;          // opaque alpha
}

Side note Bugfix

I was running into an error where, if textures were a certain size, the width or height would end up negative. This was due to the header bytes not being read as unsigned, so the low byte was sign-extended and corrupted the result. To fix this I cast the individual parts to unsigned chars before performing the bitwise maths; this uses a few more lines and four temporary variables, but it makes what is happening much clearer and fixes the bug.

unsigned char widthLo = buffer[12];
unsigned char widthHi = buffer[13];
unsigned char heightLo = buffer[14];
unsigned char heightHi = buffer[15];

header.width = widthHi << 8 | widthLo;
header.height = heightHi << 8 | heightLo;
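A minimal demonstration of the bug being fixed: on platforms where plain char is signed, a high low-byte would sign-extend and swamp the high byte, while going through unsigned char gives the expected dimension.

```cpp
// TGA stores 16-bit dimensions little-endian: low byte, then high byte.
// Reading the low byte through a plain (signed) char can sign-extend:
// 0xD0 becomes -48, i.e. 0xFFFFFFD0, and the OR wipes out the high byte.
int TgaDimension(unsigned char lo, unsigned char hi)
{
    return hi << 8 | lo; // safe: both bytes are unsigned
}
```

For example, a width of 720 is stored as the bytes 0xD0, 0x02, and 2 * 256 + 208 correctly reconstructs 720.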

Also, when exporting to Targa in GIMP it is important to deselect the RLE compression option; leaving it checked led to a few wasted hours of debugging the 24-bit converter, as the resulting image looked like this:


The FBX SDK – FBX Importing

Autodesk’s FBX is a common file format for exporting 3D model data for use in games and other 3D applications; as you are reading this I will assume you have already worked with it, so I shan’t go into detail about it. I have used it over the last few years with 3DS Max, Unity 3D and Unreal Engine 4. Autodesk provides the SDK on their website for free, and this is what I have decided to use to import meshes into the engine. Learning the basics was a rather steep learning curve, as the documentation can be cryptic and hard to navigate, so I would like this post to serve as more of a tutorial, showing how I used it and what I did wrong.

Installation and Setup

The SDK installer can be found on Autodesk’s website. On the downloads page, grab the version appropriate for your project; I am using FBX SDK 2017.1 VS2015 for Windows (vanilla, not UWP). For the sake of simplicity, I left it with the default installation location on my C drive.

Linking and Including

To use the SDK, you must include and link it in your project. Open your project’s Properties window > VC++ Directories > Library Directories and edit the string, adding the location of whichever library folder is right for your build settings.


Here is the directory I use for 32-bit/Debug, but you can just repeat the process to add the other libraries for x64 and Release builds.

You must also have the provided libfbxsdk.dll in the build directory for the compiled code to run. At the end of this post there is a little tip to automate this, as with the linked libraries.

The FBX Format

The FBX format uses a node hierarchy, much like how a scene in Unity3D is structured. There is a root/scene node which has a tree of other nodes beneath it. Nodes can have attributes such as lights, meshes and splines, among many others; here we are interested in meshes.

Node Hierarchy

The hierarchy looks roughly like this:


Scene ¬
    Node : Mesh ¬
        Node : Mesh
        Node : Mesh
    Node ¬
        Node : Camera ¬
            Node : Light

You can navigate the tree and inspect each node’s attributes using loops or recursion, whichever you are comfortable with. I iterate through the hierarchy recursively to find all nodes with FBX meshes attached and add them to an array, as I am more comfortable using recursion in this case.
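The recursive search can be sketched with simplified stand-ins for the SDK types (the real code works on FbxNode children and checks each node's attribute type):

```cpp
#include <vector>

// Simplified stand-in for FbxNode and its mesh attribute.
struct Node
{
    bool hasMesh;
    std::vector<Node*> children;
};

// Depth-first recursion: collect every node with a mesh attached.
void CollectMeshNodes(Node* node, std::vector<Node*>& meshNodes)
{
    if (node->hasMesh)
        meshNodes.push_back(node);
    for (Node* child : node->children)
        CollectMeshNodes(child, meshNodes);
}
```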

FBX Mesh

To keep this simple, say we only need to know the positions of the vertices. We do not care about normals or UV co-ordinates for the time being; using that data requires only a small addition to the loops, and the functions involved are very similar.

FBX describes mesh geometry using “Control Points”, “Polygons” and “Polygon Vertices”.


Each polygon vertex is an index into the array of control points. For example:

Polygon    Vertices
1          1, 3, 2
2          2, 3, 4
3          3, 5, 4
4          3, 6, 5
5          1, 6, 3

A simplified version of the code I use to copy the data is as follows:

size_t pgCount = mesh->GetPolygonCount();             // number of polygons
Vertex* vertices = new Vertex[pgCount * 3];           // assume triangles only
FbxVector4* controlPoints = mesh->GetControlPoints(); // array of position vectors

// Loop through all polygons
for (size_t currentPg = 0; currentPg < pgCount; currentPg++)
{
    // Loop through the vertices in the polygon
    for (size_t currentV = 0; currentV < 3; currentV++)
    {
        // Look up the control point for this polygon vertex
        // (2 - currentV reverses the winding order)
        vertices[currentPg * 3 + currentV].pos =
            controlPoints[mesh->GetPolygonVertex(currentPg, 2 - currentV)];
    }
}

I then construct a mesh and initialize it with this vertex data which is ready to render!

Resource Management

While working on this milestone I had to iterate on my test meshes to fix rotations and scaling. Continually having to copy my textures and meshes into the build folder got very repetitive, so after a quick Google search and some poking around in the project properties I discovered “Build Events”. In Visual Studio you can add commands to be executed before, during or after compiling, so I added a little command that copies both my resources and my DLLs into the build directory after the code compiles. This also means I won’t forget to drag updated files into the build folder after compiling, which previously led to time sunk looking for non-existent bugs.

Simply adding the two commands xcopy /Y /E "$(ProjectDir)res" "$(OutDir)res" and xcopy /Y /E "$(ProjectDir)dll\x86\Debug\*" "$(OutDir)" to the post-build events will copy the \res\ folder and the contents of the \dll\ folder into the build location. It will do this for both Debug and Release.

This little trick is a convenient quality-of-life addition that can be used in all sorts of projects, and I thought it would be worth sharing in case any readers have not seen it before.

PSA Check Function Parameters

I will finish with a short PSA: do not forget to update every call site when you change the parameters of a function! One change I made to the Render function of the Lit shader was that, instead of taking the World, View and Projection matrices separately, it now takes the combined WorldViewProj matrix, its inverse, and the World matrix on its own.

I made the mistake of reverting to the old setup while writing the new OnRender method. All three parameters are still matrices, so it compiled, but understandably nothing was rendering as they contained completely the wrong values, and I lost about six hours rewriting and debugging the import code before I found the cause of the issue.

Thank you

Thank you very much for reading so far and I hope that this little guide will help you.

If you have any questions feel free to write up a comment or contact me at my email (Listed on my GitHub Profile).

Let there be light

I must begin by apologising for the lack of updates. You may have seen on my YouTube account and Facebook that I am also working on a game as part of another module here at university. My attention was somewhat taken hostage by that project as the weeks drew close to our final presentations for the year; I have been working hard on getting it up and running, which has left this project rather neglected. As a side note, I am aiming to write about my other projects here and document their progress in the future, so keep an eye open for that.

Over the Christmas break, I decided to continue following 3D Game Programming with DirectX 11 (3D Game Programming) and strayed away from the milestones that I had set out: implementing textures and importing FBX meshes. I had started to add textures at the end of November using RasterTek again; however, I kept running into issues that I believe mostly came down to me not understanding the code, so I eventually gave up on that branch and read into the next chapter of 3D Game Programming: lighting.


As I hadn’t yet implemented mesh file importing I needed to hard-code meshes to render. Additionally, the vertices now need normals for directional, point and spot lights, so the Vertex struct now has an XMFLOAT3 normal field instead of a colour. This is reflected in the shader’s header.

I wrote two cubes for testing the lights: a “regular” cube with normals facing away from the faces, producing hard faces and sharp corners; and a “soft” cube with the normals all facing radially outwards, which better shows off the effect of point lights and the like by loosely imitating a sphere, albeit a very low-poly one.
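The “soft” cube’s normals can be generated rather than hand-typed: each vertex normal is simply the normalized direction from the cube’s centre to the vertex. A sketch with a minimal vector type (not the engine’s XMFLOAT3 code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Radial normal: direction from the cube's centre to the vertex, normalized.
Vec3 RadialNormal(const Vec3& vertex, const Vec3& centre)
{
    Vec3 d { vertex.x - centre.x, vertex.y - centre.y, vertex.z - centre.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```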

Here’s a render from 3DS Max of what I hoped to achieve:


As you can see, the flat-shaded “hard” cube does not show off the point light very effectively.

The Shader

The lights are described in “LightHelper.h”; I can include this in any future shaders that I write, saving me from having to tweak them in multiple places if I adjust the actual shaders. Just like the Vertex struct, they are mirrored in their own HLSL header.

I will assume that you know what ambient, directional, point and spot lights are, so I will not go into detail about them; they are explained in depth in 3D Game Programming, along with the various equations needed to compute them correctly.

The bare minimum code required to compute these lights is written in the book, so my code is almost identical when it comes to the individual functions. At this stage I am still learning the functions that HLSL provides, but as I start to implement textures for colour maps, normal maps and so on, I can see that I have the knowledge to continue writing them on my own.

At this point the directional light works, but both the point light and spot light have issues I am not happy with. There is a problem with the attenuation of the point and spot lights which means the intensity of the light must be in the range 0 to 255 rather than 0 to 1; this caused many hours of debugging, as any value below about 10 is so dim that it is almost impossible to see. Additionally, attenuation does not necessarily make the light fade out exactly at its radius, so either the light will not have faded, leaving a sharp “edge”, or it will fade long before reaching the edge. I can live with this for now and just fiddle with the attenuation until it looks correct (I know tweaking is the point of attenuation, but I would rather it worked sensibly at an intensity of 1 without adjustment just to show up).
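For reference, the attenuation model from 3D Game Programming divides the light's intensity by a quadratic in the distance, which is why small intensities vanish so quickly:

```cpp
// Attenuation as in Luna's 3D Game Programming: the light's contribution
// is scaled by 1 / (a0 + a1*d + a2*d*d), where d is the distance from the
// light and a0..a2 are artist-tuned coefficients.
float Attenuate(float d, float a0, float a1, float a2)
{
    return 1.0f / (a0 + a1 * d + a2 * d * d);
}
```

With a0 = 0 and a1 = 0.5, for example, the scale at four units away is already 0.5, so an intensity near 1 fades to near-invisibility within a handful of units.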

Both of these scenes have the same attenuation on the light but the range is greater in the right image.



When uploading resources to the GPU for use in shaders, we use constant buffers (cbuffers): blocks of memory that we can update from the CPU. I was using one to tell the shader where the object I want to render should be, via the WorldViewProj matrix. This is changed on a per-object basis, as the position of each object is most likely different; however, updating a cbuffer can be an expensive operation, and lighting data only needs to change once per frame. For this reason I use one buffer for per-frame data such as lights, and a per-object buffer for positions and other data unique to individual objects. In the future a per-material buffer will be added so that all objects using the same material can be rendered at the same time; this is an important optimisation, as constantly switching materials is an obvious waste of time and uploading textures is costly.

I had been struggling to get a second buffer working and felt like I was getting nowhere, so, while optimisation wasn't a priority, to save my sanity I added the lights to the existing per-object buffer. Once I had got all of the lights working I went back to the problem of needing multiple cbuffers and discovered that the size of a cbuffer needs to be a multiple of 16 bytes. I should have realised this much earlier, as 3D Game Programming mentions it and it would have saved me a lot of time.
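A compile-time guard makes the 16-byte rule impossible to forget: pad the CPU-side mirror of the cbuffer explicitly and static_assert its size. This is a hypothetical struct sketching the idea, not my actual buffer layout:

```cpp
struct Float3   { float x, y, z; };
struct Float4x4 { float m[16]; };

// CPU-side mirror of a per-object constant buffer. HLSL cbuffer sizes
// must be a multiple of 16 bytes, so the trailing float3 gets explicit
// padding rather than relying on the compiler.
struct PerObjectBuffer
{
    Float4x4 worldViewProj; // 64 bytes
    Float3   eyePosW;       // 12 bytes
    float    pad;           //  4 bytes of padding -> 80 bytes total
};

static_assert(sizeof(PerObjectBuffer) % 16 == 0,
              "cbuffer sizes must be a multiple of 16 bytes");
```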


Here are some renders. (These were made after implementing the FBX importer)



I also discovered that HLSL shaders allow you to include other files with an #include pre-processor directive, just like in C++. 3D Game Programming uses this for the lights helper; to avoid rewriting structs for individual shaders, I have a sort of header specifically for the “Lit” shader.

Drawing a Spinning Cube

Over the past two weeks I have been working towards completing my first programming milestone: Drawing a cube in DirectX. This has been a massive learning experience for me; I have learned about DirectX and Direct 3D in general, the rendering pipeline, shaders and many useful standards relating to C++ and OOP.


In short, I have managed to create a cube mesh; draw it using vertex colours and a set of very basic pixel and vertex shaders; and spin it on the y axis for the sake of “why not”.
The three main components that make up this program, on top of the necessary initialization, are the Mesh class, the Basic shader class and the Camera class.

The Mesh class handles the vertex buffer (essentially the list of vertices) and the index buffer, and also holds a matrix for its world transform. Not only does it take our vertex and index arrays and put them into the respective buffers, it also has a Render method that sets them on the GPU, ready for our shaders to use.

Our Shader class compiles the shader programs we want to use and deals with both mapping any constant buffers (variables we can take from system memory and then use on the GPU in shaders) and actually drawing the geometry we’ve sent it.

Finally, the Camera class contains the view matrix (the position and rotation of the viewpoint in the “world”) and the projection matrix (the frustum of the camera’s view). We can change these to move the viewpoint or to keep the projection correct as the screen’s dimensions change, and we multiply them by a mesh’s world matrix for drawing.

I have also been trying to use correct conventions for naming and using variables. I found a very comprehensive collection of naming conventions online and have been using it as a guide. The examples that come to mind are:

  • using m for class members
  • p for pointers
  • g for global variables
    • additionally using the global scope operator :: to emphasize the fact that it’s in the global scope.

These are proving very helpful, as I can see at a glance what an object is and how I should access it.


Moving to DirectX 11 has presented some problems: certain libraries used in books and tutorials are no longer available, or I just cannot figure out what I need to include to get them working.
The worst example of this is the “Effects11” framework, which ships with the DX SDK from June 2010 and is essentially a convenient way of creating complex effects with shaders. However, as DX11 has become somewhat unsupported since its bigger brother DX12 was introduced, it was more hassle trying to build and use it than to just figure out how to get the same effect with regular shaders. I will add that this is a bit of a pain, as Luna (2012) uses effects throughout the book.

For a basic introduction to shaders, RasterTek provides a very good tutorial series on using DX11. It was enough to get me started and complete the milestone, so I will keep the site handy for when I hit any more bumps in the road.

Youtube Video

Finally, what you’ve been waiting for: Proof!

It may not be that impressive compared to what I can do with Monogame or a full game engine such as UE4, but I am pleased with it and with what I have learned.

Proposal Change 1

As per my initial proposal, I have been learning the DirectX 12 API (DX12) using Frank Luna’s 3D Game Programming With DirectX 12.
I knew from light research and word of mouth that it was supposedly a lower-level API than previous versions of DirectX; however, as my delivery on this week’s planned milestone has demonstrated, it is far more confusing than I had anticipated.

I have been following the lessons and studying the examples that Luna provides with the text, yet in the space of a week all I had achieved was making my window go from the default white I had become used to, to the slight blue of the cleared back buffer. In short, DirectX 12 is too low level, too complicated.

I have decided instead to use DirectX 11, which not only has the benefit of a plethora of resources, tutorials and books due to its age, but is also considered a higher-level API.

As a quick personal primer to DX11 I followed DirectXTutorial‘s DX11 tutorial up to drawing a triangle in the window, implementing it in a class similar to the one I used in my attempt at DX12, so that the window and all handling of D3D sit in that class and are easily read.
As you can see, I managed to draw this lovely “Triangle of Doom”!


Due to this change, I have adjusted my milestones by moving them forward by one week. This small adjustment should be enough, as most of the theory behind DX11 is very similar to DX12, so I already know much of the basics required. Though this slippage is an annoyance, I am glad that I caught it at an early stage, and I am confident that I will still complete this project.