Along with how skin vertices work, normals are one of the hardest aspects of Quake models to understand. Both how they are stored in the .mdl format and what the Quake engine does with them are obscure, but hopefully this article will shed some light on both. The idea behind normals is that they store which direction the surface faces at each vertex, so that the engine can calculate better lighting and shading for the model. We’ll start with what they do for us visually.
For this article, we’re going to work with a simple model – the Grey Knight. He’s the standard quake knight model with one big change: we’ve painted his skin all over with the same tone of grey. This means that any shading we see on the model comes from the normals, not the skin. For comparison, here is a screenshot of the Grey Knight. This shot uses Fitzquake in fullbright mode, because that mode turns normals off and lets us see what the model is like without them. The answer? One colour all over:
The rest of the screenshots will come from Winquake, to prove that normals have always been a feature of the Quake engine, and weren’t just added to GLQuake later. Once Winquake knows which way the surfaces are pointing, it can light and darken parts of the model to define its 3D shape better. At the top we mentioned that the normals are stored per-vertex. Why do that, when we can just calculate which way the triangles face from their vertex coordinates?
The above screenshot shows the result of lighting by triangle faces. While it’s an advance over the initial picture for showing the 3D shape of the model, it might be a bit too explicit. The shading emphasises the triangulation, and our model is too low poly for that. The fix is Gouraud shading, which takes two steps. Firstly, at each vertex we find all the triangles which meet there and average their normals, as seen in the second screenshot. Then we interpolate the values smoothly across each polygon between the values at its vertices.
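The averaging step can be sketched in a few lines of Python – this is my own minimal illustration of the rule, not engine or exporter code, and the function names are invented for the example:

```python
import math

def unit(v):
    """Scale a vector to unit length (zero vectors are left unchanged)."""
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / length for x in v)

def vertex_normals(vertices, triangles):
    """Step one of Gouraud shading: at each vertex, average the face
    normals of every triangle which meets there."""
    sums = [[0.0, 0.0, 0.0] for _ in vertices]
    for ia, ib, ic in triangles:
        a, b, c = vertices[ia], vertices[ib], vertices[ic]
        u = [b[i] - a[i] for i in range(3)]
        w = [c[i] - a[i] for i in range(3)]
        n = unit((u[1] * w[2] - u[2] * w[1],   # cross product of two
                  u[2] * w[0] - u[0] * w[2],   # triangle edges gives
                  u[0] * w[1] - u[1] * w[0]))  # the face normal
        for i in (ia, ib, ic):
            for axis in range(3):
                sums[i][axis] += n[axis]
    return [unit(s) for s in sums]
```

Step two, smoothing between the vertex values, is then done per-pixel by the renderer rather than stored in the model.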
Although you can see banding (something GLQuake reduces), the shading disguises the polygons much better. The model appears to be one big smoothed surface. While it’s easy to see the benefit in general, in places you might want to add a hard edge back, like between the boots or gloves and the rest of the body. This can be done:
The effect is subtle but effective: highlights on the boots and shading at the elbow. While you can bake this kind of shading into the skin (and still should), the normals add an extra level to it because they are dynamic – they react to motion.
The smoothing groups in the last screenshot were created by splitting the vertices which sit on the seams, so the triangles on the boots are disconnected from the triangles on the thigh. This way when we average the normals at the separate vertices, we get different values, changing the shading. It’s worth mentioning at this point that the rule “the normal at a vertex is equal to the average of the neighbouring faces” is not imposed by the engine, it’s a decision made by the model exporter. In this case I used QME, which always follows this rule, and modified the model geometry accordingly.
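The vertex-splitting trick can be sketched like so – a hypothetical illustration assuming a simple indexed-triangle representation, with names of my own invention rather than anything from QME:

```python
def split_seam(vertices, triangles, seam, hard_tris):
    """Duplicate the vertices in `seam` so that the triangles listed in
    `hard_tris` reference the copies. Averaging normals afterwards then
    gives each side of the seam its own value - a hard shading edge."""
    vertices = list(vertices)
    remap = {}
    for v in sorted(seam):
        remap[v] = len(vertices)
        vertices.append(vertices[v])  # same position, new index
    new_triangles = []
    for index, tri in enumerate(triangles):
        if index in hard_tris:
            tri = tuple(remap.get(i, i) for i in tri)
        new_triangles.append(tri)
    return vertices, new_triangles
```

After the split, the boot triangles and the thigh triangles no longer share any vertex indices, so the averaging rule computes their shading independently.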
This technique brings us to the connection with the skinmap article, which teased this article in the final paragraphs. In the former article we saw another reason to split a vertex in two – if we want to separate the polygons on the skinmap. The potential issue is that now the vertices are separate, the averaging normals rule will probably give them different normal values. This will create a hard edge in the shading, which may be undesired! The edge will be right where the texture seam occurs, drawing attention to something we’re trying to disguise.
What we would prefer is to calculate the normal which would occur if we hadn’t split the vertex, then store that normal value on both vertices. This is how md3tomdl works, although again we ride on the coattails of the md3 format. Doing this is standard practice for an md3 file, and we just replicate it vertex for vertex, normal for normal. The hard work and calculation is done by the md3 exporter.
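That weld-then-share idea can be sketched as follows – my own rough reconstruction of the principle, not the actual md3tomdl code: sum face normals into buckets keyed by vertex position, so duplicates split only for the skinmap all receive the same smooth normal.

```python
import math

def unit(v):
    """Scale a vector to unit length (zero vectors are left unchanged)."""
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / length for x in v)

def welded_normals(vertices, triangles):
    """Average face normals by vertex *position* rather than by index,
    so vertices split only for the skinmap still share one smooth
    normal and no hard edge appears along the texture seam."""
    sums = {}
    for ia, ib, ic in triangles:
        a, b, c = vertices[ia], vertices[ib], vertices[ic]
        u = [b[i] - a[i] for i in range(3)]
        w = [c[i] - a[i] for i in range(3)]
        n = unit((u[1] * w[2] - u[2] * w[1],   # face normal from the
                  u[2] * w[0] - u[0] * w[2],   # cross product of two
                  u[0] * w[1] - u[1] * w[0]))  # triangle edges
        for i in (ia, ib, ic):
            key = tuple(round(x, 5) for x in vertices[i])
            acc = sums.setdefault(key, [0.0, 0.0, 0.0])
            for axis in range(3):
                acc[axis] += n[axis]
    return [unit(sums[tuple(round(x, 5) for x in v)]) for v in vertices]
```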
This means that we can use the smoothing group features of our chosen modelling package to define any hard edges or unusual smoothing we desire, then export through md3 to mdl. The export tools automatically split vertices where adjacent polygons have different smoothing, while preserving the smoothing on vertices split for other reasons. There’s just one problem now: if we edit this model in QME and save it, then we throw away all this information and apply the old shading rule instead. This is why I recommend that you don’t edit the models from this site in QME – most of them will lose their normals data if you do. For example, here’s how the shading on my ogre model changes when saved in QME:
That line down the chest is the ugliest change, but also notice the difference at the back of the grenade launcher – QME has shaded it flat because it’s split on the skinmap. Conversely, the flat shading on the body of the chainsaw was an intentional smoothing choice you can see on the original. If you want to tweak a model without QME, you might try AdQuedit, which lets you add or replace skins without modifying the vertices, and my own homegrown python module qmdl which lets you import skins, rename and group frames.
Now that we’ve got the ideas behind normals down, it’s time to learn all the technical details – or rather where the Quake engine cuts corners. The normals are stored for each vertex in each frame, but indirectly and approximately. The Quake engine has a built-in table of 162 normal vectors, and the normal for each vertex in the model file is really a single byte indicating which of these vectors best approximates the true normal. This approximation is not a great problem in practice, but if you are adding hard edges to your model with smoothing groups, you will need the polygons to have sufficiently different facings for the quantised normals to come out different.
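Picking that byte is just a nearest-direction search. Here is a sketch of the idea – the function name is mine, and for self-containment the table below is a stand-in of just the six axis directions, where the real table (anorms.h in the Quake source) has 162 entries spread over the unit sphere:

```python
def best_normal_index(normal, table):
    """Return the index of the table entry closest in direction to
    `normal`, judged by greatest dot product."""
    return max(range(len(table)),
               key=lambda i: sum(a * b for a, b in zip(normal, table[i])))

# Stand-in table: six axis directions instead of the engine's 162.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
        (0, -1, 0), (0, 0, 1), (0, 0, -1)]
```

With only six entries the quantisation is brutal; with 162 it is usually invisible, which is why the approximation works as well as it does.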
The last piece of the puzzle is how the engine handles the normals. The description so far has talked about light and shading in general terms. How does quake get the lighting information to illuminate the models correctly? Thanks to metlslime for pointing me in the right direction on this one, the answer is: it’s a hack! The quake engine has a set of lookup tables which map the 162 normals to a corresponding brightness. In total there are 16 tables, and they are selected based on the y-angle of the model. So for things that rotate only about the y-axis (like monsters, player, spinning weapons) the tables do a good approximation of being lit by a light source above and to the north. Rotation on other axes does not affect which table is used, so lighting may come from odd angles.
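The table-selection step reduces to a little quantisation arithmetic. This sketch follows the shape of the released id source (SHADEDOT_QUANT is the constant name used there; the function name and everything else is my own for illustration):

```python
SHADEDOT_QUANT = 16  # number of precomputed brightness tables

def shade_table_index(yaw_degrees):
    """Quantise the model's yaw angle to pick one of the 16 tables of
    per-normal brightness values."""
    return int(yaw_degrees * (SHADEDOT_QUANT / 360.0)) & (SHADEDOT_QUANT - 1)
```

Each of the 16 tables maps all 162 normals to a brightness, so lighting a vertex is one table lookup per frame – cheap, but baked to that single light direction.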
No description of rendering would be complete without the caveat that results may vary, custom engines often have a different rendering model. For example, darkplaces turns off the global “light from the north” and instead applies properly calculated highlights from any dynamic light, so the models appear flatter on their own but better lit when there are light sources. However, in classic and faithful engines, you can do a few things for your map by taking this light model into account. You can make sure your skybox has the sun in the north so that the monsters are better lit by it when outside. If you need a mapobject illuminated from a particular direction, you might be able to apply rotations on the x-axis or z-axis to get the light coming from the right place, then apply the reverse transformation to your model in the editor to make it face correctly.
Finally you can go all out and apply some weird normals which don’t relate to the model facing at all, to get special effects like the alarm-light console does. Just remember that since it’s taking the assumptions from the standard lighting model, it won’t work in all engines…