Wednesday, October 31, 2012

3D survival guide for starters #3, NormalMapping

Hey, fatman! Just heard a funny fact that makes you want to buy a horrorgame like T22, if you're too heavy that is. Seems watching scary movies or playing horror games is pretty healthy: watching The Shining burned more than 180 kilocalories! Because of fear/stress, you eat less and burn more. So go figure... playing T22 on a home-trainer screen should be even more tense than Tony Little target training or Zumba dancing.


Normals
-----------------------
Enough crap, let's continue this starters guide series, explaining the world famous "bumpMap". Or I should say "normalMap", because that is what the texture really contains: normals. But what exactly is this weird purple-blue looking texture? So far we've shown several techniques to affect surface properties per pixel:
albedo (diffuse) for diffuse lighting
specularity for specular lighting
shininess for specular spreading
emissive for making pixels self-emitting

Another crucial attribute is the surface "normal". It's a mathematical term you may remember from school. Or not. "A line or vector is called a normal if it's perpendicular to another line/vector/object". Basically it tells which direction a piece of surface is facing. In 2D space we have 2 axes, X (horizontal) and Y (vertical). In 3D space we have -surprise- 3 axes: X, Y and Z. Your floor is facing upwards (I hope so at least), your ceiling normal is pointing downwards. If the Y axis means "vertical", the floor normal would be {x:0, y:+1, z:0}, the ceiling normal {x:0, y:-1, z:0}. And your walls would point somewhere in the -X, +X, -Z or +Z direction. In a 3D model, each triangle faces a certain direction. Or to be more accurate, each vertex stores a normal, possibly bent a bit towards its neighbors to get a smoother transition. That is called "smoothing" by the way.

As explained in part 1, older engines/games did all the lighting math per vertex. I also showed some shader math to calculate lighting in part 2, but let's clarify a bit. If you shine a flashlight on a wall, the backside of that wall won't be affected (unless you have a Death Star laser beam). That's because the normal of the backside isn't facing towards your flashlight. If you shine light on a cylinder shape, the front will light up the most, and the sides will gradually fade away as their normals face further and further away from the flashlight. This makes sense, as the surface there would catch fewer light photons. Lambertian (cosine) lighting is often used in (game) shaders to simulate this behavior:
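Roughly sketched in GLSL (function and variable names are my own, not from a particular engine), the core of it is just a clamped dot product between the surface normal and the direction towards the light:

// GLSL sketch of the Lambertian (cosine) diffuse term.
// N = surface normal, L = direction from the surface towards the light, both normalized.
float lambert( vec3 N, vec3 L )
{
    // dot(N,L) is the cosine of the angle between them:
    // 1.0 when the surface faces the light head-on, 0.0 (clamped) when it faces away
    return max( dot( N, L ), 0.0 );
}
// usage (assumed inputs): vec3 diffuse = lambert( normal, lightDir ) * lightColor * albedo;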

NormalMapping
-----------------------
Since we have relatively few vertices, per-vertex lighting is a fast way to compute the scene lighting, but it doesn't allow a lot of geometrical detail/variation on your surfaces, unless you tessellate them (meaning you divide them into a LOT of tiny triangles). So, a football field made of a single quad (= 2 triangles, 4 corner vertices) would only calculate the lighting at 4 points and interpolate the lighting between them to get a smooth gradient. In this case, the entire soccer field would be more or less equally lit.

However, surfaces are rarely perfectly flat. A football field contains bumps, grass, sand chunks, holes, et cetera. Same thing with brick walls, or wood planks. Even a vinyl floor may still have some bumps or damaged spots. We could tessellate the 3D geometry, but we would need millions of triangles even for a small room to get sufficient detail. C'est pas possible. That's French for "screw it".

This is why old games drew shading/relief directly into the (diffuse) textures. However, this is not really correct of course, as shadows depend on the location of the lightsource(s). If you move a light from up to down, the shading on a wall should change as well. Nope, we needed something else... Hey! If we can vary diffuse, specularity and emissive attributes per pixel, then why not vary the normals as well?! Excellent thinking chief. "BumpMapping" was born, and has been implemented in various ways. The winning solution was "Dot3 normalMapping". As usual, we make yet another image, where each pixel contains a normal. Which is why the correct name is "normalMap" (not "bumpMap"). So instead of having a normal per vertex only, we now have a normal for each pixel (well, that depends a bit on the image resolution of course). So, for a brick wall, the parts that face downwards get a downward-pointing normal encoded into this image, causing those pixels to catch less light coming from above.

Obviously, you can't draw multiple lighting situations in a diffuseMap. With normalMapping, this problem is fixed though. Below is a part of the brick normalMap texture:


Image colors
-----------------------
Now let's explain the weird colors you see in a typical normalMap. We start with a little lesson about how images are built up. Common image formats such as BMP, PNG or TGA have 3 or 4 color channels: Red, Green, Blue, and sometimes "Alpha", which is often used for transparency or masking. Each color channel is made of a byte (= 8 bits = 256 different variations possible), so an RGB image stores each pixel with 8 + 8 + 8 = 24 bits, meaning you have 256*256*256 = 16.777.216 different colors.

Notice that some image formats support higher color depths. For example, if each color channel gets 2 bytes (16 bit) instead of 1, the image size would be twice as big, and the color palette would give 281.474.976.710.656 possibilities. Having so many colors isn't useful (yet), as our current monitors only support 16 million colors anyway. Although "HDR" monitors may not be that far away anymore. Anyway, you may think images are only used to store colors, but you can also use them to store vectors, heights, or basically any other numeric data. Those 24 bits could just as well represent a number. Or 3 numbers in the case of normalMapping. We "abuse" the:
Red color = X axis value
Green color = Y axis value
Blue color = Z axis value
About directional vectors such as these normals: they are stored as "normalized" vectors (also called "unit vectors"). That means each axis value is somewhere between -1 and +1, and the length of the vector must be exactly 1. If the length is shorter or longer, the vector isn't normalized (a common newbie mistake when writing light shaders).

You can forget about "normalized" vectors for now, but it's important you understand how such a vector value is converted to an RGB color value. We need to convert each axis value (-1..+1) to a color channel (0..255) value. This is because we have 1 byte (8 bits) per channel, meaning the value can be 0 to 255. Well, that is not so difficult:
((axisValue + 1) / 2) * 255
((-1 +1) / 2) * 255 = 0 // if axis value was -1
(( 0 +1) / 2) * 255 = 128 // if axis value was 0
((+1 +1) / 2) * 255 = 255 // if axis value was +1
Do the same trick for all 3 channels (XYZ to RGB):
color.rgb = ((axisValue.xyz + {1,1,1}) / {2,2,2}) * {255,255,255}
Let's take an example. Your floor, pointing upwards, would have {x:0, y:+1, z:0} as its normal. When converting that to a color, it becomes
red   = ((x: 0 + 1) / 2) * 255 = 128
green = ((y:+1 + 1) / 2) * 255 = 255
blue  = ((z: 0 + 1) / 2) * 255 = 128
A bright greenish value (note that red and blue are half-gray values, not 0). If your surface faced in the +X direction, the value would be bright red. If it faced in the -X direction, it would be a dark green-blue value (no red at all).
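Just to illustrate (a minimal GLSL-style sketch, not necessarily how a real normalMap generator is written), that encoding step boils down to:

// Sketch: pack a normalized (-1..+1) vector into a (0..1) color,
// which the image file then stores as 0..255 per channel.
vec3 encodeNormal( vec3 n )
{
    return normalize( n ) * 0.5 + 0.5;
    // e.g. the floor normal {0,+1,0} becomes {0.5, 1.0, 0.5},
    // stored as {128, 255, 128}: that bright greenish color
}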



Obviously, normalMaps aren't hand-drawn with MS Paint. Not that it's impossible, but it would mean you'd have to calculate the normal for each pixel. That's why we have normalMap generators: software that generates (approximates) normals from a height-image, or by comparing a super detailed (high-poly) model with a lower detailed (game) model. The high-poly model contains all the details such as scratches, skin bumps or wood relief as real 3D geometry. Usually programs like ZBrush or Mudbox are used to create those super high polygon models. Since we can't use those models in the game, we make a low-poly variant of them. By comparing the two shapes, a normalMap can be extracted. Pretty awesome right?

Either way, once you have a normalMap, shaders can read these normals. Of course, they have to convert the "encoded" color back to a directional vector:
pixel.rgb  = texture( normalMap, texcoords ).rgb;
// ! note that texture reads in a shader give colors
//   in the (0..1) range instead of (0..255),
//   so we only have to convert from (0..1) to (-1..+1)
normal.xyz = pixel.rgb * 2.0 - 1.0;

You are not ill, you're looking at a 3D scene drawing its normals. This is often used for debugging, checking if the normals are right. If you paid attention, you should know by now that greenish pixels are facing upwards, reddish pixels in the +X direction, and blueish pixels in the +Z direction. You can also see the normals vary a lot per pixel, leading to more detailed lighting, thanks to normalMap textures.
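Such a debug view is easy to make yourself, by the way. A minimal fragment shader sketch (assuming the vertex shader passes a world-space normal called "worldNormal") could look like this:

#version 330 core
in  vec3 worldNormal;   // assumed input from the vertex shader
out vec4 fragColor;

void main()
{
    vec3 n = normalize( worldNormal );
    fragColor = vec4( n * 0.5 + 0.5, 1.0 );  // remap (-1..+1) back to visible (0..1) colors
}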


Why is the ocean blue, and why are normalMaps purple-blue? > Tangent Space
--------------------
All clear so far? Good job. If not, have some coffee, play a game, eat a steak, and read again. Or just skip this if you don't give a damn about the further implementation of normalMapping. With the knowledge so far, a wood floor normalMap would be a mainly greenish texture, as most normals point upwards. Only the edges or big wood grain lines would be reddish or blueish / darker. However, it seems all those normalMaps appear purple/blueish (see the bricks above). Why is that? The problem is that we don't always know the absolute "world" normals beforehand. What if we want to apply the same floor texture on the ceiling? We would have to invert the Y value in a different texture. If we didn't do that, the lighting would become incorrect, as the shader still thinks the ceiling is facing upwards instead of downwards. And how about animated objects? They can rotate, tumble and animate in all kinds of ways. There is an infinite number of possible directions a polygon can take.

Instead of drawing a million different normalMaps, we make a "tangentSpace normalMap". What you see in these images is the deviation compared to the overall triangle normal. Huh? How to explain this simply... All pixels with a value of {128,128,255} (yes, that blue-purple color) indicate a normal of {0,0,+1}. This means the normal is exactly the same as its parent triangle (vertex) normal. As soon as the color starts "bending" a bit (less blue, more or less green/red), the normal bends away from its parent triangle. If you look at the bricks, you'll see that the parts facing forward (the +Z direction, along with the wall) are blue-purple. The edges of the bricks and the rough parts start showing other colors.


Ok, I know there are better explanations that also explain the math. What's important is that we can write shaders that do tangentSpace normalMapping. In combination with these textures, we can paste our normalMaps on all surfaces, no matter what direction they face. You could rotate a barrel or put your floor upside down; the differences between the per-pixel normals and their carrier triangle normals will remain the same.

It's also important to understand that these normals aren't in "world space". In absolute coordinates, "downwards" would be towards the core of the earth. But "down" (-Y = less green color) in a tangentSpace normalMap doesn't have to mean the pixel actually faces down. Just flip or rotate the texture on your wall, or put it on a ceiling: -Y would point in a different direction. A common mistake is to forget this, and to compare the "lightVector" (see the pics above or the previous post) calculated in world space with this tangentSpace normal value. To fix this problem, you either convert the tangentSpace normal to a world-space normal first (that's what I did in the monster scene image above), or you convert your lightVector / cameraVector to tangentSpace before comparing them with the normal to compute diffuse/specular lighting.

All in all, tangent NormalMapping requires 3 things:
• A tangentSpace normalMap texture
• A shader that converts vectors to tangentspace OR normals to World space
• Pre-calculated tangents (and possibly bitangents) in your 3D geometry, required to convert from one space to another

Explaining how this is done is a bit out of the scope of this post; there are plenty of code demos out there. Just remember that if your lighting is wrong, you are very likely comparing apples with oranges. Or world space with tangent space.
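Still, to give a rough impression of the "convert the normal to world space" route, here is a GLSL-style sketch (the tangent/bitangent/normal inputs are assumed to be the pre-calculated per-vertex vectors, interpolated and passed to the pixel shader):

// Sketch: turn a tangentSpace normalMap sample into a world-space normal.
// T, B, N = interpolated world-space tangent, bitangent and vertex normal.
vec3 worldSpaceNormal( sampler2D normalMap, vec2 texcoords, vec3 T, vec3 B, vec3 N )
{
    vec3 tangentNormal = texture( normalMap, texcoords ).rgb * 2.0 - 1.0;  // (0..1) -> (-1..+1)
    mat3 TBN = mat3( normalize(T), normalize(B), normalize(N) );           // tangent -> world matrix
    return normalize( TBN * tangentNormal );
}
// The result can then be compared with world-space light/camera vectors as usual.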
When putting it all together, things start to make sense. Here you can see the different attributes of the pixels: diffuse (albedo) colors, specularity, normals. And of course what kind of lighting results they lead to. All that data is stuffed into several textures and applied to our objects, walls, monsters, and so on.


NormalMaps conclusion
-----------------------------------
NormalMaps are widely used, and they will stick around for a while I suppose. They won't be needed anymore once we can achieve real 3D geometry accurate enough to contain all those little details. Hardware tessellation is promising, but still too expensive to use on a wide scale. I should also mention that normalMaps have their limitations. First of all, the actual 3D shape still remains flat. So when shining a light on your brick wall, or whatever surface, you'll see the shading adapt nicely. But when looking at it from the side, the wall is still as flat as a breastless woman. Also, the pixels that face away from the lightsources will shade themselves, but won't cast shadows on their neighbors. So normalMapping only affects the shading partially. Internal shadow casting is possible, but requires some more techniques. See heightMaps.

So what I'm saying is: normalMaps certainly aren't perfect. But in the absence of better (realtime) techniques, you'd better get yourself familiar with them for now.


I thought this would be the last post of this "series", but I still have a few more techniques up my sleeve. But this post is already big and difficult enough, so let's stop here before heads start exploding. Don't worry, I'll finish next time with a shorter, and FAR easier post ;)

3 comments:

  1. A very good article about normal mapping. I learned a lot from this, so thank you very much for writing it! I think the article is stellar up to the point about tangent space; that's where I couldn't keep up anymore at least, but even there it gave me a basis for further googling and reading. So I hope you keep writing these, they are incredibly useful and entertaining! (A sidenote: the comment captcha is horrible, I bet you have lost a ton of positive comments due to it)

  2. Captchas are always horrible. But I didn't know you had to enter one before commenting, I'll check if those can be disabled. Thanks for the hint.

    The tangentspace part isn't described very well indeed, as it would probably make the post twice as long. Plus I lack the mathematical background to correctly explain it hehe. The bright news is that there are a billion code examples on normalMapping. Good luck figuring out the last bits!

  3. great article man :) !!!
