Sunday, February 28, 2010

Shady techniques

Post arrives a little bit late this week. Yesterday we had a party in Highstreet, a club in Belgium. It has been ages since we last went there. Last time "Push me, satisfact me" was on the radio, I didn't have a daughter or girlfriend yet, and the average videocard didn't have pixel shaders. That must have been 140 years B.C. or something. Let's do some history.

I was already trying to make a game in those days though. MD2 morphing animations, terrain rendering, and fixed lightmaps. Wait a minute, I did attempt a dynamic light with shadowMapping back then, but it was way too blocky for practical usage. Like most other games, lightMapping was the way to roll. Quake 1 was one of the first games to use it. Calculating realtime lighting for multiple sources was way too heavy in those days, so game maps usually had their lighting pre-calculated by "baking" all light information into an image. This image (the lightMap) was typically made while creating the map, and it could take hours to calculate. The shot below was from my previous engine, using "radiosity normalMapping" (a fancy way to do lightMapping). The light spots on the ground and the shaded walls were all calculated and baked into an image in the map building phase.

LightMaps did service for many years: Quake 1/2/3, Halflife 1/2, Goldeneye, just a few examples. And they are still used, for a good reason: once the image is created, it is a very simple and high-performance way to do quality lighting, including indirect lighting (ambient / global illumination / radiosity). In fact, most games still cannot do without it, because calculating ambient light in realtime remains extremely difficult, if not impossible for some scenarios... although CryEngine 3.0 may bring a revolution in ambient lighting soon...

Ignoring the indirect light portion is not a good idea either. The pitch black shaded regions in Doom 3 received a lot of criticism; its graphical competitor Halflife 2 looked more natural. However, Doom 3 still played an important role in graphics evolution: realtime lighting with correct shadows. Halflife 2 may have had the more realistic renders, but it still used the ancient lightMapping method, which brings some serious limitations with it. Popular bumpMapping shaders do not work very well with a lightMap, since this image only contains colors (the light that falls onto each pixel), but no information about where it came from (direction vectors). Valve did a smart trick with their "radiosity normalMapping", but in the end it's still an approximation, not truly correct lighting.

But more important, lightMaps are static. That means they won't change when you move a light. A day/night cycle, shooting out lights, or using light switches is not possible with lightMaps. You would have to recalculate the map whenever something changes, which wouldn't be so bad if it didn't take seconds, minutes or even hours to do so. Updating lightMaps in realtime = too slow. Unless you do low quality lightMapping maybe. I tried that, with success, but its quality is way too low for accurate lighting. It can be used for dirty ambient lighting maybe, but not for direct lighting. Swinging your flashlight, for example, requires a spotlight with sharp shadows, and you can certainly not achieve that with low quality lightMaps.

Like I said earlier, ~eight years ago I already tried dynamic shadows with shadowMapping. This is a relatively fast way to calculate shadows in realtime. Here's the idea: in the background, render your scenery from the light's point of view. Imagine you are a lamppost; put the camera in it, and render the street below you. Every pixel you see is lit by you; all others are shaded. We do not render colors, but the depth (distance between light and pixel) into a target texture. This depth image can now be used for lighting. When rendering your normal scene, check for each pixel whether it is lit. Simple: if the distance between that pixel and the lightsource is equal to or smaller than the projected depth image pixel at that location, it receives light.
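The per-pixel test described above can be sketched in a few lines. This is a simplified CPU-side illustration (in practice this runs in a shader with a projection matrix); the `project_to_light` mapping and the bias value here are just placeholders:

```python
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_lit(pixel_pos, light_pos, shadow_map, project_to_light, bias=0.005):
    """ShadowMap test: a pixel receives light if its distance to the
    light is not greater than the depth the light recorded in that
    direction. The small bias avoids false self-shadowing ("acne")."""
    u, v = project_to_light(pixel_pos)   # where this pixel lands in the depth image
    stored_depth = shadow_map[v][u]      # nearest surface the light "saw" there
    return distance(pixel_pos, light_pos) - bias <= stored_depth

# Toy scene: a 1x1 depth image that recorded a surface 5 units below the light.
light = (0.0, 10.0, 0.0)
depth_image = [[5.0]]
project = lambda p: (0, 0)               # everything maps to the single texel
print(is_lit((0.0, 5.0, 0.0), light, depth_image, project))  # on that surface -> True
print(is_lit((0.0, 2.0, 0.0), light, depth_image, project))  # behind it -> False
```

The same comparison is what the fixed-function hardware shadow samplers do for you, one texel at a time.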

This technique is called shadowMapping. Nowadays hardware is fast enough to generate multiple shadowMaps hundreds of times per second. And because images lend themselves to blurring, it is possible to create "soft edges". Another problem with Doom 3 was its razor-sharp shadow edges; in reality shadows are somewhat smoother due to light scattering and stuff. Doom 3 did stencil shadowing: basically the CPU calculated silhouettes around each occluding object/surface and cut them out of the stencil buffer to prevent lighting behind them. ShadowMaps can be blurred more easily, and another advantage is hardware acceleration: instead of using the CPU, shadowMaps can be made entirely on the GPU. No wonder that most engines are using shadowMaps these days, and so does mine. Here's a shot of my first test results, already 3 years ago. Notice there is no ambient lighting in the dark sections.
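Because the shadow test reads from an image, softening the edges can be as simple as averaging several depth comparisons around the sample point: the idea behind percentage-closer filtering (PCF), one common way to get those "soft edges". A toy sketch with illustrative names, not actual engine code:

```python
def pcf_shadow(u, v, pixel_depth, shadow_map, radius=1, bias=0.005):
    """Percentage-closer filtering: run the depth comparison on a small
    neighbourhood of texels and average the pass/fail results, giving a
    value between 0.0 (fully shaded) and 1.0 (fully lit) at the edges."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit = taps = 0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            su = min(max(u + du, 0), w - 1)   # clamp taps to the map borders
            sv = min(max(v + dv, 0), h - 1)
            lit += pixel_depth - bias <= shadow_map[sv][su]
            taps += 1
    return lit / taps

# A shadow edge: the left column is open (depth 10), the rest blocked (depth 2).
edge_map = [[10.0, 2.0, 2.0]] * 3
print(pcf_shadow(1, 1, 5.0, edge_map))   # partially lit: 3 of the 9 taps pass
```

Note that PCF blurs the comparison results, not the depth values themselves; averaging raw depths would give meaningless in-between distances.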

Cascaded Shadowmaps
~Eight years later, I still have that "blocky shadows" problem though. I'm not using lightMaps or fixed OpenGL lighting anymore, but shadow mapping techniques still tend to get "blocky" as the distance between the lightsource and the occluders grows. Makes sense, because stuff in the background receives fewer or no pixels in the depth image. This makes the shadows of objects far away from a light ugly or even invisible. Luckily there is always a bunch of smart guys who fix these things. Not me, I'm too dumb for all that mathematical stuff. But at least I'm persistent, so after many Steve Irwin crocodile fights, I finally have "Cascaded Shadow Maps" working.

See that balcony railing shadow? Pretty sharp, huh? It is cast by the sun, but the sun is pretty far away. With normal shadowMapping, this railing would probably be invisible in the depth texture and therefore not cast any shadows at all. CSM is a technique that creates multiple shadowMaps (see the 4 gray images at the bottom). The first one only covers a small section: the stuff you are looking at. The last image covers the entire scene. When shading, pixels then pick the proper map based on their distance to the camera. Neat, isn't it? Crysis and GTA IV used it as well to cast accurate shadows in dense (concrete) jungles. Allrighty, enough for this week.
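Picking a cascade then boils down to comparing a pixel's distance from the camera against a list of split distances. A minimal sketch; the split values below are made up for illustration, real engines tune them per scene:

```python
def pick_cascade(view_depth, splits):
    """Cascaded shadowMaps: use the first cascade whose far split still
    covers this pixel. Near pixels get the small, detailed map; far
    pixels fall back to ever coarser ones."""
    for i, far in enumerate(splits):
        if view_depth <= far:
            return i
    return len(splits) - 1   # beyond the last split: clamp to the coarsest map

splits = [10.0, 30.0, 90.0, 300.0]   # four maps, like the gray images mentioned above
print(pick_cascade(5.0, splits))     # close by -> cascade 0 (sharpest)
print(pick_cascade(50.0, splits))    # mid range -> cascade 2
print(pick_cascade(1000.0, splits))  # beyond everything -> cascade 3 (coarsest)
```

Since near cascades cover a much smaller area with the same texture resolution, the railing right in front of the camera gets plenty of depth texels, which is exactly why its shadow stays sharp.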
