Anyway, time to show her some next-century technology. Here, have a look at the PC! Internet is this button, you can also do Minesweeper but that sucks, and this button starts Microsoft Visual Studio. Ok ok, just type Google, then "shoes" + "cheap" please. No, left mouse button. Clicking once is enough. Here, see? No, the computer isn't crazy, you just clicked that minus sign, making the screen disappear. No, you don't stop by pressing the power button. Great stuff, right, computers and Internet? And just when she finally has enough courage to start the computer with a cup of coffee... Internet page not found. Router cannot be found. Windows has recovered from a severe error (what to click?! Panic!). Google Chrome crashed, send report?
Or how about this one: "Windows delayed write failed". Reboot... Windows system.dll cannot be loaded. Windows cannot start. Don't know how she does it, but that happened about three times. For some reason, my Samsung HD103UJ disk suddenly started failing to read/write. Well, suddenly... maybe those little fingers of our petit gnome have something to do with it. Programming stuff like mad, then suddenly "zap", screen black, computer off. And I hear someone giggling half a meter below.
Samsung has a fix tool called shdiag.exe. But I simply don't trust that cursed piece of metal anymore. A (new!) disk crashing three times, fuck that. I'm going to enjoy torturing that thing with the hammer. But, dry those tears. A new HD has just arrived, and more importantly, I didn't lose any T22 code... Haven't reinstalled everything yet, but when I saw those "write failure" balloons I already knew: "make a back-up. NOW.". Which was a smart thing to do.
Indirect light... Without it, only the small spot on the floor would be lit.
So, that's why I didn't program much. But I might tell a few details about the T22 programming work I've been doing lately though. All graphics programmers have probably heard of & seen Crytek's new approach in Crysis for realtime G.I. (Global Illumination, ambient light). Which basically means: when light falls on a surface, reflect ("bounce") it further in all directions. And again, and again... The reason why most places, even during the night, aren't completely black.
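The "again, and again" part can be shown with a toy calculation. This is just a hedged illustration with made-up names (`bounce_light` and its parameters aren't from any real GI code): each bounce, a surface reflects a fraction of the incoming energy onward, so the extra light added per bounce shrinks quickly.

```python
# Toy illustration of light "bouncing": a fraction of the light hitting a
# surface gets reflected further, again and again. Real GI works on 3D
# surface patches, not single numbers; this only shows the energy idea.

def bounce_light(direct_light, reflectivity, bounces):
    """Total light a surface ends up with after N diffuse bounces,
    where each bounce reflects `reflectivity` of the incoming energy."""
    received = direct_light       # bounce 0: direct light only
    incoming = direct_light
    for _ in range(bounces):
        incoming *= reflectivity  # each bounce loses energy
        received += incoming
    return received

# With 50% reflective surfaces, 4 bounces already recover most of the
# indirect energy; extra bounces add less and less.
print(bounce_light(1.0, 0.5, 4))  # 1 + 0.5 + 0.25 + 0.125 + 0.0625 = 1.9375
```

That shrinking tail is also why a handful of bounces is usually "good enough" visually.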
Easy does it, but while most visual phenomena have been tricked by the game industry (reflecting water, dynamic shadows, smoke, refractions, god-rays, ...), doing realtime GI appeared to be one son of a bitch. Most games still aren't much further than Doom1 or Quake: either a simple "ambient color" per area, and/or pre-baked lightMaps. Of course the lightMap quality is much better now, and Occlusion Maps have been added to the weapon arsenal for somewhat more flexibility. But that's pretty much it.
For scenery that doesn't change much (no switching lights, large moving objects, or day/night cycle), lightMaps are usually sufficient. For bigger/outdoor scenes with only a few global lights (sun), an occlusionMap or even a simple fixed ambient color may do the trick, because the (indirect) light doesn't vary much. So developing GI wasn't that urgent, and the required computing power was simply not worth the deal. Of course, lots of techniques have been tried in the background, but even on today's hardware, most of the tricks have serious drawbacks. Too slow, doesn't work with moving stuff, ugly, can't be applied to bigger scenes, damn difficult to tweak right, and so on. The bottom line is, a good old pre-fabricated lightMap still looks better (and is a hell of a lot faster).
But stagnation means decline. Someone has to do the dirty job, and Crytek just did. Supergraphics with realtime GI, that even runs on an XBox 360 or PS3. And the Wii can render the skybox, without clouds. Problem solved? No, no, no. Even the brains at Crytek weren't able to tackle all problems. The solution is physically inaccurate, the detail is low (don't expect a bookcase model with small details having cool GI; think in terms of cubic meters). And it only does one bounce, which means a little light falling through a window still doesn't light up the whole room. But of all solutions so far, it is probably the best one available. It handles huge areas, needs no pre-calculations at all, it works for dynamic objects, and for what is basically a fake it looks pretty good. Also important, the calculation time is relatively low. The XBox/PS3 proves it.
Now, I'm not exactly copying Crytek's approach for the T22 graphics (I'm too stupid for Spherical Harmonics anyway), but I took some of the components and combined them with my current realtime GI "solution". Yes, T22 has had realtime GI for 2 years already. But the quality is so bad that you hardly notice it. How does it work? If you were a piece of wall, you might see other pieces of the opposite wall. Or a floor, ceiling, piano, skybox. Everything you see could send (reflect diffuse) light towards you. You could raytrace to figure out the relations between all "surface patches", but you could also pre-calculate the relations. In my old solution, each patch has about 256 other patches to collect reflected light from. So what it did:
- Generate lightMap texture coordinates for a room.
- Each pixel on that map was converted to a patch (3D position, normal, surface emissive & reflectivity value)
- Calculate for each patch which 256 other patches it can "see" (by raycasting for example) when looking in its normal direction.
- Store this data together with the room.
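The pre-calculation steps above can be sketched roughly like this. Everything here is a simplified stand-in (the `Patch` class, names, and the visibility test are mine, not the actual T22 code): a real implementation would raycast through the room geometry, while this sketch only checks whether two patches face each other.

```python
# Hedged sketch of the pre-calculation step: for each patch (lightMap
# pixel), collect the other patches it could receive reflected light
# from. Real code would raycast against the room geometry; here the
# "visibility" test is just a facing check via the surface normals.

class Patch:
    def __init__(self, position, normal):
        self.position = position   # 3D world position of the lightMap pixel
        self.normal = normal       # surface normal at that pixel
        self.visible = []          # indices of patches this patch can "see"

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def build_relations(patches, max_relations=256):
    """Store for each patch up to `max_relations` other patches that lie
    in the hemisphere around its normal and face back towards it."""
    for i, p in enumerate(patches):
        for j, q in enumerate(patches):
            if i == j or len(p.visible) >= max_relations:
                continue
            to_q = sub(q.position, p.position)
            # q must lie in front of p, and p in front of q
            if dot(to_q, p.normal) > 0 and dot(to_q, q.normal) < 0:
                p.visible.append(j)

# Two patches on opposite walls, facing each other:
wall_a = Patch((0, 0, 0), (0, 0, 1))    # wall at z=0, normal +z
wall_b = Patch((0, 0, 2), (0, 0, -1))   # wall at z=2, normal -z
patches = [wall_a, wall_b]
build_relations(patches)
print(patches[0].visible)  # [1] -- each wall "sees" the other
```

Note the O(n²) loop: with a full-resolution lightMap this comparison explodes, which is exactly why this was done offline and stored with the room.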
Since you quickly get hundreds of thousands of data records, the memory storage size grows rapidly. So, one of the biggest issues with this technique is the extremely low-resolution lightMaps needed to keep the size in check. Thinking about it now, I might have been better off with fewer relations per patch, but more patches. Anyway, @runtime this data is used to spread (or collect, if you prefer) direct light via the pre-calculated relations:
- Render a sector, with lights (and shadowMaps) applied, into a flat 2D texture
- Read the texture to the CPU.
- Let the CPU spread the light (I used 4 bounces) via the pre-calculated patch relations
- Draw the results back into a texture (3 actually, for simple normalMapping) and use that as an "indirect lightMap"
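The CPU "spread" step can be sketched as below. Again a hedged toy, not the actual T22 code: the distance/angle weighting between patches is collapsed into a single made-up `form_factor` constant, and `spread_light` and its parameters are illustrative names only.

```python
# Hedged sketch of the CPU spread step: each patch gathers reflected
# light from its pre-calculated visible patches, repeated for 4 bounces.
# A real implementation would weight each relation by distance and angle;
# here that geometry term is one constant.

def spread_light(direct, visible, reflectivity=0.5, form_factor=0.25, bounces=4):
    """direct:  direct light per patch (as read back from the GPU texture)
    visible:    per patch, indices of the patches it can see
    returns:    total (direct + indirect) light per patch."""
    total = list(direct)
    radiated = list(direct)          # light leaving each patch this bounce
    for _ in range(bounces):
        gathered = [0.0] * len(direct)
        for i, seen in enumerate(visible):
            for j in seen:
                # patch i receives a fraction of what patch j radiates
                gathered[i] += radiated[j] * form_factor
            gathered[i] *= reflectivity
        for i, g in enumerate(gathered):
            total[i] += g
        radiated = gathered          # next bounce spreads this new light
    return total

# A lit patch (1.0) and an unlit patch (0.0) that see each other:
result = spread_light([1.0, 0.0], [[1], [0]])
print(result)  # the unlit patch picks up indirect light from the lit one
```

The result would then be drawn back into the indirect-lightMap textures. Every bounce is just another pass over the same relation table, which is why multiple bounces are cheap here compared to approaches without pre-calculated relations.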
Aside from the lightMaps being so small that the quality was awful (just incorrect), it had more issues:
- The average room already required a few megabytes disk & memory space to store the patch data
- Can't use it on dynamic objects (well, you can, but not by nature)
- glReadPixels operation stalls the rendering pipeline. Speed drops.
- Realtime? Needing ~200 milliseconds to update a map (in a background thread) isn't exactly realtime. With multiple rooms, you could clearly see the light being updated in steps.
However, I still like the idea of having pre-calculated info. It might give somewhat more accurate results, and doing multiple bounces is far less of a problem than with the Crytek approach. The screenshots here, for example, use 4 bounces. So I tried to take the best of both worlds, resulting in my new "Ambi-Gather" technique. I won't reveal yet how it works (it still has to prove whether it works at all!), but so far the progress is steady. This time it runs entirely on the GPU, it updates every cycle (thus really realtime), it requires far less memory, and it just looks better so far. Smarter, faster, bigger, better, Robocop 2.0, Terminator 3.0, Rambo 4.0.
Ok, give me some more time, and pray to the gods that the hard drive stays intact for a change.
Stupid light, bouncing around all realistically… Horror scenes aren't supposed to be bright. Luckily there are still parameters to adjust, telling how strong the indirect light should be and what colors it should have, either for a specific light source or for the entire scene. In this case I chose acid green.