Not the first time I write about Global Illumination... and probably not the last time either. Following tradition, the GI system changes every 8 months or so. Realtime, not realtime, voxels, probe grids, mega-fake-awesome GI, and so on. Make up your mind, man!
I think I made up my mind this time... although, we'll speak again in 8 months no doubt. But for now, I think I've got it: good ol' fashioned lightmaps.
Say what?! Lightmaps?! It's like NASA bombastically revealing they'll pick up the 1961 Apollo space program again. Lightmaps are like taking a step backwards, to Quake 1 or something. You'd better have a damn fine reason to justify this, son!
And yeah, I do have a reason or two. And if you ask the big-boy engines out there, they may come up with the same story. Remember Unreal4 announcing it had true realtime GI, using Voxel Cone Tracing? Finally! But... without saying much, it suddenly disappeared again, not much later. Again, why?!
A Pixel Trade-off
When doing business, there is always this thing called "ratio". Do we supply our new model car with a state-of-the-art but super difficult plasma-driven transmission? Or are we fine with a Volkswagen engine + some cheatcodes? Do we put our precious best man on this job, or do we give that cheap intern a chance? Choose older but reliable components, or take a chance with fancy new ones? Spray prince Abdul's private jet with real heavy gold, or apply fake gold and keep the thing flyable? Bottom line is, you can't always pick the fastest, safest, most beautiful, most elegant, most awesome route. Quantity versus quality. Price versus beauty. RAM memory versus CPU performance. Possible versus impossible. Titanic lifeboats versus a classy ship.
Global Illumination -the art of having near-realistic lighting, including indirect light- always boils down to a trade-off between performance, do-ability, and quality. One way or another, getting there will eat a serious piece of memory and/or performance for sure. The solutions I have seen and tried don't score very well in terms of "do-ability" either. Usually they are so complex, constrained and reliant on approximations that the outcome is hard to predict, and even harder to modify to the artist's taste.
But that would be OK if the quality was, well, OK... but it isn't. Not that they all look bad; certainly VCT was/is promising. But the thing is, gamers have been spoiled with semi-realistic lighting for many years already. I'm not talking about dynamic lights (sources that switch, move, disappear, change, ...), but just about a static scene where the light acts as expected. Darker in the corners, yet not pitch black. Blueish "skylight" falling in from above or through a window. Small gaps and holes feeding atmospheric bits of light into an otherwise dark bunker. Glossy reflections on the floors, but also on walls and organic objects.
Half-Life 2 already had that stuff. At a lower resolution maybe, and yes - all static. But hey, 90% of the environment and lights you'll see in an ordinary game don't move or change anyway. Besides, who cares? You as an ambitious programmer maybe, but the average gamer has no clue. And although I actually knew the difference between pre-processed lightmaps and realtime lights (Doom3 back then), I never thought "Dang, I sure miss some dynamic lights in this game!" while playing Half-Life 2.
I should stop going on about "the old days" with Half-Life 2. So, here you go, Portal 2. A bit younger, yet still using ancient lightmaps. But admit it, this looks pretty cool, right?
But
Rick, that was more than 10 years ago again (holy shit). True. But believe me,
most of the stuff you see in games is still very static, pre-baked
lighting/reflections. Higher quality though. More video-card memory = larger
lightmaps = sharper textures.
Now, if Unreal4, CryEngine, Engine22, or whoever abandons pre-baked lighting and introduces true realtime lighting today... the quality would probably be worse than Half-Life 2's from 10 years ago. "Yeah, but it's realtime! Look, the indirect light changes if we open or close the door in this room! The ceiling also brightens up if we shine our flashlight on that wall in the rear! No more waiting-times for the artist while baking a lightmap!" Cool and the Gang, but again, gamers don't know / don't care. They WILL complain about low-resolution lighting, artefacts, and the ridiculous system requirements though!!
Who am I to say gamers don't care about realtime lighting? That's not entirely true. Features like day-night cycles, local lights also illuminating around the corner, and destructible environments that adapt correctly sure do matter. But we can fake these things! That's the point. Gamers don't care whether you applied a lightmap, VCT, a realtime photon-mapper, or a brown shit-tracer for that matter. Just as long as it looks GOOD and runs GOOD on their rigs. That's what matters.
The trick is to make a hybrid system. Good old high-quality lightmaps (or probes) for your static scenery -WHICH MAKES UP MOST OF THE GAME!- and realtime hacks for dynamic objects. The latter usually rely on lower-quality, cheap tricks (and that hurts us proud graphics programmers). But we can get away with that, because dynamic objects are usually relatively small (puppet versus building), tend to move a lot, and -I repeat- they only make up a relatively small portion of the entire scene.
Back to lightmaps then?
It took me quite a long time to get over this. Lightmapping is a technique from the previous century. So much changed, but we still rely on that old crap to do some decent lighting? It sounds unnatural, and having to apply weird tricks to get the dynamic objects (monsters, barrels, cars, trees) lit as well sounds crappy. This is why I kept investigating realtime techniques. And I'm probably not the only one. Lots of papers out there. CryEngine tried Light Propagation Volumes, Unreal4 focused on Voxel-Cone-Tracing for a while, and so on. But the truth is... nothing beats the old lightmap.
So, while upgrading the Tower22 engine, I wanted to make a more final decision on this one as well. For me it's fun to screw around with techniques, but now that I'm trying to get some tools working *properly* for eventual future Tower22 artists, I really needed something robust. Now, artists probably dislike lightmaps for their long build-times. But at least we all know how they work. Like women, can't live with them, can't live without them. If the end result is satisfying, and if the artist has sufficient control over it, lightmaps are (still) an accepted tool.
Nice realtime results. But smart professors, please: get the fuck out of that stinky Cornell Box. Games take place in open worlds, with dinosaurs and robots running around, and with many more lights. 5 years ago realtime G.I. was within reach, they said. I bet we hear the same story 5 years from now. Or at least 8 months from now ;)
Engine22 Implementation
Don't know what other engines are doing exactly, but 2015 Engine22 lightmaps are slightly different from the old nineties maps. Yeah, there is some progress at least :) Old lightmaps have a few major issues:
· They are static! If we switch a light on/off, they don't change!
· They are useless for dynamic objects like characters, boxes or furniture that can be moved!
· They are flat! NormalMapping requires more info than just a colour.
· They are ugly / blocky!
Ugly?
The last point is semi-fixed by the amount of memory we have available today. More memory = larger lightmap-textures = sharper results. But I say semi-fixed, because STILL, you can sometimes see "blocks" or dots. Realtime techniques like shadowMaps are much sharper in general, because they concentrate on a small area, or don't use stored data at all.
Another little problem is streaming lightmaps. Old Quake or Half-Life maps were loaded one-by-one. A game like GTA, Fallout or Tower22 is free-roaming though. No small sub-levels, but 1 huge world. In Engine22, each entity (a wall, pillar, floor, but also a sofa or cabinet that never moves) has its own small lightmap. Resolutions are adjustable per entity, but think 128x128 pixels or something. When a new map-section is loaded, it will also load the lightmaps as separate images (though they are packed together in 1 file).
A little extra advantage of having separate, small lightmaps is that the artist can update a single entity only. Click the floor, change some properties or move a lamp, and then re-bake the floor only. Obviously a lot faster than having to re-bake the entire room/scene.
But since GPUs like to batch as much as possible, hopping between lots of separate textures sucks. So, currently, after being loaded, lightmaps are all stuffed together into 1 huge "Atlas-Lightmap" texture. This happens dynamically - subcells of this atlas come and go, as new map-sections are loaded and dumped on the fly while the player moves. The downside of an atlas texture, however, is that the lightmap as a whole is still limited in size. So, I might change strategies here.
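To make that come-and-go of subcells concrete, here is a minimal sketch of a fixed-cell atlas allocator, assuming every entity lightmap fits in one fixed-size cell. The names (LightmapAtlas, CellSlot) are hypothetical, not Engine22's actual code:

```cpp
#include <optional>
#include <vector>

struct CellSlot { int x, y; };  // cell coordinates inside the atlas

class LightmapAtlas {
public:
    // e.g. LightmapAtlas(4096, 128) -> a 32x32 grid of 128px cells
    LightmapAtlas(int atlasSize, int cellSize)
        : cells(atlasSize / cellSize), used(cells * cells, false) {}

    // Claim a free cell when a map-section streams in.
    std::optional<CellSlot> allocate() {
        for (int i = 0; i < (int)used.size(); ++i)
            if (!used[i]) { used[i] = true; return CellSlot{i % cells, i / cells}; }
        return std::nullopt;  // atlas full: the hard size limit mentioned above
    }

    // Release the cell again when the section is dumped.
    void release(CellSlot s) { used[s.y * cells + s.x] = false; }

private:
    int cells;               // cells per row/column
    std::vector<bool> used;  // one flag per cell
};
```

A shader would then sample at atlasUV = (cellOrigin + localUV * cellSize) / atlasSize; releasing and re-claiming cells is what makes the subcells "come and go" as the player moves.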
Atlas shown in the corner... The red space reveals we didn't have a lot of guests at our party. A waste of space really. But keep in mind the scene is still very empty (and ugly).
Flat?
No boobies with lightmaps. As said, normalMapping techniques need to know where the light comes from. From the left? Right? Both sides maybe? An old-style lightmap only contains an RGB colour; light "power" that (indirectly, eventually after a few bounces) stranded on that patch of surface. It doesn't remember where it came from though. The problem is that there are infinite possibilities here. If there were 20 lights, light could come from 20 different directions. And even more really, if you count the indirect, bounced light as well. A single direction can be stored as an RGB (or even RG) colour in a second texture. But 20 directions? You're asking too much.
Half-Life 2 "fixed" this with "Radiosity NormalMapping". Difficult words, but the gist is that they simply generated 3 lightmaps instead of 1. One map containing light coming in globally from the bottom-left. One map storing incoming light from the bottom-right. And a third one for light coming in from above, thus mainly skylight or lamps attached to the ceiling. While rendering the game, each pixel would blend between those three lightmaps, based on its pixel-normal. Voila, normalMapping alive and kicking again. It's not 100% accurate, it's an approximation. But at least a brick wall doesn't look flat anymore.
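For the curious, here is a minimal sketch of that blend, assuming the three maps were baked along Valve's standard tangent-space basis vectors. Illustrative CPU-side C++; in a real engine this math lives in a pixel shader:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// The three fixed basis directions (tangent space, Z pointing out of the surface).
static const Vec3 BASIS[3] = {
    {  std::sqrt(2.0f/3.0f),  0.0f,                  1.0f/std::sqrt(3.0f) },
    { -1.0f/std::sqrt(6.0f),  1.0f/std::sqrt(2.0f),  1.0f/std::sqrt(3.0f) },
    { -1.0f/std::sqrt(6.0f), -1.0f/std::sqrt(2.0f),  1.0f/std::sqrt(3.0f) },
};

// Blend the three baked lightmap samples by the tangent-space pixel normal.
Vec3 radiosityNormalMap(const Vec3 lightmap[3], Vec3 normal) {
    float w[3], sum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float d = std::max(0.0f, dot(normal, BASIS[i]));
        w[i] = d * d;  // squared and normalized below, as in Valve's course notes
        sum += w[i];
    }
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 3; ++i) {
        float k = w[i] / std::max(sum, 1e-5f);
        out.x += lightmap[i].x * k;
        out.y += lightmap[i].y * k;
        out.z += lightmap[i].z * k;
    }
    return out;
}
```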
A very, VERY old shot. Actually the very first time I tried lightmaps, in 2006 or something. Nevertheless, old techniques still seem to work.
I considered (and still consider) using this as well. But... with fairly large textures plus some more stuff I'll explain later, the memory consumption may skyrocket. Instead, Engine22 does it even simpler. Dirtier, I might say. Only one additional texture is used, storing the "dominant incoming light direction". So, wherever the most light comes from (typically from above in the open air, or from a window), that direction will be used for normalMapping. It's even less accurate than the Half-Life 2 approach. But since this lightmap isn't the only ingredient -the final result gets mixed with other influences as well- the lack of directional information is very hard to tell.
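As a rough illustration, shading with that single dominant direction could look like the sketch below. The 0.5 lift is my own assumption -to keep surfaces facing away from the dominant light from going black, since the baked colour already contains light from all directions- so the actual Engine22 shader may differ:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// lightmapColor : baked RGB light for this texel
// dominantDir   : baked dominant incoming light direction (unit length)
// pixelNormal   : per-pixel normal from the normalMap, same space as dominantDir
Vec3 shadeDominantDir(Vec3 lightmapColor, Vec3 dominantDir, Vec3 pixelNormal) {
    // N.L against the single stored direction gives the normalMap detail.
    float nDotL = std::max(0.0f, dot(pixelNormal, dominantDir));
    float shade = 0.5f + 0.5f * nDotL;  // assumed bias, tweak to taste
    return { lightmapColor.x * shade,
             lightmapColor.y * shade,
             lightmapColor.z * shade };
}
```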
There is one stinky problem though. Transitions from light to shade will generate an ugly flattened "band" in between, where the normal suddenly bends from one direction to another. Didn't find a fix for that yet.
Not saying accuracy can kiss my ass, but the thing with indirect light is... it comes from all directions. Which makes the overall normalMap effect somewhat "blurred", and hard to verify for correctness. Just as long as we see some shades and highlights, we're happy. For now.
Static?
Now the biggest challenge. I mentioned that 90% (just grabbed a good-sounding number here) of game scenery is static. But how about that other 10%? Ignore it? That wouldn't be a very humane thing to do.
This problem splits into two parts. First of all, lights can change. The sky turns from day to night. A room-lamp switches on and off. Second, dynamic objects can't use lightmaps. Sure, we can bake a correct, high-quality lightmap for a rock object. But as soon as we roll the rock over, the lightmap is wrong. It would have to be re-baked, but that is (WAY) too slow.
Engine22 solves the first problem in multiple ways. For one thing, lights can be fully dynamic, updating their shadowMaps every cycle. But the consequence is that they do NOT generate any indirect light. A source like your flashlight will not get involved at all while baking a lightmap, simply because we never know if & where that flashlight will be. An ugly, yet somewhat effective hack is to add a secondary, larger, weak pointlight to our flashlight. This way the flashlight still illuminates the surrounding room a bit, also outside its primary light-cone.
But more interesting are Stationary lights. These lights can't move, but CAN change colours. Which also means they can be turned on/off (setting the colour to black = off). They can optionally still cast real-time shadows, so dynamic entities like our hero will cast a correct shadow on the ground. The lightmap works differently for these stationary sources though. It won't capture the incoming light direction or colour - or what's left of the light energy after some bounces. Instead, it only stores the "influence factor", from 0 to 100%. An RGBA image can hold up to 4 factors this way. Each entity can be assigned up to 3 Stationary lightsources, and we reserve the Alpha channel for skylight, which might change as well if you have a weather-system or day/night cycle.
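Here is a minimal sketch of how those stored factors could be applied at runtime. The light colours and sky colour are plain uniforms the game can change every frame; the names are hypothetical, not Engine22's actual API:

```cpp
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// influence     : RGBA texel from the baked influence map (0..1 per channel)
// lightColor[3] : current colours of the 3 assigned stationary lights
// skyColor      : current sky colour (day/night cycle, weather)
Vec3 stationaryLighting(Vec4 influence, const Vec3 lightColor[3], Vec3 skyColor) {
    const float w[3] = { influence.x, influence.y, influence.z };
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 3; ++i) {
        out.x += lightColor[i].x * w[i];
        out.y += lightColor[i].y * w[i];
        out.z += lightColor[i].z * w[i];
    }
    // Alpha channel holds the skylight influence factor.
    out.x += skyColor.x * influence.w;
    out.y += skyColor.y * influence.w;
    out.z += skyColor.z * influence.w;
    return out;  // switching a light "off" = setting its colour to black
}
```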
So this way we have indirect light, and still the ability to switch sources on/off, or play around with their colours. However, only 3 sources per entity could be a limitation in some special situations (though the easy way to solve this is typically to divide a large floor-entity into smaller sections). And it does not support colour-bleeding. If white skylight bounces off a red wall, the surroundings should turn slightly reddish/pinky as well. But since we only store an influence factor, that effect is lost. Colour-bleeding does still work with fully static lights though.
Green stuff = skylight. We don't store "green", just a percentage. So if the sky turns orange, so will the green stuff on the walls and floors here.
Useless for dynamic objects?
As for that other "static issue", well, that still is an issue. Lightmaps are useless for anything that doesn't like to stay put. Engines often fix this by generating additional probes. Think of small orbs floating in the air, together forming a grid. Each probe captures incoming light from above, below, left, right, et cetera. Same principle as a lightmap really, except that these probes do not block or bounce light. They capture, but don't influence, light photons.
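Sampling such a six-direction probe (an "ambient cube", as Valve called it) could look like this. A minimal sketch, assuming one RGB value per world axis; the squared-normal weighting is the standard trick, not necessarily what my old engine did exactly:

```cpp
struct Vec3 { float x, y, z; };

// probe[0..5] = light arriving from +X, -X, +Y, -Y, +Z, -Z
Vec3 sampleAmbientCube(const Vec3 probe[6], Vec3 n) {
    // Squared normal components weight the three axes; the sign picks the face.
    Vec3 n2{ n.x*n.x, n.y*n.y, n.z*n.z };
    const Vec3& px = (n.x >= 0.0f) ? probe[0] : probe[1];
    const Vec3& py = (n.y >= 0.0f) ? probe[2] : probe[3];
    const Vec3& pz = (n.z >= 0.0f) ? probe[4] : probe[5];
    return { px.x*n2.x + py.x*n2.y + pz.x*n2.z,
             px.y*n2.x + py.y*n2.y + pz.y*n2.z,
             px.z*n2.x + py.z*n2.y + pz.z*n2.z };
}
```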
I did this in the older Tower22 engine as well; see this movie.
Works pretty well, but as usual there are some problems. It's hard to see due to all the stuff going on (and the blurry video quality hehe), but if you focus on the far background in those tunnels, you'll see some light popping in/out. That's the probe-grid moving along with the camera. The problem with a uniform 3D grid of probes is its size. Even though a single probe only contained 6 RGBA values here (this can be done smarter with Spherical Harmonics btw), the total amount of probes makes it big. I believe a 128x128x128 volume texture was used in that demo. Or actually 6 - one for each 3D cubemap axis (up, down, left, ...). So do the math:
128^3 texels x 4 bytes (RGBA8) x 6 volumes = 50,331,648 bytes = 48 MB
The grid spacing was 0.5 meters or so. So, the grid would only cover 64 meters in each direction. And since it was centered around the camera, you could only see half of it, 32 meters, forward. All stuff beyond those 32 meters didn't get appropriate data.
So many megabytes, and the painful part is that 90% of it (again, just throwing numbers) is vacuum-space. If no particle or solid entity is placed there, actually sampling the probe, it's an absolute waste of space. Another awful issue is probes placed behind a wall. The engine tried to eliminate those as much as possible, but it still happened in some situations. It would cause light from a neighbouring room -or worse, skylight- to "leak" into the scene.
The new Engine22 uses a different approach. The artist places probes wherever he thinks they should be placed. Typically that is somewhere near a bunch of entities, in the middle of a room, along a corridor path, behind windows, or in dark corners. Placing probes sucks, it's yet another thing the artist has to bother with. But the result is FAR fewer probes... which allows us to use all those megabytes in more useful ways. Like storing additional information for Stationary lights... or reflections. Engine22 uses "IBL", Image-Based-Lighting. Which is fancy talk for just using pre-baked (but high quality) cubeMaps for local reflections. Again, the static vs dynamic issue arises here. I won't explain it in detail now, but E22 will mix static reflections with real-time ones, IF possible. So, now that probes are also used for reflections -something that pays off in a clearer, more direct way- the extra effort to place them is somewhat justified.
All in all, as you can see, Engine22 doesn't use a single technique for Global Illumination. Lightmaps here, ambient-probes there, a bit of SSDO on top, IBL reflections in the mix, and so on. From a programmer's perspective it's a nightmare, making a solid system that uses all those fake-hacks in harmony. But it works. Sigh.
A probe was placed in the middle of the corridor. It gives glossy reflections to the floors and walls, and also provides a small (convolved) cubemap containing "ambient" for all incoming directions. The Heart object uses probe-GI instead of a lightmap. Additionally, SSDO (a screen-space ambient occlusion technique) adds some local shading in and around the object, as well as in the wood-floor gaps and such.