Saturday, November 28, 2015

G.I. Engine22

Not the first time I write about Global Illumination... and probably not the last time either. Following tradition, the GI system changes every 8 months or so. Realtime, not realtime, voxels, probe grids, mega-fake-awesome GI, and so on. Make up your mind, man!

I think I made up my mind this time... although we'll speak again in 8 months. But for now, I think I got it: good ol' fashioned lightmaps.


Say what?! Lightmaps?! It's like NASA bombastically revealing they'll pick up the 1961 Apollo space program again. Lightmaps are like taking a step backwards, to Quake 1 or something. You'd better have a damn fine reason to justify this, son! And yeah, I do have a reason or two. And if you ask the big-boy engines out there, they may come up with the same story. Remember Unreal4 announcing it had true realtime GI, using Voxel Cone Tracing? Finally! But... without saying much, it suddenly disappeared again, not much later. Again, why?!



A Pixel Trade-off

When doing business, there is always this thing called "ratio". Do we supply our new car model with a state-of-the-art but super difficult plasma-driven transmission? Or are we fine with a Volkswagen engine + some cheat codes? Do we put our precious best man on this job, or do we give that cheap intern a chance? Choose older but reliable components, or take a chance with fancy new ones? Spray prince Abdul's private jet with real heavy gold, or apply fake gold and keep the thing flyable? Bottom line is, you can't always pick the fastest, safest, most beautiful, most elegant, most awesome route. Quantity versus quality. Price versus beauty. RAM versus CPU performance. Possible versus impossible. Titanic lifeboats versus a classy ship.

As for Global Illumination -the art of having near-realistic lighting, including indirect light- it always boils down to a performance, do-ability, and quality trade-off. One way or another, getting there will eat a serious piece of memory and/or performance for sure. The solutions I have seen and tried don't score very well in terms of "do-ability" either. Usually it's so complex, constrained and reliant on approximations that the outcome is hard to predict and even harder to modify to the artist's taste.

But that would be OK if the quality was, well, OK... but it isn't. Not that they all look bad; certainly VCT was/is promising. But the thing is, gamers have been spoiled with semi-realistic lighting for many years already. I'm not talking about dynamic lights (sources that switch, move, disappear, change, ...), but just about a static scene where the light acts as expected. Darker in the corners, yet not pitch black. Blueish "skylight" falling in from above or through a window. Small gaps and holes feeding atmospheric bits of light into an otherwise dark bunker. Glossy reflections on the floors, but also on walls and organic objects.

Half-Life 2 already had that stuff. At a lower resolution maybe, and yes - all static. But hey, 90% of the environment and lights you'll see in an ordinary game don't move or change anyway. Besides, who cares? You as an ambitious programmer maybe, but the average gamer has no clue. And although I actually knew the difference between pre-processed lightmaps and realtime (Doom3 back then) lights, I never thought "Dang, I sure miss some dynamic lights in this game!" while playing Half-Life 2.

I should stop about "the old days" with Half-Life 2. So, here you go, Portal 2. A bit younger, yet still using ancient lightmaps. But admit it, this looks pretty cool, right?

But Rick, that was more than 10 years ago (holy shit). True. But believe me, most of the stuff you see in games is still very static, pre-baked lighting/reflections. Higher quality though. More video-card memory = larger lightmaps = sharper textures.

Now, if Unreal4, CryEngine, Engine22, or whoever abandons pre-baked lighting and introduces true realtime lighting today... the quality would probably be worse than Half-Life 2 from over 10 years ago. “Yeah, but it's realtime! Look, the indirect light changes if we open or close the door in this room! The ceiling also brightens up if we shine our flashlight on that wall in the rear! No more waiting times for the artist while baking a lightmap!” Cool and the Gang, but again, gamers don't know / don't care. They WILL complain about low-resolution lighting, artefacts, and the ridiculous system requirements though!!


Who am I to say gamers don't care about realtime lighting? That's not entirely true. Features like day-night cycles, local lights that also illuminate around the corner, and destructible environments that adapt correctly sure do matter. But we can fake these things! That's the point. Gamers don't care whether you applied a lightmap, VCT, a realtime photon-mapper, or a brown shit-tracer for that matter. Just as long as it looks GOOD, and runs GOOD on their rigs. That's what matters.

The trick is to make a hybrid system. Good old high-quality lightmaps (or probes) for your static scenery -WHICH MAKES UP MOST OF THE GAME!- and realtime hacks for dynamic objects. The latter usually rely on lower-quality, cheap tricks (and that hurts us proud graphics programmers). But we can get away with that, because dynamic objects are usually relatively small (puppet versus building), tend to move a lot, and -I repeat- they only make up a relatively small portion of the entire scene.



Back to lightmaps then?

It took me quite a while to get over this. Lightmapping is a technique from the previous century. So much has changed, but we still rely on that old crap to do some decent lighting? It sounds unnatural, and having to apply weird tricks to get the dynamic objects (monsters, barrels, cars, trees) lit as well sounds crappy. This is why I kept investigating realtime techniques. And I'm probably not the only one. Lots of papers out there. CryEngine tried Light Propagation Volumes, Unreal4 focused on Voxel Cone Tracing for a while, and so on. But the truth is… nothing beats the old lightmap.

So, while upgrading the Tower22 engine, I wanted to make a more final decision on this one as well. For me it's fun to screw around with techniques, but now that I'm trying to get some tools working *properly* for eventual future Tower22 artists, I really needed something robust. Now artists probably dislike lightmaps for their long build times. But at least we all know how they work. Like women, can't live with them, can't live without them. If the end result is satisfying, and if the artist has sufficient control over it, lightmaps are (still) an accepted tool.

Nice realtime results. But smart professors, please: get the fuck out of that stinky Cornell Box. Games take place in open worlds, with dinosaurs and robots running around, with many more lights. Realtime G.I. was within reach 5 years ago, they said. I bet we'll hear the same story 5 years from now. Or at least 8 months from now ;)


Engine22 Implementation

Don't know what other engines are doing exactly, but 2015 Engine22 lightmaps are slightly different from the old nineties maps. Yeah, there is some progress at least :) Old lightmaps have a few major issues:

· They are static! If we switch a light on/off, they don't change!
· They are useless to dynamic objects like characters, boxes or furniture that can be moved!
· They are flat! NormalMapping requires more info than just a colour.
· They are ugly / blocky!


Ugly?
The last point is semi-fixed by the amount of memory we have available today. More memory = larger lightmap textures = sharper results. But I say semi-fixed because STILL, you can sometimes see "blocks" or dots. Realtime techniques like shadowMaps are much sharper in general, because they concentrate on a small area or don't use stored data at all.

Another little problem is streaming lightmaps. Old Quake or Half-Life maps were loaded one by one. A game like GTA, Fallout or Tower22 is free-roaming though. No small sub-levels, but 1 huge world. In Engine22, each entity (a wall, pillar, floor, but also a sofa or cabinet that never moves) has its own small lightmap. Resolutions are adjustable per entity, but think about 128x128 pixels or something. When a new map-section is loaded, it will also load the lightmaps as separate images (though they are packed together in 1 file).
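
To make that a bit more concrete, a per-entity lightmap record could look roughly like the sketch below. This is illustrative C++ under my own assumptions; the names (EntityLightmap, MapSection) are not actual Engine22 code.

    #include <cstdint>
    #include <vector>

    struct EntityLightmap {
        uint16_t width  = 128;          // resolution is adjustable per entity
        uint16_t height = 128;
        std::vector<uint8_t> rgba;      // pixel data, streamed in with the map-section

        size_t byteSize() const { return size_t(width) * height * 4; }
    };

    struct MapSection {
        // All lightmaps of one map-section are packed in a single file on disk,
        // but end up as separate images in memory after loading.
        std::vector<EntityLightmap> entityLightmaps;
    };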

A little extra advantage of having separate, small lightmaps, is that the artist can update a local entity only. Click the floor, change some properties or move a lamp, and then re-bake the floor only. Obviously a lot faster than having to re-bake the entire room/scene.


But since GPUs like to batch as much as possible, hopping between lots of separate textures sucks. So, currently, after being loaded, lightmaps are all stuffed together into 1 huge "Atlas-Lightmap" texture. This happens dynamically - subcells of this atlas come and go, as new map-sections are loaded and dumped on the fly while the player moves. The downside of an atlas texture, however, is that the total lightmap space is still limited. So, I might change strategies here.
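
For illustration, a very naive version of such a dynamic atlas could look like this sketch: fixed-size cells get claimed when a map-section streams in, and released when it gets dumped. A real packer would support mixed resolutions; all names here are my assumptions, not Engine22 source.

    #include <optional>
    #include <vector>

    struct AtlasCell { int x, y, size; };   // sub-rect inside the big atlas texture

    class LightmapAtlas {
    public:
        LightmapAtlas(int canvas = 4096, int cell = 128)
            : canvasSize(canvas), cellSize(cell),
              used(size_t(canvas / cell) * (canvas / cell), false) {}

        std::optional<AtlasCell> allocate() {
            int perRow = canvasSize / cellSize;
            for (int i = 0; i < int(used.size()); ++i)
                if (!used[i]) {
                    used[i] = true;
                    return AtlasCell{ (i % perRow) * cellSize,
                                      (i / perRow) * cellSize, cellSize };
                }
            return std::nullopt;   // atlas full: the "still limited" downside
        }

        void release(const AtlasCell& c) {
            int perRow = canvasSize / cellSize;
            used[size_t(c.y / cellSize) * perRow + (c.x / cellSize)] = false;
        }

    private:
        int canvasSize, cellSize;
        std::vector<bool> used;
    };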

Atlas shown in the corner... The red space reveals we didn't have a lot of guests at our party. A waste of space really. But keep in mind the scene is still very empty (and ugly).


Flat?
No boobies with lightmaps. As said, normalMapping techniques need to know where the light comes from. From the left? Right? Both sides maybe? An old-style lightmap only contains an RGB colour; light "power" that (indirectly, after a few bounces eventually) stranded on that patch of surface. It doesn't remember where it came from though. Problem is that there are infinite possibilities here. If there were 20 lights, light could come from 20 different directions. And even more really, if you count the indirect-bounced light as well. A direction can be stored as an RGB (or even RG) colour in a second texture. But 20 directions? You’re asking too much.

Half life2 "fixed" this with "Radiosity NormalMapping". Difficult words, but the clue is that they simply generated 3 lightmaps instead of 1. One map containing light coming in globally from the bottom-left. One map storing incoming light from the bottom-right. And a third one for light coming in from above, thus mainly skylight or lamps attached to the ceiling. While rendering the game, each pixel would mix between those three lightmaps, based on its pixel-normal. Voila, normalMapping
alive and kicking again. Not 100% accurate, it's an approximation. But at least a brick wall doesn't look flat anymore.
A very, VERY old shot. Actually the very first time I tried lightmaps, in 2006 or something. Nevertheless, old techniques still seem to work.
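
For the curious, the mixing itself is simple. Below is a sketch of the idea in plain C++ (in practice this runs in a pixel shader). The three basis vectors are the well-known Half-Life 2 ones; the squared-dot weighting is a common variant, not necessarily Valve's exact code.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    Vec3 radiosityNormalMap(const Vec3& n,             // tangent-space pixel normal
                            const Vec3 lightmap[3]) {  // texels of the 3 baked maps
        static const Vec3 basis[3] = {
            {  std::sqrt(2.f/3.f),  0.f,                 1.f/std::sqrt(3.f) },
            { -1.f/std::sqrt(6.f),  1.f/std::sqrt(2.f),  1.f/std::sqrt(3.f) },
            { -1.f/std::sqrt(6.f), -1.f/std::sqrt(2.f),  1.f/std::sqrt(3.f) },
        };
        float w[3], sum = 1e-4f;                 // epsilon avoids division by zero
        for (int i = 0; i < 3; ++i) {
            float d = std::max(0.f, dot(n, basis[i]));
            w[i] = d * d;                        // squared falloff
            sum += w[i];
        }
        Vec3 out{ 0, 0, 0 };
        for (int i = 0; i < 3; ++i) {            // normalized weighted mix of the 3 maps
            float k = w[i] / sum;
            out.x += lightmap[i].x * k;
            out.y += lightmap[i].y * k;
            out.z += lightmap[i].z * k;
        }
        return out;
    }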

I considered using (and still consider) this as well. But... having fairly large textures plus some other stuff I'll explain later, the memory consumption may skyrocket. Instead, Engine22 does it even simpler. Dirtier, I might say. Only one additional texture is used, storing the "dominant incoming light direction". So, wherever most of the light comes from (typically from above in the open air, or from a window), that direction will be used for normalMapping. It's even less accurate than the Half-Life 2 approach. But since E22 doesn't rely on this single lightmap alone, the final result gets mixed with other influences as well, making the lack of directional information very hard to tell.
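
In (again illustrative) code, the idea could boil down to something like the sketch below; my own assumptions, not Engine22 source. The small floor value is also an assumption, to keep surfaces facing away from the dominant direction from going fully black.

    #include <algorithm>

    struct V3 { float x, y, z; };
    static float dot3(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    V3 shadeDominantDir(const V3& pixelNormal,    // normal-mapped surface normal
                        const V3& dominantDir,    // from the one additional texture
                        const V3& lightmapRGB) {  // baked (indirect) light colour
        // Plain Lambert against the single stored direction. The floor exists
        // because the baked light really came from many directions, not one.
        float ndl = std::max(0.25f, dot3(pixelNormal, dominantDir));
        return { lightmapRGB.x * ndl, lightmapRGB.y * ndl, lightmapRGB.z * ndl };
    }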

There is one stinky problem though. Transitions from light to shade will generate an ugly flattened "band" in between, where the normal bends from one direction to another all of a sudden. Didn't find a fix for that yet.

Not saying accuracy can kiss my ass, but the thing with indirect light is... it comes from all directions. Which makes the overall normalMap effect somewhat "blurred" and hard to verify for correctness. Just as long as we see some shades and highlights, we're happy. For now.


Static?
Now the biggest challenge. I mentioned that 90% (just grabbed a good-sounding number here) of game-scenery is static. But how about that other 10%? Ignore it? That wouldn't be a very humane thing to do.

This problem splits into two parts. First of all, lights can change. The sky turns from day to night. A room lamp switches on and off. Second, dynamic objects can't use lightmaps. Sure, we can bake a correct, high-quality lightmap for a rock object. But as soon as we roll the rock over, the lightmap is wrong. It would have to be re-baked, but that is (WAY) too slow.

Engine22 solves the first problem in multiple ways. For one thing, lights can be fully dynamic, updating their shadowMaps every cycle. But the consequence is that they do NOT generate any indirect light. A source like your flashlight will not get involved at all while baking a lightmap, simply because we never know if & where that flashlight will be. An ugly, yet somewhat effective hack is to add a secondary, larger, weak pointlight to our flashlight. This way the flashlight still illuminates the surrounding room a bit, also outside its primary light-cone.


But more interesting are Stationary lights. These lights can't move, but CAN change colours. Which also means they can be turned on/off (putting the colour on black = off). They can optionally still cast realtime shadows, so dynamic entities like our hero will cast a correct shadow on the ground. The lightmap works differently for these stationary sources though. It won't capture the incoming light direction or colour - or what's left of the light energy after some bounces. Instead, it only stores the “influence factor”, from 0 to 100%. An RGBA image can hold up to 4 factors this way. Each entity can be assigned to 3 Stationary lightsources, and we reserve the alpha channel for skylight, which might change as well if you have a weather system or day/night cycle.
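
Evaluating such a texel at runtime could then look like the sketch below (shader logic written as plain C++ for illustration; the names are my assumptions). The baked factors stay fixed, while the CURRENT light and sky colours are multiplied in every frame.

    struct RGB { float r, g, b; };

    RGB evalStationaryTexel(const float influence[4],  // baked RGBA factors, 0..1
                            const RGB lightColour[3],  // current colours of the 3 assigned lights
                            const RGB& skyColour) {    // current sky colour
        RGB out{ 0, 0, 0 };
        for (int i = 0; i < 3; ++i) {                  // 3 stationary sources
            out.r += influence[i] * lightColour[i].r;  // colour set to black = light off
            out.g += influence[i] * lightColour[i].g;
            out.b += influence[i] * lightColour[i].b;
        }
        out.r += influence[3] * skyColour.r;           // alpha channel = skylight factor
        out.g += influence[3] * skyColour.g;
        out.b += influence[3] * skyColour.b;
        return out;
    }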

So this way we have indirect light, and still the ability to switch sources on/off, or to play around with their colours. However, only 3 sources per entity could be a limitation in some special situations (though the easy way to solve this is simply by dividing a large floor-entity into smaller sections). And it does not support colour-bleeding. If white skylight bounces off a red wall, the surroundings should turn slightly reddish/pinky as well. But since we store an influence factor only, that effect is lost. Colour-bleeding does still work when using fully static lights though.

Green stuff = skylight. We don't store "green", but just a percentage. So if the sky turns orange, so will the green stuff on the walls and floors here.

Useless for dynamic objects?
As for that other "static issue", well, that still is an issue. Lightmaps are useless for anything that doesn’t like to stay put. Engines often fix this by generating additional probes. Think of small orbs floating in the air, forming a grid together. Each probe would capture incoming light from above, below, left, right, et cetera. Same principle as a lightmap really, except that these probes do not block or bounce light. They capture, but don't influence light photons.

I did this in the older Tower22 engine as well, see this movie
                Youtube Tower22 Subway Test

Works pretty well, but as usual, there are some problems. It's hard to see due to all the stuff going on (and blurry video quality hehe), but if you focus on the far background in those tunnels, you'll see some light popping in/out. That's the probe-grid moving along with the camera. The problem with a uniform 3D grid of probes is its size. Even though a single probe only contained 6 RGBA values here (can be done smarter with Spherical Harmonics btw), the total amount of probes makes it big. I believe a 128x128x128 volume texture was used in that demo. Or actually six - one for each cubemap axis (up, down, left, ...). So do the math:
                128^3 x RGBA8 x 6 = 48 MB
The grid density was a probe every 0.5 meters or so. So, the grid would only cover 64 meters in each direction. And since it was centered around the camera, you could only see half of it, 32 meters, forward. All stuff beyond those 32 meters didn't get appropriate data.

So many megabytes, and the painful part is that 90% of it (again, just throwing numbers) is vacuum space. If no particle or solid entity is placed there, actually sampling the probe, it’s an absolute waste of space. Another awful issue is probes placed behind a wall. The engine tried to eliminate that as much as possible, but it still happened in some situations. It would cause light from a neighbouring room -or worse, skylight- to "leak" into the scene.



The new Engine22 uses a different approach. The artist will place probes wherever he thinks they should be placed. Typically that is somewhere nearby a bunch of entities, in the middle of a room, along a corridor path, behind windows, or in dark corners. Placing probes sucks; it’s yet another thing the artist has to bother with. But the result is FAR fewer probes... which allows us to use all those megabytes in more useful ways. Like storing additional information for Stationary lights... or reflections. Engine22 uses "IBL", Image-Based Lighting. Which is fancy talk for just using pre-baked (but high-quality) cubeMaps for local reflections. Again, the static vs dynamic issue arises here. I won't explain in detail now, but E22 will mix static reflections with realtime ones, IF possible. So, now that probes are also used for reflections -something that pays off in a more clear, direct way- the extra effort of placing them is somewhat justified.
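
A tiny sketch of using such hand-placed probes: a dynamic object (or any entity without a lightmap) simply grabs the nearest probe. Real engines blend several probes to avoid popping; the names here are my assumptions, not Engine22 source.

    #include <vector>

    struct Probe {
        float x, y, z;   // artist-placed position
        // ... plus a small convoluted cubemap (diffuse) and a reflection cubemap
    };

    const Probe* nearestProbe(const std::vector<Probe>& probes,
                              float px, float py, float pz) {
        const Probe* best = nullptr;
        float bestDistSq = 1e30f;
        for (const Probe& p : probes) {
            float dx = p.x - px, dy = p.y - py, dz = p.z - pz;
            float distSq = dx*dx + dy*dy + dz*dz;
            if (distSq < bestDistSq) { bestDistSq = distSq; best = &p; }
        }
        return best;     // null if the artist placed no probes at all
    }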

All in all, as you can see, Engine22 doesn't use a single technique for Global Illumination. Lightmaps here, ambient probes there, a bit of SSDO on top, IBL reflections in the mix, and so on. From a programmer's perspective, it’s a nightmare, making a solid system that uses all those fake-hacks in harmony. But it works. Sigh.

A probe was placed in the middle of the corridor. It gives glossy reflections to the floors and walls, and also provides a small (convoluted) cubemap containing "Ambient" for all incoming directions. The Heart object uses probe-GI, instead of a lightmap. Additionally, SSDO (a screen-space ambient occlusion technique) adds some local shading in- and around the object, as well as on the wood-floor gaps and such.

7 comments:

  1. That's the kind of stuff I'm personally considering also, but anyway.
Why I think lightmapping is bad (I will not say why it is good):

1. Along with the global illumination it will embed shadows also (if not, it will look awful). Static shadows are a big no-no for a modern engine. They do not cope well with shadow mapping and dynamic lighting. There are solutions to blend between dynamic object shadows and static environment shadows, but we are running away from the complexity of dynamic GI solutions and we probably don't want to arrive in another complex world of problems.
2. Lightmapping is useful mainly indoors, for relatively small and not overly complex scenes. I can't imagine a big forest with bushes lit by the sun, with every leaf and polygon lightmapped. A huge memory-hogging static scene with awful jagged shadows!
3. How does lightmapping cope with mesh instancing? Yep, probably along with other unique attributes the instance will keep tex coords.
4. It forces you to make a separation between static and dynamic environments, and even worse - to blend seamlessly between those worlds.

If I have a room with a window, a table and a dynamic box that the player can pick up and move around, I need to find a way to light these objects.
The light will come from the window, will light the walls of the room and the table, the table will cast a soft shadow on the floor, and the ceiling will pick up reflected light from the GI provided by the lightmapping. How am I supposed to light the dynamic box? I need it shadowed when it's under the table - so I need shadow mapping to run along with the soft static shadow from the lightmapping, and to blend between those.
Also, when I move the box by the window it needs to pick up more directional light from it - I need light sources all around, and they must be static to match the lightmap.

  2. Hey!

LightMaps come with their share of problems for sure, but solutions are there (and mainly, what I found out is that the alternatives are usually worse):

    1. Baked vs Dynamic shadows
Can't speak for other engines, but I guess they do the same; I can choose whether or not to bake direct light (thus shadows). When not doing that -which is the case by default- direct lighting will still work fully realtime, with dynamic shadowMaps eventually. It's just that the indirect bounces are baked then.


    2. Indoor / Small
Yes. Now Tower22 is a relatively small-scaled indoor game of course, so that suits. Outdoor lighting is usually simpler, because the dominant sources (sun + moon + sky) are very predictable.

This opens options to choose a (semi)realtime GI lighting solution instead, though I know that quite some engines still actually bake static light. But with (SH) probes instead of memory-consuming lightmaps.

Notice Engine22 also uses probes. In fact, I tried to do it with LightMaps and probes only first. But the results are "flat", because probes only capture incoming light at a single point of course, while indoor areas have lots of variation due to many weak lightsources, corners, furniture blocking light, etc. So what I did: use LightMaps IF assigned. If an entity does not have a lightMap, or can't have a lightmap (i.e. dynamic objects), it "falls back" to probes. So it's also possible to make larger outdoor scenes without lightmaps.

Another note: lightmaps can still work at significantly lower resolutions / stretched over larger surfaces, IF they don't contain 1st-bounce lighting (thus direct shadows). Indirect bounced light tends to spread much more equally, giving a less blocky result.


    3. Mesh Instancing
Engine22 does use mesh instancing. In order to draw, say, 100 lightMapped barrels at once, it puts all lightmaps into 1 huge canvas so we don't have to swap textures for each object. OpenGL 4.5 Bindless Texturing could also be an interesting new technique here, but I haven't been able to use it properly yet.

    Anyhow, all objects (of the same instance) have the same UV-unwrap coordinates, but a different offset/scale in the overall atlas. Therefore, the slot-subrect gets stored along with the model-matrices we send over to the GPU before calling "draw".
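
    A sketch of that per-instance data (field names are my assumptions, not Engine22 source): every instance shares the same UV unwrap, but carries its own sub-rect into the atlas, uploaded next to its model matrix before the draw call.

        struct InstanceData {
            float model[16];        // model matrix, column-major
            float lightmapRect[4];  // xy = offset in the atlas, zw = scale
        };
        // In the vertex shader, the per-instance lightmap UV then becomes roughly:
        //   atlasUV = unwrapUV * lightmapRect.zw + lightmapRect.xy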


    4. Separate Static / Dynamic environment
As explained in #1, entities can fall back to probes if they don't have a lightMap. This is also how Unreal4 works, I believe, btw. But yes, there will be a quality drop on dynamic objects. Though if the probes capture their incoming light with the same methods, the difference shouldn't be too big, and the fact that dynamic objects tend to move and are relatively small & curvy helps mask the "ugliness". Also, in a game like T22 at least, 90% of the stuff you see is stationary anyway.

The other alternative is to use probes everywhere to get a more uniform result, but then the static part (90% in the case of T22) will end up with overall lower quality. So I'd rather let the dynamic 10% suffer.


All in all, the "solution" (and keep in mind there is no perfect solution) here is a hybrid, trying to take the best of both worlds. And yeah, merging them into one is tricky. But I shouldn't complain now that it's done (except for the photon-mapper itself, which is actually still an ultra slow ray-tracer at this moment ;) )

Damn, I don't like the constraints a lightmap imposes. It's slow to generate, not very compatible with the idea of realtime WYSIWYG scene editors (move a mesh a bit, then sit back and relax while the lighting recalculates), and it's not trivial to implement a fast, quality lightmapper and texture packer. You have to keep track of memory and texture sizes, and eventually implement a robust world-streaming system to keep memory usage low at any given time. But damn, it looks really awesome, especially for indoor scenes. Global illumination really plays a very important role to the human eye. No "apartment" demo or architecture visualization can do without G.I. - switch off the G.I. and it suddenly looks like crap. Many modern games, despite the increased poly-count and per-pixel bumps and reflections, look flat and boring when compared to Quake3 or HL1/2.
So you say - the best from both worlds. A probe grid (regular or importance-placed) for dynamic objects, and a lightmap that stores the second bounce of light. Something that HL2 is using, and maybe even modern engines like Unity 5 and Unreal Engine 4. It's great to know that probes work fine with small objects and are bad for illuminating big flat surfaces like walls. Those will fall back to a lightmap storing indirect lighting, blurred and stretched (small texture) (as you pointed out, indirect lighting can be blurred without noticeable loss of quality).

LightMaps and speed aren't best friends indeed. To make it a little bit less painful, I can do a quick (low quality, low ray-count) preview, and update the map per "entity" (say a wall, a single furniture piece, a ceiling, ...), so you don't have to wait a hundred years each time something changes. Also, when not baking direct light, moving a light a little bit doesn't really require a full rebuild, as you barely notice the difference.

Probes are placed "smart". Since reflection cubemaps require quite a lot of space, a uniform 3D grid wasn't a good option here. Instead, the artist places them (typically somewhere in the middle of a room, and in some corners maybe). Deferred Rendering is used to "splat" the probes on screen. More probes = more local (accurate) reflections & GI, but again, that also consumes memory and will slow down things eventually.


As for LightMap size, the numbers aren't too bad. Or well, it all depends on the desired resolutions of course, but at this point I have a 2048x2048 canvas available. Entities like walls have resolutions like 128 x 32 or something, and will find an available spot somewhere in that larger "Atlas" canvas. Actually there are 4 such textures (to store directional & sky-influence and other additional data), so (GPU) memory consumption would be 2048 x 2048 x RGBA8 x 4 = 64 MB. Tried DDS compression (~4 times smaller) but the quality suffered too much, especially in dark regions. Anyhow, 64 MB isn't bad at all, though the resolutions at this point might be a bit low for a real "Wow".

Instead of using probes for dynamic objects, I wonder if I can use something like the following:
1. If the dynamic object is small (for example the player picks up a brick, or is carrying a gun, or his own hands), sample the surrounding lighting with a realtime cubemap, rendered every frame or two.
2. Sample the lightmap texture at that position in space to figure out the lighting.
3. For doors, windows, gates, etc., blend two lightmaps (one created from the closed door, one created from the fully opened door), get the percentage the door is currently opened, and make a smooth transition between the two lightmaps.

Probes are essentially cubemaps. In my case, a probe is a tiny (16x16x6) cubeMap with incoming diffuse light, and a bigger-resolution cubeMap texture with incoming specular (reflections).


    1. Sample realtime CubeMaps
You could update a cubemap every cycle (or do a few faces every cycle). In fact, that's how the "older" Tower22 engine did reflections partially, and it's what you see in the videos. It has some impact, though not very bad. BUT, it only works for nearby stuff, like the gun in your hands indeed. And the quality is poor, as we want to capture the cubemap ASAP, disabling all kinds of things.

Now those were reflections, not GI. Tried that too, though only shortly. Main problems were the distances and especially "popping". It probably works pretty well for nearby stuff, like your own character. But since you can update only 1 or a few cubemaps, distant dynamic objects have to share the same info. What you get is that if you walk around a corner, suddenly all dynamic objects will receive more/less/different light. This popping effect is very visible (in a bad way).

Another problem with this approach is that you can't capture diffuse light (quickly) very well. Or at least, I did it wrong. I just downsampled/blurred my original realtime "reflection" cubemap to a very low-res cubemap, thus taking the average incoming light. That is close, but no cigar. If you move a bit, the info will change, and the object (say your gun) will appear a bit reflective. The root of the problem was that you can't just blur and downscale a cubemap. What you really need to do (and this is expensive!) is to "convolve" your cubemap. Every pixel on your cubemap needs to sample incoming light over a 180-degree hemisphere. This is a bit different from just scaling down/blurring.
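
    A brute-force sketch of that convolution step, for illustration only: every output direction gathers ALL source texels over its 180-degree hemisphere, weighted by cos(theta), instead of just blurring/downscaling. A fully correct version would also weight by per-texel solid angle, and this is far too slow for realtime use; the names are my assumptions.

        #include <algorithm>
        #include <cmath>

        struct C3 { float x, y, z; };
        static float dotc(const C3& a, const C3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static C3 norm(C3 v) {
            float l = std::sqrt(dotc(v, v));
            return { v.x / l, v.y / l, v.z / l };
        }

        // Direction through texel (x,y) of face f (0..5 = +X,-X,+Y,-Y,+Z,-Z, GL order).
        static C3 texelDir(int f, int x, int y, int size) {
            float u = 2.f * (x + 0.5f) / size - 1.f;
            float v = 2.f * (y + 0.5f) / size - 1.f;
            switch (f) {
                case 0:  return norm({  1, -v, -u });
                case 1:  return norm({ -1, -v,  u });
                case 2:  return norm({  u,  1,  v });
                case 3:  return norm({  u, -1, -v });
                case 4:  return norm({  u, -v,  1 });
                default: return norm({ -u, -v, -1 });
            }
        }

        // Diffuse light arriving from direction n, gathered from a whole cubemap.
        C3 convolveDirection(const C3* faces[6], int size, const C3& n) {
            C3 sum{ 0, 0, 0 };
            float wsum = 1e-6f;
            for (int f = 0; f < 6; ++f)
                for (int y = 0; y < size; ++y)
                    for (int x = 0; x < size; ++x) {
                        C3 d = texelDir(f, x, y, size);
                        float w = std::max(0.f, dotc(n, d));   // cosine-weighted hemisphere
                        const C3& c = faces[f][y * size + x];
                        sum.x += c.x * w; sum.y += c.y * w; sum.z += c.z * w;
                        wsum += w;
                    }
            return { sum.x / wsum, sum.y / wsum, sum.z / wsum };
        }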

    2. Sample the lightmap texture at that position in space to figure out the lighting.
    Not sure what you mean here...

    3. Blend 2 LightMaps
Doors suck indeed. Wouldn't care too much about windows and gates, as these are partially transparent, but a door can fully block or let through light, resulting in something very different (not just on the door itself, but in the whole room). And games often just don't do this. Opening a door will not affect the (baked) light in any way.

Yeah, how to get around that... Blending would work, but the management is difficult. If your room has 4 doors, there are potentially 2^4 = 16 different situations. And which entities will need these alternative situations, and which won't? And where to store all those extra textures while they are in a "dormant" state? In the end everything is possible of course, but it requires some smart programming and friendly tools for the artist to manage this properly.
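
    For a single door the blend itself is trivial; a sketch with illustrative names: two baked lightmaps, one for "door closed" and one for "door fully open", lerped by how far the door currently stands open. With N doors this explodes to 2^N bakes, which is exactly the management problem above.

        struct Col { float r, g, b; };

        Col blendDoorLightmap(const Col& closedTexel, const Col& openTexel,
                              float openAmount) {   // 0 = closed .. 1 = fully open
            return { closedTexel.r + (openTexel.r - closedTexel.r) * openAmount,
                     closedTexel.g + (openTexel.g - closedTexel.g) * openAmount,
                     closedTexel.b + (openTexel.b - closedTexel.b) * openAmount };
        }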

    This is where true realtime solutions would take over. However, they often depend on static structures as well (like octrees in the case of Voxel Cone Tracing), which are very expensive to alter realtime. Tricky stuff.

  7. "2. Sample the lightmap texture at that position in space to figure out the lighting.
    Not sure what you mean here..."

I mean, get the dynamic object's ("gun") world position, transform that world position back to the texture space of the lightmap texture, and sample the nearest texel colour. In other words: get the nearest triangle to the gun, get its vertices' world positions and lightmap tex coords, interpolate to get barycentric tex coords in the interior of the triangle, and sample the lightmap texture with those coordinates. If needed, get several nearest triangles in six directions around the gun to get directional lighting information. If the gun is under the table (in shadow), get the nearest triangles above, below, to the right, to the left, in front, in back etc., get those triangles' lighting information (by sampling the lightmap texture with their lightmap tex coords), interpolate and shade. The gun should appear in shadow, since all of the surrounding triangles are in shadow. Popping may appear... interpolation errors may occur. A gun by a big wall triangle, most of which is in shadow, extending into another room, but a small part of that triangle is visible and next to the gun. Sorry for the long messy post..
