Sunday, August 26, 2012

Reflective conspiracy theories

A small step for man, but a giant leap for mankind
Before moving on, let's go zero-G for a moment for Neil Armstrong. The first man on the moon (and hopefully not the last) died at the age of 82 on August 25, 2012. If that was a real step on a real moon, Neil has reserved a well-deserved page in the history books for a long, long time. Something we, mankind as a whole, should be proud of. Although we kill each other every day for various reasons, we should realize we all share this tiny globe. Zooming out puts things in perspective, and Neil literally took that perspective when viewing our little planet while standing on another floating rock in this endless cosmos. It's such a huge achievement that it's hard to believe we really did it...

Moon-landing hoax? Who can say. John F. Kennedy, chemtrails, 9/11 inside job, Saddam & biological weapons, Area 51, New World Order? Both the present and history are full of mysteries, and the more you think about it, the more questions arise. Things aren't always what they seem. Having a Moon landing sure came at a convenient time, with those crazy Russians trying to outperform the USA as well. And it surprises me that modern space missions (50 years of technological evolution since the sixties) seem so extremely vulnerable (the control room getting overexcited because Curiosity drove a few centimeters on Mars?!) that it puts the much more ambitious/dangerous Moon landing in a weird contrast.

But before just following the naysayers... Being skeptical is a natural, psychological phenomenon. And not taking everything the media says for granted is healthy. But consider other huge achievements. Didn't we laugh at the Wright brothers? Would Napoleon even dare to dream about the awesome power of a nuclear bomb? Huge pyramids built with manpower only? Do you even have the slightest idea of how CERN works? Would you run with a thousand other soldiers onto Utah Beach while German bunkers are mowing down everything that moves? Men can do crazy stuff when being pushed! But the bottom line is that you and I will never know what really happened, because we weren't there, nor do we have thorough inside knowledge of the matter. All we do is pick sides based on arguments we like to believe. And for that reason, here's an *easy-to-consume* series of the MythBusters testing some infamous Moon-landing conspiracy theories, including the footprints (on dry "sand"?), impossible light & shadows (multiple projector lights?), and the waving flag (in a vacuum?). So before copying others, get your facts right and check out this must-see:
Mythbusters & Moonlanding

Let me spoil one thing already. Something we graphics programmers *should* know. Why isn't that astronaut climbing down the ladder completely black, despite being in shadow? Exactly, because the moon surface partially reflects light. A perfect example of indirect lighting, ambient light, Global Illumination, or whatever you like to call it. Neil, rest in peace. And for future astronauts, don't forget to draw a giant middle finger on the Moon/Mars so we have better evidence next time. Saves a lot of discussion.


Reflections
-------------------------------------
As mentioned above, light bounces off surfaces. Not just to confuse conspiracy thinkers with illuminated astronauts, but also simply to make things visible. If not directly, then indirectly eventually. Reflections are an example of that, and they play an important role in creating realistic computer graphics. Unfortunately, everything with the word "indirect" in it seems to be hard to accomplish, even on modern powerful GPUs. But it's not impossible. Duke Nukem 3D already had mirrors, so did Mario 64, and Far Cry was one of the first games to have spectacular water (for its time) that both refracted and reflected light.

Well, a GPU doesn't really reflect/refract light rays. Unless you are making graphics based on raytracing, but the standard for games is still rasterization, combined with a lot of (fake) tricks to simulate realistic light physics. Reflections are one of those hard-to-simulate things. Not that the tricks so far are super hard to implement, but they all have limitations. Let's walk through the gallery of reflection effects and conclude with a relatively new one: RLR (Realtime Local Reflections), which I recently implemented for Tower22. If you already know the ups and downs of cubeMaps and planar reflections, you can skip ahead right away.


Planar reflections
One of the oldest, yet accurate, tricks is planar reflections. It "simply" works by rendering the scene (the part that needs to be reflected) again, but mirrored. The picture below has 2 "mirror planes". The ultra-realistic water effect, for example, renders everything above(!) the water plane, flipped on the Y axis. That's pretty much it, although it's common to render this mirrored scene into a texture (render-target) first. Because with textures, we can do cool shader effects such as colorizing, distortions, Fresnel, and so on.
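To give an idea of those texture effects, here's a minimal Cg-style sketch of a water/mirror pixel shader sampling such a mirrored render-target, with a normalMap distortion and a cheap Fresnel blend. All names (mirrorTex, waterNormalTex, distortionStrength) are assumptions for this example, not the actual T22 code:

sampler2D mirrorTex;          // the scene, rendered mirrored, from a previous pass
sampler2D waterNormalTex;     // tiling water normalMap
float     distortionStrength; // e.g. 0.02
float3    waterColor;         // base water tint

float4 waterPS( float4 projPos : TEXCOORD0,  // clip-space position, passed from the vertex shader
                float2 uv      : TEXCOORD1,
                float3 eyeVec  : TEXCOORD2   // surface-to-camera vector
              ) : COLOR
{
	// Project into the mirrored render-target
	float2 mirrorUV = (projPos.xy / projPos.w) * 0.5f + 0.5f;

	// Distortion: wobble the lookup with the water normals
	float3 normal = tex2D( waterNormalTex, uv ).xyz * 2.f - 1.f;
	mirrorUV     += normal.xy * distortionStrength;

	float3 reflColor = tex2D( mirrorTex, mirrorUV ).rgb;

	// Cheap Fresnel: more reflection at grazing angles (assuming a horizontal plane, Y up)
	float fresnel = pow( 1.f - saturate( dot( normalize(eyeVec), float3(0.f, 1.f, 0.f) ) ), 5.f );
	return float4( lerp( waterColor, reflColor, fresnel ), 1.f );
}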

Planar reflections are accurate but have two major problems: performance impact & complex (curvy) surfaces. The performance hit is easy to explain; you'll have to render the scene again for each plane. This is the reason why games usually only have a single mirror or water plane. Ironically, increasing GPU power didn't help either. Sure, you can re-render a scene much faster these days, but don't forget it also takes a lot more effects to do so. Redoing a deferred-rendering pipeline, SSAO, soft shadows, G.I., parallax mapping and all other effects for a secondary pass would be too much. If you look carefully at the water (pools) in the T22 Radar movie, you'll notice the reflected scene being a bit different... uglier. This is because lots of effects are disabled while rendering the mirrored scene for planar reflections. Just simple diffuseMapping with a fixed set of lights.

The second problem is complex surfaces. The mirror planes on the image above are flat. That's good enough for a marble floor, and even for water with waves (due to all the distortions and dynamics, you won't quickly notice the error). But how to cover a reflective ball? A sphere has an infinite amount of normals (pieces of flat surface pointing in some direction). Ok, game spheres have a limited amount of triangles, but still, a 100-sided sphere would require 100 mirror planes = rendering the scene 100 times to make a correct reflection. To put it simply, it's WAY too much work. That's why you won't see correct reflections on curvy surfaces.

Conspiracy people! Notice the reflected scene in the water pool being a bit different from the actual scene?


CubeMaps
CubeMaps are the answer to the typical problems with planar reflections... sort of. The idea is to sample the environment from all directions, and store it in a texture. Compare it with shooting a panorama photo. It's called a cubeMap because we take 6 snapshots and "fold" them into a cube. Now we can both reflect and refract light simply by calculating a vector and sampling from that location in the cubeMap texture. The crappy sample below tries to show how a cubeMap is built and how it can be used. The bottom-right image represents the scene from top view, the eye is the camera, and the red line a mirror. So if the eye looks at that mirror, it creates the green vectors for sampling from the cubeMap. In this situation the house would be visible in the mirror.
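In shader code, that lookup boils down to just a few lines. A minimal sketch (the sampler and parameter names are made up):

samplerCUBE environmentCubeMap; // the 6 folded snapshots

float3 sampleReflection( float3 worldNormal, float3 eyeVec )
{
	// Mirror the camera-to-pixel vector around the surface normal...
	float3 reflVec = reflect( normalize(eyeVec), normalize(worldNormal) );
	// ...and use it as a 3D lookup coordinate into the cube
	return texCUBE( environmentCubeMap, reflVec ).rgb;
}

// Refraction works the same way, with a ratio of refraction indices (e.g. air-to-glass)
float3 sampleRefraction( float3 worldNormal, float3 eyeVec )
{
	float3 refrVec = refract( normalize(eyeVec), normalize(worldNormal), 1.0f / 1.5f );
	return texCUBE( environmentCubeMap, refrVec ).rgb;
}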

• Paraboloid maps are a variation on cubeMaps that only require 2 snapshots to fold a sphere. PM's are faster to update in realtime, but lack some quality and require the environment to be sufficiently tessellated.
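For the curious, a (dual-)paraboloid lookup could look like the sketch below. Conventions (axis flips, which hemisphere each map covers) differ per implementation; frontMap/backMap are assumed names for the 2 snapshots:

sampler2D frontMap; // paraboloid snapshot covering the +Z hemisphere
sampler2D backMap;  // paraboloid snapshot covering the -Z hemisphere

float3 sampleParaboloid( float3 reflVec )
{
	// Project the direction onto a paraboloid; pick the map by hemisphere
	float3 d  = normalize( reflVec );
	float2 uv = d.xy / (2.f * (1.f + abs(d.z))) + 0.5f;
	return (d.z >= 0.f) ? tex2D( frontMap, uv ).rgb
	                    : tex2D( backMap,  uv ).rgb;
}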

Since cubeMaps sample the environment in 360 degrees, they can be used on complex objects as well. Cars, spheres, glass statues, chrome guns, and so on. Problem solved? Well, not really. First of all, cubeMaps are only accurate for 1 point in space. In this example, the environment was sampled around the red dot. Stuff located at the red dot will correctly reflect (or refract) the environment, but the further it moves away from the sample point, the less accurate it gets. Does that mean we should sample cubeMaps for each possible location? No, that would be overdone. The advantage of curvy surfaces is that it's really hard for an average viewer to tell whether the reflection is physically correct.

But at the same time, you can't use a single cubeMap for a large reflective water plane, because you will notice the inaccuracy at some point. What games often do is let the map artists place cubeMap "probes" manually at key locations. At the center of each room for example, or at places where you expect shiny objects. Reflective objects then pick the most useful (nearby) cubeMap. In Half-Life 2 you can see this happening. Take a good look at the scope glass on your crossbow... you'll see the reflection suddenly change while walking. This is because the crossbow switches over to another cubeMap probe to sample from.
• Tower22 updates a cubeMap near the camera every cycle and uses it for many surfaces. This means pretty correct (& dynamic!) reflections for nearby objects. Distant surfaces will sometimes show visible artifacts though.

A cubeMap requires 6 snapshots, thus rendering the scene 6 times. This is quite a lot, so cubeMaps are usually pre-rendered. Since we don't have to render the scene again from that point on, cubeMaps provide a much faster solution than planar reflections. However, not being updated in realtime, you won't see changes in the environment either. Ever wondered why soldiers didn't get reflected in some of the glass windows or water pools in Crysis 2? That's why. All in all, cubeMaps are only useful for (smaller) local objects, and/or stuff that only vaguely reflects, such as a wet brick wall or a dusty wood floor.


Other methods?
I don’t know them all, but Crytek introduced an interesting side quest on their LPV (Light Propagation Volumes) technique. To accomplish indirect lighting, one of the things they do is create a set of 3D textures that contain the reflected light fluxes globally. Aside from G.I., this can also be used to get glossy (blurry) reflections by ray-marching through those 3D textures. I sort of tried this technique (a different approach, but also having a 3D texture with a global/blurry representation of the surroundings). And did it work? Well, judge for yourself.


Personally, I found it too slow for practical usage, although I must say I only tried it on an aging computer so far. But the real problem was the maximum ray length. Since 3D textures quickly grow very memory consuming, their sizes are limited. That means they only cover a small part of the scene (surrounding the camera), and/or give a very low quality representation in case the pixels cover relatively large areas. In the picture above, each cell in the 3D texture covered 20^3 centimeters. Which is quite accurate (for glossy reflections), but since the texture is only 64x64x64 pixels, a ray cannot travel further than 64 x 20cm = 12.8 meters. In practice it was even less, due to performance issues and the camera being in the middle. Only a few meters. So the wall behind the camera would be too far away for the wall in front to reflect. This was fixed by using a second 3D texture with larger cells. You can see the room pixels suddenly get bigger in the bottom-left buffer picture. However, raymarching through 2 textures makes it even slower, and the ray length is still limited. All in all, reflections by raymarching through a 3D texture are sort of accurate, but very expensive, and useful for very blurry stuff only. I also wonder if Crysis 2 really used reflections via LPV in the end, btw... guess not.
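For illustration, the core of such a raymarch could look like the sketch below. The volume setup (volumeTex, volumeOrigin, cellSize) is hypothetical, and note that the real LPV implementation stores light fluxes as spherical harmonics rather than plain colors:

sampler3D volumeTex;    // blurry color representation of the surroundings
float3    volumeOrigin; // world position of the volume corner
float     cellSize;     // e.g. 0.20 meters per cell
#define   VOLUME_CELLS 64

float3 rayMarchVolume( float3 startPos, float3 dir )
{
	float3 pos = startPos;
	for ( int i = 0; i < VOLUME_CELLS; i++ )
	{
		pos += dir * cellSize;
		// World position -> 0..1 texture coordinate inside the volume
		float3 uvw = (pos - volumeOrigin) / (cellSize * VOLUME_CELLS);
		float4 smp = tex3D( volumeTex, uvw );
		if ( smp.a > 0.5f )   // alpha marks occupied cells
			return smp.rgb;   // hit: return the (blurry) reflected color
	}
	return float3( 0.f, 0.f, 0.f ); // ray left the volume without hitting anything
}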


RLR (Realtime Local Reflections)
In case you expect super advanced stuff now... nah, got to disappoint you then. If you expect a magical potion that fixes all the typical planar & cubeMap reflection problems, I have to disappoint you as well. Nevertheless, RLR is a useful technique to use additionally. It gives accurate reflections at a surprisingly good performance, and implementing this (post) screen effect is pretty easy. And there's no need to re-render the scene.

How does it work? Simple. Just render the scene as you always do, in HDR if you like. Also store the normal and depth or position of each pixel, but you likely already have such a buffer for other effects, certainly if you're having a deferred rendering pipeline. Now it's MC-Reflector time. Render a screen-filling quad, and for each pixel, send out a ray depending on its normal and the eye vector. Yep, we're raymarching again, but in screen space this time. Push the ray forward until it intersects the scene elsewhere in the image. This can be checked by comparing the camera-to-pixel distance with the camera-to-ray distance. In other words, if the ray intersects or gets behind a pixel, we break the loop and sample at that point. Now we have the reflected color. Multiply it by the source pixel's specularity to get a result. The code could look like this:
float3 pixNormal     = tex2D( deferredNormalTex,   screenQuadUV ).xyz;
float4 pix3DPosition = tex2D( deferredPositionTex, screenQuadUV );

int    steps    = 0;
float3 rayPos   = pix3DPosition.xyz;               // Start position (in 3D world space)
float3 rayDir   = reflect( eyeVector, pixNormal ); // Travel direction (in 3D); note: reflect( incident, normal )
bool   collided = false;
float4 screenUV;

while ( steps++ < MAX_STEPS  &&  !collided )
{
	// Move the ray
	rayPos += rayDir * STEP_SIZE;

	// Convert the 3D position to a 2D screen-space position
	screenUV     = mul( glstate.matrix.mvp, float4( rayPos.xyz, 1.f ) );
	screenUV    /= screenUV.w;
	screenUV.z  *= -1.f;
	screenUV.xy  = screenUV.xy * 0.5f + 0.5f;

	// Sample the pixel depth (camera distance, stored in the position buffer alpha) at the ray location
	float enviDepth = tex2D( deferredPositionTex, screenUV.xy ).w;

	// Check if the ray got behind the pixel it landed on
	collided = length( rayPos - cameraPos ) > enviDepth + SMALLMARGIN;
}

// Sample at the ray target
float3 result = tex2D( sceneHDRtex, screenUV.xy ).rgb;

The nice thing about RLR is that it works on any surface. The green spot gets reflected on the low table, but also on the closet door. Also notice the books being reflected a bit, and the floor, and the wall. No matter how complex the scene is, the load stays the same.

Perfect! But wait, there are a few catches. How many steps do we have to take, and wouldn't all those texture reads hurt the performance? Well, RLR does not come for free of course, but since rays take small steps and usually travel in parallel, it allows good caching on the GPU. Second, you can reduce the number of cycles quite drastically by:
A: Doing this on a smaller buffer (half the screen size, for example)
B: Not sending rays at all for non-reflective pixels (such as the sky or very diffuse materials)
C: Letting the ray travel bigger distances after a while
Or instead of letting the ray travel x centimeters in 3D space, you could also calculate a 2D direction vector and travel 1 pixel each loop cycle. If your screen is 1200 x 800 pixels, the maximum distance a ray could possibly travel would be 1442 pixels (the screen diagonal). To complement, make good use of the power of love, I mean blur. A wood floor has a more glossy reflection than a glass plate. What I did is store the original output texture, and a heavily blurred variant of it. The end result interpolates between the two textures based on the pixel "glossiness" value:
float4 pixSpecularity = tex2D( deferredTexSpecular, screenQuadUV );
float  pixGloss       = pixSpecularity.w;

float3 reflection     = tex2D( reflectionTex,  screenQuadUV ).rgb;
float3 reflectionBlur = tex2D( reflectionTex2, screenQuadUV ).rgb;
float3 endResult      = lerp( reflection, reflectionBlur, pixGloss ) * pixSpecularity.rgb;
// Use additive blending to add the end result on top of the previous rendering work
Of course, there are ways to jitter as well; use your imagination. However, the deadliest catch of them all, giving RLR a C+ score instead of an A+, is the fact that this screen-space effect can only gather reflections from stuff... being rendered on the screen. Imagine the wallpaper wall in the screenshot being reflective. It should reflect something behind the camera then. But since we never rendered that part, we can't gather it either. In other words, pixels that face towards the camera, or towards something else outside the screen boundaries, cannot get their reflections. That makes RLR useless for mirrors, although some women may prefer an RLR-technology mirror. Also be careful with pixels around the screen edges. Your code should detect this so you can (smoothly!) blend over to a black color (= no reflection), as sketched below.
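A minimal sketch of such an edge fade, assuming a hypothetical EDGE_MARGIN constant in texture coordinates (0..1):

// Fade the reflection to black near (and outside) the screen borders
float edgeFadeFactor( float2 uv )
{
	// Distance to the nearest screen edge, in texture coordinates
	float2 dist    = min( uv, 1.f - uv );
	float  nearest = min( dist.x, dist.y );
	// 1.0 well inside the screen, smoothly dropping to 0.0 at the border
	return saturate( nearest / EDGE_MARGIN );
}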

As said, RLR is not a substitute for cubeMaps or planar reflections. Be a ninja and know your tools. Planar reflections for large mirrors / water. RLR for surfaces that only reflect at steeper view angles, and (pre-rendered?) cubeMaps for the other cases.

Saturday, August 18, 2012

T22 Testament

Time flies when... getting older. A little special moment last week when we brought our little girl to elementary school for the first time. Little backpack strapped on her back, shy and carefully entering a new environment, inspecting the classroom a bit. Usually the moms are the softies, but this time I felt like Forrest Gump dropping off his son at the bus as well. Probably mom and dad were more emotional than daughter herself. So sweet!


But what else did we do? Not a whole lot, since I had to visit England for work. But a week ago one of our guys -concept artist Pablo- asked if I could write down some more about "game mechanics". You know, how the game would play. How fast does our hero run? How do you eliminate your opponents? How many hearts does his health bar have? Does he have a health bar at all? And, maybe more importantly, what will you be doing in this game anyway? Unless you have access to my head or tortured one of the team guys to extract information, you likely won't really know how Tower22 will be played exactly. It's a horror game, sure. But what kind of horror game? Killing zombie hordes with a frying pan like the addictive Left 4 Dead? Slowly exploring and puzzling through an infected mansion? Doing bondage & whipping like Castlevania? Or is it more like Luigi's Mansion?

Obviously, the horror genre splits up into several directions. Braindead, The Shining or Twilight (shivers) aren't the same thing either. If you read the "Genre / Gameplay" section or the T22 website, you do have an indication though. Tower22 won't be about killing things all the time. It's more focused on exploring an environment that gets stranger and stranger, solving puzzles, and trying to stay away from boogeymen. And as for the looks, it will be a gritty semi-realistic "Soviet" style, mixed with a bizarre nightmarish/dreamy style. However, that still does not explain the deeper details or core features that should make this game "fun" or "scary" (the paradox about horror games is that they're often not fun at all, in order to make them scary).

Each game attaches itself to several (new) features, trying to make a flagship of those. "Babes & Guns", "Unbeaten 3D graphics, using the Super FX chip!", "Customize your underpants", "Defeat enemies by combining magic spells with your turbo Vortex-Spin-Moves!". Although I haven't managed it for T22 yet, try to make a single catchy one-phrase slogan that describes the best part of your game. Yes we can! Well, powertalk or not, in the end the fun quality of a game depends on which rules or "mechanics" were chosen, and how well they were implemented. Brew the right concoction of game ingredients. 45% jump force, some shotgun, a bit of doors-with-keys, et cetera. Combine that with a proper implementation, meaning your controls/game-world/design style/story exploit these ingredients wisely, and you have yourself a good game.


Easier said than done. As said, artists and audio composers need to catch the style that blends perfectly with the game theme. The programmers need to code the controls, physics, puzzles and A.I. like a well-oiled machine. The map builders need to design the world in such a way that it lends itself to the chosen gameplay features (whether that is jumping, gunning, running, racing, puzzling or whatever). If you could do it all yourself, you would do it right of course, as it's all in your head. But we all know we'll need extra people to realize a game project. How to make sure all of them are facing the right direction? Exactly, by giving clear instructions. And making a Game Document is one of those help tools.

Just writing an A4 with a global description of the game idea isn't enough by far. When it comes to fine-tuning, all details need to be provided. And that goes deeper than you may think. How fast is your player exactly? How does the stamina system work exactly? How long does it take before an enemy returns fire after seeing you? How many items can the player carry? Should the player be able to jump? And each feature needs to be weighed with care. Do not just throw in elements because some other cool game has them too. For example, only add the ability to roll up into a Morph Ball if it fits the story and style, and if the environment provides plenty of puzzles that require this feature. Otherwise it would feel like a dumb gimmick, out of place.

This will result in tons of text. And to make it worse, that text is likely going to change over time as elements get play-tested. Being able to do a Rambo ball-twister twirl might sound like a good idea at first, but after some testing it could still suck. Which requires parts to be rewritten / adjusted. As a machine programmer who occasionally writes manuals or guides as well, I know 2 things about documentation:
-----A: Writing them takes a lot of time, maintaining them even more.
-----B: Nobody really reads them.

Which brings me to C: documents -if there are any- are outdated or delayed. Documentation is a lost child, certainly in smaller companies/groups where the first priority lies in making the actual product. We all know we should write our stuff down, but... not now.


What do we have cooking? A stove. We still have to make it a bit more dirty and old for that finishing ugly touch.


Wikipedia
This brings me to Wikipedia. Wiki... That word always makes me think of tropical juice with a package design full of monkeys swinging over pink crocodiles in a jungle. But I don't have to explain to you what Wiki(pedia) really is. What is important is that Wiki works. It contains a HUGE amount of information, it gets expanded, updated, refreshed and corrected every minute, and moreover, people read it. Not just professors with white moustaches smoking pipes; everyone does it.

Hmmm... wouldn't it be a good idea to use some Wiki power for your (game)documents then? Well, thanks to Brian here who pointed this out, I learned this is possible, and quite easily really! You can download "Wikimedia", the toolset that allows to install your own “Wiki” on a server computer (you can download Wamp for the additional components to setup an Apache + SQL server required by Wiki). Now if we think about Wiki, we think about a worldwide shared encyclopedia. But don’t forget you can set it up in a private network too, making it suitable for companies or turds like me who like to keep their game-document secret for now.


All right. But what exactly makes this better than any other random documentation system? Wiki, PDF or goddamn cuneiform, the contents stay the same, right? Well, let me explain. But instead of taking a game as an example, I’ll take a harvester machine. Yep, I went to England last week to study a machine of our friends over there, in order to document the whole thing. Why? Well, writing stuff down triggers you to learn the matter, as you do research while writing. And of course, it’s supposed to bring over knowledge to other engineers/programmers/service people some day. But as said before, writing this document involves several problems:
#1 It’s huge. As I can’t finish it in one or two days, there is a good chance a higher-priority project will interrupt, leaving a half-finished (= useless) document.

#2 The machine will be changed / updated in the future. Doing a revision is a lot of work though, as you’ll need to check the entire document for changes. This often leads to outdated or even faulty documents.

#3 Making 1 big document that reads comfortably requires writing skills.

#4 I know the programming parts, but not the specific details about which hydraulic valves were used, how a wheel-steering sensor exactly works, or the electrical schematics of the cabin. I need help from others, but they face the same problems, and writing in the same document sucks. Having a pile of separate files sucks as well, unless well categorized.

#5 Do you really think a new programmer is going to read through all that stuff? Probably he will suggest rewriting the system his own way. So for whom & why did you write that document? And even if people want to read it, can they still find it 4 years later in the huge pile of other documents?


Plenty of good excuses you can use to convince your boss to keep you away from boring writing work. But sorry, Wiki eliminates all these problems, more or less. Which is probably also the reason why it’s such a big success. First of all, instead of writing long chapters, you should try to write your system up as separate small blocks. Don’t worry about the relations between those blocks yet. For example, for this machine I could write a specific page about the joystick, how the cruise control exactly works, or the diesel engine. Or to map it to games: a block about the “Healthbar”, “Enemy 3”, or “Player biography”. There is no limit on the Wiki page length, but I’d advise you to keep the pages short and to the point. One or 2 “screens” at max, for example, and just pin down the facts and numbers rather than making a flowing story with “maybes” or “possiblys” that raises questions instead of answers.

This solves the “#5 reading” problem. Instead of having to scan large documents for usable text, the end-user now does a specific query. Want to know more about how the brakes work on this machine, or how the inventory should be implemented in your game? Search “Brakes” or “Inventory”, and go directly to a compact page. No bullshit, just usable info. Which also helps less experienced writers (problem #3), btw. Summing up facts is easier than creating an informative yet readable story.

As we know, Wiki allows linking. A page about Napoleon Bonaparte could refer to another page about Waterloo, or French cheese. This allows you to zoom in further and further. When I describe the machine software, it starts with an overview of main functions such as “Driving”, “Steering”, or “Engine”. Then each function gets its own page containing more detailed info. How it works, which sensors / actuators are involved, adjustable parameters, common problems (for troubleshooting), et cetera. Then we can dive even further. A page that describes the related source code, or specific details about the hydraulic valves being used in that particular system. Manufacturer, suppliers, maximum load, installation schemes, known problems, … In a game document, the description of a certain level could refer to the characters, weapons, and other entities being used there.

Tying the blocks together makes for a rich and informative system that still remains clear, as the individual pages stay relatively short; the reader decides how far he zooms in. Besides, it also solves some more of our typical problems. You can expand your Wiki step by step. A half-finished document isn’t readable, and probably isn’t available either, as it still floats somewhere on the local hard drive of the author. But you can already make use of a Wiki that only contains information at the top levels. A dead link simply brings the reader to a “to be written” page. Your Wiki just gets an address like any other website, and can be seen by everyone (with access to your network) with a web browser. This makes it easy to find, even years later, and hence it even encourages you or other authors to fix the Wiki in case of a dead link or error. Maybe I don’t know crap about cooling fans, but if an engineer reads the document and bounces on faulty info or an unfilled page, he can quickly edit it. This makes the documentation more complete and easier to keep up-to-date. Btw, in the case of Tower22, many of the Wiki pages also generate (concept) drawing tasks for the artists.



Wiki works, and the internet has proven that over the last 10 years. So if you are struggling with documentation, feel like no one reads/checks or contributes to your hard work, or are getting tired of piles of files, a Wiki might work for you. Whether you are writing about games, harvesters or your stamp collection.

Friday, August 3, 2012

Vertex-painting with Bob Ross

RLR (Realtime Local Reflections). A fancy word for... reflections. Realtime.

Did some interesting graphical enhancements these last weeks. Realtime G.I. finally works a bit... got to be careful with such statements; getting consistent results that always look good is hard to achieve with G.I. Furthermore, RLR (Realtime Local Reflections) has been added for pretty accurate reflections on complex surfaces. I'll post about this soon, when I have some nice pics to show with it. And last but not least, we did some finger-painting.

Some artists make money just by throwing a bucket of paint or virgin menstruation blood on a white canvas. Random splatters, abstract stuff dude; smoke enough and you'll see what it means. Normal people however tend to paint / plaster their walls as smooth and even as they can. Yet for games, the artist has to be careful with repeating the same boring texture over and over again. For two reasons. First, even high-res textures still lack the detail to vary enough. In reality, no matter how hard you try, even a boring white wall has some inconsistencies. A little drill hole here, a crack there, a darker spot in the corner, a slight bump here, old brown blood from a squished mosquito, et cetera. In reality, you get all those details for free. But if we had to paste tiny mosquito-blood decals or mini-holes on the walls in a game, it would take ages to finish.

Second reason: truly realistic graphics actually suck. Why else do you think they need light experts, smoke machines and tons of make-up on a movie set? We want dramatic scenes, not the clean white plastered walls we see every day in our own house. That's why we overdo it a bit with larger decals and damaged spots. And that's also why I implemented an artist-throwing-with-buckets feature.

See that green wall? When our new artist Diego (from Spain, of course) showed me that texture, my first thoughts were "....". The wall looked boring (in an empty room, I must say). But then again, what else do you expect from a green plastered wall? If you look in a new empty house, you won't see huge damage decals, random cracks, yellow pee stains and Mickey Mouse holes either. So basically, there was nothing wrong with this texture. But how to get a more dramatic look then?

Of course, you could add a few random details to your texture. For example, let's place a larger crack in the center, and paste some zombie vomit in the upper right corner, just for fun. Well, that might look good from nearby, but if you apply the texture in a larger empty room, it becomes a bit odd that this zombie vomits the same splatter every 3 meters at the same height. In practice, you probably won't use such a specific detail in your textures. That's what we have decals for. But that bigger crack will become noticeable soon as well. Mip-maps may help you hide this repeating pattern after a couple of meters, as such details become more blurry in the lower mip-map levels. But still... The magic trick for texturing is to apply as much detail and variation in your image as you can, without making it too noticeably repeat itself.


Entropy
================
As said, specific details should be added afterwards with decals. Decals can be placed everywhere, anywhere. Plus they can be rotated and scaled, so you can reuse the same crack / hole / splatter / whatever detail multiple times without the viewer directly becoming aware of your dirty tricks. However, decals aren't always perfect either. Either you'll have to draw a LOT of variants, or use them a bit carefully so you don't see the same picture being stamped on the walls over and over again. Using many decals may also have an impact on your performance. And when you need to apply variations on a larger scale, you either need multiple decals or very large textures.

Since half a year or so, Tower22 has had vertex-painting tools. You can draw (override) pre-baked occlusion values with those, in case you want to make a corner darker, for example. But they can also be used for "Entropy". For each material, additional textures can be defined so you can manually draw variations onto your surfaces. For example:
Metal > corrosion
Brick > painted parts / worn parts
Asphalt > holes / wet (water pool) parts
Pavement > green moss between the tiles / displaced normals to make an uneven surface
Wallpaper > parts with paper peeled off / dirty parts

Pay attention, class. Notice the dark wood tiles repeating (the texture contains 4x4 tiles) in the background? Now look at the foreground. A different texture variant (a more pale-looking one) was mixed in around the TV.

If you google “Entropy shader”, you’ll find some nice (UDK) movies. The basic technique is pretty simple: just mix (lerp) between textures based on the per-vertex weight values. It's similar to terrain-rendering shaders that allow you to draw grass patches, sand or rocks. But to make it look more natural, smart tricks and masking textures can be used. Let's take a stone floor where we want to add moss. Where does that green stuff grow first? Exactly, in the gaps between the stones. So if you supply a heightMap somehow, you can fade in the moss at the lower parts of the texture first. Here's some pseudo-code:
// Fetch stone textures (base layer)
float4 stoneAlbedo   = tex2D( stoneAlbedoTex, uv );
float  stoneSpecular = stoneAlbedo.a;  // we stored specular in the albedo alpha
float4 stoneNormal   = tex2D( stoneNormalAndHeightTex, uv );
float  stoneHeight   = stoneNormal.a;  // we stored height in the normalMap alpha

// Fetch moss textures
float3 mossAlbedo = tex2D( mossAlbedoTex, uv * uvRepeatValue ).rgb;
float3 mossNormal = tex2D( mossNormalTex, uv * uvRepeatValue ).xyz;

// Fade in moss. Intensity is stored in vertex.weight1.x
// Lower parts will get moss earlier
float  fadeInFactor   = saturate( (1.f - stoneHeight + bias) * vertex.weight1.x );
float3 outputAlbedo   = lerp( stoneAlbedo.rgb, mossAlbedo, fadeInFactor );
float3 outputNormal   = lerp( stoneNormal.rgb, stoneNormal.rgb + mossNormal, fadeInFactor );
       outputNormal   = normalize( 2.f * outputNormal - 1.f );
// Reduce specular on parts with moss
float  outputSpecular = lerp( stoneSpecular, 0.f, fadeInFactor );
That's just one way to mix. You can also use the normal. If you want to add snow for example, surfaces facing upwards should carry more snow, while surfaces facing downwards shouldn't carry anything at all. Obviously, the way you mix depends a lot on the type of material you're adding.
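For example, a snow mix based on the world-space normal could look like this sketch (snowTex, the transition hardness, and the Y-up assumption are made up for illustration):

// Fade in snow on upward-facing surfaces; a sketch, not actual T22 code
float3 snowAlbedo = tex2D( snowTex, uv * uvRepeatValue ).rgb;

// 1.0 when the surface faces straight up (assuming Y is up), 0.0 when vertical or facing down
float  upFacing   = saturate( dot( worldNormal, float3( 0.f, 1.f, 0.f ) ) );

// Sharpen the transition, then scale by the painted vertex weight
float  snowFactor = saturate( (upFacing - 0.5f) * 4.f ) * vertex.weight1.x;
outputAlbedo      = lerp( outputAlbedo, snowAlbedo, snowFactor );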


Wall painting
================
For our green wall, we did yet another trick. The idea was to have "repainted" spots. The kind owner of this room in Tower22 repainted some worn parts with a fresh layer of paint. The repainted parts should have a slightly different (brighter) color, and the cracks should be less visible on those parts. As shown above, you could make a greenish "repainted" texture variant. But what if we want white or orange paint instead of greenish? Got to make yet another texture? Or how about customizing the color values manually with the vertex-paint tools?

If enabled by the surface shader, it's possible to adjust the Hue / Saturation / Brightness values locally. Again, by painting per vertex. The colors get transformed from RGB to HSV, then we add/subtract the offset values given by the vertex weights, and transform it back to RGB again. This allows us to make the green wall darker, brighter, white, or pink for that matter.
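In shader terms, that could look like the sketch below. The compact RGB-to-HSV pair is a commonly used conversion; vertex.weight2 is an assumed name for the painted offsets:

float3 rgb2hsv( float3 c )
{
	float4 K = float4( 0.f, -1.f / 3.f, 2.f / 3.f, -1.f );
	float4 p = (c.g < c.b) ? float4( c.bg, K.wz ) : float4( c.gb, K.xy );
	float4 q = (c.r < p.x) ? float4( p.xyw, c.r ) : float4( c.r, p.yzx );
	float  d = q.x - min( q.w, q.y );
	float  e = 1.0e-10f;
	return float3( abs( q.z + (q.w - q.y) / (6.f * d + e) ), d / (q.x + e), q.x );
}

float3 hsv2rgb( float3 c )
{
	float4 K = float4( 1.f, 2.f / 3.f, 1.f / 3.f, 3.f );
	float3 p = abs( frac( c.xxx + K.xyz ) * 6.f - K.www );
	return c.z * lerp( K.xxx, saturate( p - K.xxx ), c.y );
}

// Apply the painted offsets; vertex.weight2.xyz = hue/saturation/brightness deltas
float3 hsv = rgb2hsv( outputAlbedo );
hsv.x = frac( hsv.x + vertex.weight2.x );     // hue wraps around the color circle
hsv.y = saturate( hsv.y + vertex.weight2.y ); // saturation
hsv.z = saturate( hsv.z + vertex.weight2.z ); // brightness
outputAlbedo = hsv2rgb( hsv );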

Yet the initial "Hue drawing" results sucked a bit. By nature, weight values interpolate between 2 vertices. So if we painted the center vertex red in the pic above, its color would smoothly go from red to green, like a gradient. Unless you are Bob Ross, that's not how painting works. The transition from one color to another should be harsh. And if we did a quick & dirty paint job, we should see brush-streak patterns, right? No worries, the GPU cooling fan is the limit.

To make more realistic transitions, you can make use of a mask texture. The values in this texture can be seen as an offset (or "height"). With a separate vertex weight, we paint this mask texture (invisibly) onto the walls. The more intense the painting, the higher the offset: --- offset = tex2D(mask).x - 1.f + vertex.weight.x ---
Then afterwards, we colorize it with the Hue painting tools. If the offset is 0 or below, nothing happens. Once above 0, the color rapidly changes into its second variant, given by the custom Hue values.

No, this is not what you think it is. Just an artist throwing buckets of virgin menstruation blood, that's all.

Pretty neat, huh? With a couple of relatively cheap tricks (Hue manipulation & one extra read from a mask texture), we can customize this wall in many, many ways. And of course, this can be used much more widely, on a lot of different surfaces. All to "break the patterns". Preventing the eye from seeing the same thing happen twice in a scene is another step forwards into realism. Or at least eye-candy :)