Friday, December 25, 2015

Tower22 - 2016

For those not too deep into technical engine stuff: you may wonder where the hell the nice screenshots have been. Once upon a time, I was able to post a few pics (almost) every week. And then... cowboy mouth-organ playing... not much. There's lots of talk about new engine this, Fuel22 system that, new strategies, and pooh-hah. But the question remains: you can Talk the Talk, but can you Walk the Walk?

No pics = no good. Usually. Unless we're like Valve, working super-top-secretly on Half life3. But no. I certainly won't reveal Tower22 in-game details, but other than that, I'm pretty open about the whole development cycle on this blog. In fact so open that I'm still planning to release the whole source code + editors. Huh? Where to find it then?

Well, so far nobody really replied to that, and if nobody is dying to give it a try, I'd rather perfect some things further first. You know, putting your baby into the world brings some responsibility as well. Putting people on hold forever won't give this engine a boost, but neither does a 3% finished product. And as you may know, finishing 100% is impossible when it comes to writing software. It works like a mathematical graph that climbs very fast at the beginning, then starts to flatten, and never really reaches the 100% target. 99.9% at most. There is always something to fix, to change, to improve, to add. And to drop & redo if you wait long enough.


Anyhow, another year flew by. Our little newborn guy went from a never-sleeping crying monster into a walking demolishing monster, and I've been working on the new Engine22 (in a newer Delphi XE3) as well. So, SITREP? Pictures? Good news?

ARRHH *Squeezing* *Pressing* *Pushing*... And there we have a new Engine22 full-coloured turd. If you blur, colorize and rotate the camera long enough... then even a quick & dirty dummy corridor like this may do the trick. A little bit.


Thou shalt not steal
Let me get back to the pics first. The main reason there were a lot more "back then" is because I cheated, "back then". One of the most important things you'll need to get a 3D scene in shape is a good pair of socks... textures I mean, and an idea of course. Without proper floor/wall/ceiling textures (that nicely fit together as well!), it's pretty much impossible to get any applause. By cheating, I mean that back in 2010 I borrowed a bunch of textures from games like Half life, allowing me to get somewhere, even without any artists.

Being a good boy with guilt, I knew these assets had to be replaced ASAP with genuine, homebrew, T22 content. But generating a *good* set of textures (seamless, sharp, enough detail, normalMap, glossMap, ...) is an art in itself. I can do some 3D modelling, but textures... Thankfully, some people who saw the first T22 demo contacted me and offered help. Whoopy! Now I could get my very own T22 texture-set, without stealing! And sounds, and better quality 3D-props as well!

And I received some fine stuff indeed. Especially for a hobby game project. Now, my goal has never been to defeat John Carmack, or to achieve photorealism. But being able to put the bar high was exciting. Can't deny it, I just like eye-candy. Sure, I can play Super Mario with 8-bit graphics, but a horror-game like T22 should look atmospheric and believable at least. This genre simply requires compelling visuals and audio, and relies less on solid gameplay mechanics. In my opinion, too many indie games try to get away with simplistic graphics by putting a "Retro-look, duh" label on it.

That introduced a problem (or two) though, plus some understanding for the simple indie graphics. Who's gonna make all those high-quality assets?! Only a few guys helped me, and only a few of them REALLY helped me, meaning they could deliver on a more regular basis. Although you can forget the word "regular" here. Somehow, usually only one person at a time had some spare-time for a week or two. Which resulted in really nice textures/props/sounds/drawings, but all in all, the pace of progress was worse than a mobility scooter stuck in a trench.

So there you are. Thou shalt not steal anymore, albeit thou won't get new awesome assets either. And I'm waiting, waiting... waiting for a world to change. But it doesn't change, and I can't blame the people helping me. They have their own things going on, and especially with the quality-bar set so high, it takes them a lot of energy to produce those assets. On top of that, you won't find a lot of artists that are willing to share that degree of talent for free either.

You may recognize some HL textures in this old Tower22 shot... although I did make the TV & glass thing (and the map geometry) myself. But yet another problem arises here: it was good enough in 2010, it stinks in 2015. That damn bar keeps raising itself. Help!


Fuel22 Strategy
Time to re-evaluate strategies (that was one year ago). Money. Lead. Guidance. Targets. Tools. Money. Those were the missing chain links. Indie and no money, OK. But high quality, lots of work and no money? That's a no-no. There are ways to generate some money. Crowdfunding, Kickstarter. But before heading that way, I want some guarantees. I won't announce a super-turbo-project without some basic team & foundations first. But... how to get a team/artists first, without money? Chicken & Egg story. That's where Fuel22.net came around. I won't explain it in too much detail here (has been done before: link). But in short, it's a webshop. Hold on! A little bit different compared to other existing (big) ones though. It has a planning component that should tackle a couple of the other weak spots: Lead, Guidance, Targets. You see, the idea of this webshop is not to get rich, but to accelerate development. Listen up.

You can hire a bunch of skilled construction workers, but without blueprints and supervision, they won't do shit except whistling at girls. Instead of mailing artistX, asking if he or she can make a medieval sofa "some day", pretty please, I'll put these tasks + explanations on the webshop. Now basically anyone can (try to) make it. This way it gets clearer what has to be done. And non-secret tasks can be viewed by any visitor in the shop as well. So if you feel you can bake that pavement texture or record crying monkey sounds: be my guest.

BUT, once you accept a task, the clock starts ticking. If you deliver bad work, or not in time, the task gets rejected. No more computer-crashed, dog-ate-my-homework, girl-got-pregnant, too-busy-with-FIFA17 excuses. Fuck that, I'm trying to make a game here, don't waste my time if you can't help. But also -fair and square- you get your money reward in return for a job well done. Your asset will be bought via the Fuel22 webshop. By me. And hopefully by some others as well. Most of the profit goes directly to the artist, and a small bit goes back into the T22 depot, so I can keep buying stuff from my artists. Hence the name "Fueling"(22). Good for you, good for me.

Will it save Tower22? Who can say. But some structure and a reward (& punish) system is better than nothing at all, right? Only problem is... once again, I'm relying on some charity here. A guy is making this website for me, for free. So, you'll get the usual computer-crashed, parrot-shat-on-homework, floppies-missing excuses. Nah, just teasing. But yes, it's taking too long. Probably I just shouldn't be so picky, and pay the guy. Or let somebody else do it (you know anyone?). For money. In 2016 it's time to kick ass and chew bubblegum, not to keep waiting forever.


Engine22 - 2.0
I wasn't too much in a hurry last year though. I put T22 in a dormant state, and focussed on rewriting the Engine + Tools. Because that's the other side of the coin. If Fuel22 was finished today, and some artists would be happy to help tomorrow, then... then what? To use the construction workers example again: if we have 8 handy hairy bulky chaps on the site tomorrow, you'd better have your materials ready as well. Bricks, hammers, drills, cement mixer... If you don't, they'll walk away angry again.

One of the problems with the previous Engine/Tower22 build was the lack of "do-it-yourself" tools. I spent a lot of effort explaining the wishes, helping, and reviewing their work. That wasn't the issue. But they didn't have the Map Editor or a game executable. Basically they modelled/painted something in Photoshop, Blender, Maya, Max or whatever program, sent the raw files to me, and then a day later I returned some screenshots + comments. In times when any hobbyist can download Unreal4 and develop & test, this just isn't a playful, efficient, motivating way to get things done.

It's not that I didn't want to give them the tools, but these programs were error-prone, not very user-friendly, and a (properly working) game.exe was missing. Reason? 95% of my energy went into graphics, and into trying to complete demo movies (myself mostly), instead of gameplay mechanics, user-friendly editors, and other ingredients a game needs. That had to change. And that was one of the reasons to redo the engine + tools, as well as to open up the source for you guys. Of course I'm a bit afraid people may steal my ideas or code. But then again... what can they steal really? As long as Tower22 isn't much more than a few demo movies, there isn't much to steal to begin with. Why transform your home into a fortress, if there aren't any visitors?


Maybe I should try to get some visitors (back) first, and loosen up. So, what did 2015 bring us? Some pictures finally?! Well, as I was trying to explain, without any artist input this year (except for Cesar Alvares' audio-track on the Subway demo movie, thanks man!), there aren't any new rooms to show either. Not that there aren't new rooms though. In fact, I began modelling the environment for a real playable demo a few months ago (yes, a downloadable & playable demo is the next station, no more movies!).

But those are "placeholder" maps. That means I do a quick & dirty first version, mainly to show how & what, and to test some proportions and geometric shapes. These maps are filled with "hints" (kinda cool new feature if you ask me). I can place text & pictures or website links in my dummy maps. Then a real artist will fly through these maps later on, look at the hints, and replace the ugly textures and poor maps with professional content. In other words, the new environments pumped into the new engine so far suck:

I warned you. It's up to the artist to transform this "bunker" into a real room some day, as shown in the hint sketch.

And no, a new engine with fancy shaders won't save the day either. In terms of visuals, the new engine doesn't have very obvious improvements anyway, except that things are done better, easier, smarter, and also faster (FPS went from ~20 to ~55 on my laptop, though it will drop again when more is added, I'm sure). In fact, quite a lot of features from the old engine, like particles, water, DoF, or real-time reflections, didn't make their way back into the new one yet. But to give you an idea nonetheless:

OpenGL 2.x --> OpenGL 4.5+
Cg Shaders (dead) --> GLSL
OpenCL Compute Shaders --> GLSL
Custom parameter system --> Physically Based parameter system
Lambert & Blinn shading --> Lambert & Cook Torrance (sketch further below)
Baked probe lighting (for GI) --> LightMaps + IBL probes + influence maps for semi dynamic lights
SSAO --> SSDO
RLR + 1 realtime cubemap (for reflections) --> RLR(realtime) + IBL probes (static)
Shitty HDR --> Better HDR (better color balance)
Fake parallax (POM) effects --> True tessellation shaders & POM
Deferred lighting --> Tiled Deferred lighting
Simple linear fog --> Fog with light volumes
Layered materials (with Vertex Painting)
FXAA Anti Aliasing

Furthermore (todo): light-beams ("volumetric fog") via raymarching, water mirrors, RLR (realtime local reflections), Compute Shader particles + editor (rebuilding the old system in GLSL), DoF, lensflare post-FX, sprites, Cascaded ShadowMaps for long-range lights...
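Side note for fellow shader nerds: to give a rough idea of what that "Lambert & Blinn --> Cook Torrance" line means in practice, here's a minimal GLSL sketch of a Cook-Torrance specular term (GGX distribution + Schlick Fresnel, a common combo in Physically Based shading). Function and parameter names are made up for illustration; this is not Engine22's actual shader code.

```glsl
// Minimal Cook-Torrance specular sketch (GGX + Schlick Fresnel).
// N = surface normal, V = view direction, L = light direction (all normalized).
// roughness & F0 would come from the Physically Based material parameters.
vec3 cookTorranceSpecular(vec3 N, vec3 V, vec3 L, float roughness, vec3 F0)
{
    vec3  H     = normalize(V + L);          // half-vector between view & light
    float NdotL = max(dot(N, L), 0.0);
    float NdotV = max(dot(N, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float VdotH = max(dot(V, H), 0.0);

    // D: GGX normal distribution - how many microfacets point towards H
    float a  = roughness * roughness;
    float a2 = a * a;
    float t  = NdotH * NdotH * (a2 - 1.0) + 1.0;
    float D  = a2 / (3.14159265 * t * t);

    // F: Schlick Fresnel - surfaces get more reflective at grazing angles
    vec3 F = F0 + (1.0 - F0) * pow(1.0 - VdotH, 5.0);

    // G: geometry term (Schlick-GGX) - microfacets shadowing each other
    float k  = a * 0.5;
    float gV = NdotV / (NdotV * (1.0 - k) + k);
    float gL = NdotL / (NdotL * (1.0 - k) + k);
    float G  = gV * gL;

    return (D * F * G) / max(4.0 * NdotV * NdotL, 0.001);
}
```

Compared to a single Blinn specular-power, the win is that one "roughness" slider behaves predictably on any material; that's pretty much the whole point of a Physically Based parameter system.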


Alpha Beta Test
In potential, the new engine should look better. But it's hard to compare at this point, without having a "finished" room & PBR-compatible textures. Also, the old engine was tweaked and understood (by me at least). The new engine is still bare-bones, and especially the Physically Based approach may need some getting used to.

But as said, I tried to focus more on other parts. Graphics are cool, but they date quickly, and they're not mandatory in such an early stage either. More important is to provide a *working*, fun editor for the artists this time. With a game.exe so they can actually run through their own creations. Therefore the main improvements aren't in the graphics section so far, but in game mechanics like Lua script support, the code foundations and the Map Editor. I can honestly say the map editor feels a lot better, and the fresh, cleaned-up code is a huge relief as well. It just smells more pleasant overall. There is still a lot to do, but adding new features goes a lot quicker and cleaner compared to previous work. In the longer term, it should pay off.

Just as important as "looking good" is how to get there. Picking entities, UV-mappers, shortcut keys, easy-to-follow buttons/symbols, import/export tools, quickly previewing things, and so on.


Maybe more interesting for you: I "released" the Map Editor one week ago. Not to the public, but to an artist. And I need one or two more artists. Before I invite the rest of the world, I want some artist feedback first. I'm pretty sure he can crash the whole damn thing in ways I could never imagine, and instead of trying to code *everything* at once (which is impossible of course), I'll do things on demand. If he says "Rick, these shadows look like puke!", or "Really need some bloody particles here!", I'll give that priority. If he can make some cool-looking Tower22 rooms (for the playable demo btw) with it, we're back in business. Till then... Erh, play some other game ;)



Saturday, November 28, 2015

G.I. Engine22

It's not the first time I've written about Global Illumination... and probably not the last time either. Following tradition, the GI system changes every 8 months or so. Realtime, not realtime, Voxels, Probe grids, mega-fake-awesome GI, and so on. Make up your mind, man!

I think I made up my mind this time... although, we'll speak again in 8 months. But for now, I think I got it: good ol' fashioned lightmaps.


Say what?! Lightmaps?! It's like NASA bombastically revealing they'll pick up the 1961 Apollo space program again. Lightmapping is like taking a step backwards, to Quake1 or something. You'd better have a damn fine reason to justify this, son! And yeah, I do have a reason or two. And if you ask the big-boy engines out there, they may come up with the same story. Remember Unreal4 announcing it had true realtime GI, using Voxel Cone Tracing? Finally! But... without saying much, it suddenly disappeared again, not much later. Again, why?!



A Pixel Trade-off

When doing business, there is always this thing called "ratio". Do we supply our new model car with a state-of-the-art, but super difficult plasma-driven transmission? Or are we fine with a Volkswagen engine + some cheatcodes? Do we put our precious best man on this job, or do we give that cheap intern a chance? Choose older but reliable components, or take a chance with new fancy ones? Spray prince Abdul's private jet with real heavy gold, or apply fake gold and keep the thing flyable? Bottom line is, you can't always pick the fastest, safest, most beautiful, most elegant, most awesome route. Quantity versus Quality. Price versus Beauty. RAM versus CPU performance. Possible versus Impossible. Titanic lifeboats versus a classy ship.

As for Global Illumination -the art of having near-realistic lighting, including indirect light- it always boils down to a trade-off between performance, do-ability, and quality. One way or another, getting there will eat a serious piece of memory and/or performance for sure. The solutions I have seen and tried don't score very well in terms of "do-ability" either. Usually it's so complex, constrained and reliant on approximations, that the outcome is hard to predict and even harder to modify to the artist's taste.

But that would be OK if the quality was, well, OK... but it isn't. Not that they all look bad; certainly VCT was/is promising. But the thing is, gamers have been spoiled with semi-realistic lighting for many years already. I'm not talking about dynamic lights (sources that switch, move, disappear, change, ...), but just about a static scene where the light acts as expected. Darker in the corners, yet not pitch black. Bluish "skylight" falling in from above or through a window. Small gaps and holes feeding atmospheric bits of light into an otherwise dark bunker. Glossy reflections on the floors, but also on walls and organic objects.

Halflife2 already had that stuff. On a lower resolution maybe, and yes - all static. But hey, 90% of the environment and lights you'll see in an ordinary game don't move or change anyway. Besides, who cares? You as an ambitious programmer maybe, but the average gamer has no clue. And although I actually knew the difference between pre-processed lightmaps and realtime (Doom3 back then) lights, I never thought "Dang, I sure miss some dynamic lights in this game!" while playing Halflife2.

I should stop going on about "the old days" with Halflife2. So, here you go, Portal 2. A bit younger, yet still using ancient lightMaps. But admit it, this looks pretty cool, right?

But Rick, that was more than 10 years ago again (holy shit). True. But believe me, most of the stuff you see in games is still very static, pre-baked lighting/reflections. Higher quality though. More video-card memory = larger lightmaps = sharper textures.

Now, if Unreal4, CryEngine, Engine22, or whoever abandons pre-baked lighting and introduces true realtime lighting today... the quality would probably be worse than Halflife2 from 10 years ago. "Yeah, but it's realtime! Look, the indirect light changes if we open or close the door in this room! The ceiling also brightens up if we shine our flashlight on that wall in the rear! No more waiting-times for the artist while baking a lightmap!" Cool and the Gang, but again, gamers don't know / don't care. They WILL complain about low-resolution lighting, artefacts, and the ridiculous system requirements though!!


Who am I to say gamers don't care about realtime lighting? That's not entirely true. Features like day-night cycles, local lights also illuminating around the corner, and destructible environments that adapt correctly sure do matter. But we can fake these things! That's the point. Gamers don't care whether you applied a lightmap, VCT, a realtime photon-mapper, or a brown shit-tracer for that matter. Just as long as it looks GOOD, and runs GOOD on their rigs. That's what matters.

The trick is to make a hybrid system. Good old high-quality lightmaps (or probes) for your static scenery -WHICH MAKES UP MOST OF THE GAME!- and realtime hacks for dynamic objects. The latter usually rely on lower-quality, cheap tricks (and that hurts us proud graphics programmers). But we can get away with that. Because dynamic objects are usually relatively small (puppet versus building), tend to move a lot, and -I repeat- they only make up a relatively small portion of the entire scene.



Back to lightmaps then?

It took me quite a while to get over this. Lightmapping is a technique from the previous century. So much changed, but we still rely on that old crap to do some decent lighting? It sounds unnatural, and having to apply weird tricks to get the dynamic objects (monsters, barrels, cars, trees) lit as well sounds crappy. This is why I kept investigating real-time techniques. And I'm probably not the only one. Lots of papers out there. CryEngine tried Light Propagation Volumes, Unreal4 focussed on Voxel Cone Tracing for a while, and so on. But the truth is… nothing beats the old lightmap.

So, while upgrading the Tower22 engine, I wanted to make a more final decision on this one as well. For me it's fun to screw around with techniques, but now that I'm trying to get some tools working *properly* for eventual future Tower22 artists, I really needed something robust. Now, artists probably dislike lightmaps for their long build-times. But at least we all know how they work. Like women: can't live with them, can't live without them. If the end result is satisfying, and if the artist has sufficient control over it, lightmaps are (still) an accepted tool.

Nice realtime results. But smart professors, please: get the fuck out of that stinky Cornell Box. Games take place in open worlds, with dinosaurs and robots running around, with many more lights. 5 years ago realtime G.I. was within reach, they said. I bet we hear the same story 5 years from now. Or at least 8 months from now ;)


Engine22 Implementation

I don't know what other engines are doing exactly, but 2015 Engine22 lightmaps are slightly different from the old nineties maps. Yeah, there is some progress at least :) Old lightmaps have a few major issues:

·         They are static! If we switch a light on/off, they don't change!
·         They are useless to dynamic objects like characters, boxes or furniture that can be moved!
·         They are flat! NormalMapping requires more info than just a colour.
·         They are ugly / blocky!


Ugly?
The last point is semi-fixed by the amount of memory we have available today. More memory = larger lightmap-textures = sharper results. But I say semi-fixed, because STILL, you can sometimes see "blocks" or dots. Real-time techniques like shadowMaps are much sharper in general, because they concentrate on a small area or don't use stored data at all.

Another little problem is streaming lightmaps. Old Quake or Half life maps were loaded one-by-one. A game like GTA, Fallout or Tower22 is free-roaming though. No small sub-levels, but 1 huge world. In Engine22, each entity (a wall, pillar, floor, but also a sofa or cabinet that never moves) has its own small lightmap. Resolutions are adjustable per entity, but think about 128x128 pixels or something. When a new map-section is loaded, it will also load the lightmaps as separate images (though they are packed together in 1 file).

A little extra advantage of having separate, small lightmaps, is that the artist can update a local entity only. Click the floor, change some properties or move a lamp, and then re-bake the floor only. Obviously a lot faster than having to re-bake the entire room/scene.


But since GPUs like to batch as much as possible, hopping between lots of separate textures sucks. So, currently, after being loaded, lightmaps are all stuffed together into 1 huge "Atlas-Lightmap" texture. This happens dynamically - subcells of this atlas come and go, as new map-sections are loaded and dumped on the fly while the player moves. Downside of an atlas texture, however, is that the size of the lightmap as a whole is still limited. So, I might change strategies here.

Atlas shown in the corner... The red space reveals we didn't have a lot of guests at our party. A waste of space really. But keep in mind the scene is still very empty (and ugly).
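For the curious: shader-wise, sampling out of such an atlas is just a scale & offset per entity. A minimal GLSL sketch (the uniform names are invented for the example, not Engine22's actual ones):

```glsl
// Atlas lookup sketch: each entity knows where its private lightmap
// cell ended up inside the big Atlas-Lightmap texture.
uniform sampler2D lightmapAtlas;
uniform vec4      atlasCell;   // xy = cell offset, zw = cell scale (all in 0..1)
in vec2 lmUV;                  // entity-local lightmap UV (0..1)

vec3 sampleAtlasLightmap()
{
    vec2 atlasUV = atlasCell.xy + lmUV * atlasCell.zw;  // remap local UV into the atlas
    return texture(lightmapAtlas, atlasUV).rgb;
}
```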


Flat?
No boobies with lightmaps. As said, normalMapping techniques need to know where the light comes from. From the left? Right? Both sides maybe? An old-style lightmap only contains an RGB colour; light "power" that (indirectly, after a few bounces eventually) stranded on that patch of surface. It doesn't remember where it came from though. Problem is that there are infinite possibilities here. If there were 20 lights, light could come from 20 different directions. And even more really, if you count the indirect-bounced light as well. A direction can be stored as an RGB (or even RG) colour in a second texture. But 20 directions? You're asking too much.

Half life2 "fixed" this with "Radiosity NormalMapping". Difficult words, but the clue is that they simply generated 3 lightmaps instead of 1. One map containing light coming in globally from the bottom-left. One map storing incoming light from the bottom-right. And a third one for light coming in from above, thus mainly skylight or lamps attached to the ceiling. While rendering the game, each pixel would mix between those three lightmaps, based on its pixel-normal. Voila, normalMapping
alive and kicking again. Not 100% accurate, it's an approximation. But at least a brick wall doesn't look flat anymore.
A very, VERY old shot. Actually from the very first time I tried lightmaps, in 2006 or something. Nevertheless, old techniques still seem to work.
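In GLSL, blending those three Half-Life 2 style lightmaps would look roughly like this. The three basis directions are the standard HL2 tangent-space basis; the texture names are invented for the example:

```glsl
// Radiosity NormalMapping sketch (Halflife2 style), illustrative names only.
uniform sampler2D lightmap0;   // light arriving from basis direction 0
uniform sampler2D lightmap1;   // light arriving from basis direction 1
uniform sampler2D lightmap2;   // light arriving from basis direction 2
in vec2 lmUV;

// The fixed Half-Life 2 tangent-space basis directions
const vec3 basis[3] = vec3[3](vec3( 0.8165,  0.0,     0.5774),
                              vec3(-0.4082,  0.7071,  0.5774),
                              vec3(-0.4082, -0.7071,  0.5774));

vec3 radiosityNormalMap(vec3 tangentNormal)   // normal from the normalMap (tangent space)
{
    // Weigh each lightmap by how much the bumped normal faces its basis direction
    vec3 w = vec3(dot(tangentNormal, basis[0]),
                  dot(tangentNormal, basis[1]),
                  dot(tangentNormal, basis[2]));
    w = max(w, vec3(0.0));
    w = w * w;                     // squared weights, as Valve describes
    w /= (w.x + w.y + w.z);        // normalize so the weights sum to 1
    return w.x * texture(lightmap0, lmUV).rgb +
           w.y * texture(lightmap1, lmUV).rgb +
           w.z * texture(lightmap2, lmUV).rgb;
}
```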

I considered using (and still consider) this as well. But... with fairly large textures plus some other stuff I'll explain later, the memory consumption may skyrocket. Instead, Engine22 does it even simpler. Dirtier, I might say. Only one additional texture is used, storing the "dominant incoming light direction". So, wherever most of the light comes from (typically from above in the open air, or from a window), that direction will be used for normalMapping. It's even less accurate than the Halflife2 approach. But since E22 doesn't rely on the lightmap and that single direction alone, the final result gets mixed with other influences as well, making the lack of directional information very hard to tell.
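In shader terms, the E22 single-direction variant boils down to something like this (again: hypothetical names, and the exact mixing is up for tweaking):

```glsl
// "Dominant direction" lightmap sketch: one colour map + one direction map.
uniform sampler2D lightmapColor;   // bounced light colour (RGB)
uniform sampler2D lightmapDir;     // dominant incoming light direction, packed 0..1
in vec2 lmUV;

vec3 lightmapLighting(vec3 bumpedNormal)       // world-space normal after normalMapping
{
    vec3 color = texture(lightmapColor, lmUV).rgb;
    vec3 dir   = normalize(texture(lightmapDir, lmUV).xyz * 2.0 - 1.0);
    // Bend the flat lightmap using the bumped normal versus the stored direction.
    // The 0.5 floor keeps surfaces facing away from the dominant direction from
    // going pitch black; indirect light comes from everywhere anyway.
    float ndl = max(dot(bumpedNormal, dir), 0.0);
    return color * mix(0.5, 1.0, ndl);
}
```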

There is one stinky problem though. Transitions from light to shade will generate an ugly flattened "band" in between, where the normal bends from one direction to another all of a sudden. Didn't find a fix for that yet.

Not saying accuracy can kiss my ass, but the thing with indirect light is... it comes from all directions. Which makes the overall normalMap effect somewhat "blurred", and its correctness hard to verify. Just as long as we see some shades and highlights, we're happy. For now.


Static?
Now the biggest challenge. I mentioned that 90% (just grabbed a good-sounding number here) of game-scenery is static. But how about that other 10%? Ignore it? That wouldn't be a very humane thing to do.

This problem splits into two parts. First of all, lights can change. The sky turns from day to night. A room-lamp switches on and off. Second, dynamic objects can't use lightmaps. Sure, we can bake a correct, high-quality lightmap for a rock-object. But as soon as the rock rolls over, the lightmap is wrong. It would have to be re-baked, but that is (WAY) too slow.

Engine22 solves the first problem in multiple ways. For one thing, lights can be fully dynamic, updating their shadowMaps every cycle. But the consequence is that they do NOT generate any indirect light. A source like your flashlight will not get involved at all while baking a lightmap. Simply because we never know if & where that flashlight will be. An ugly, yet somewhat effective hack is to add a secondary, larger, weak pointlight to our flashlight. This way the flashlight still illuminates the surrounding room a bit, also outside its primary light-cone.


But more interesting are Stationary lights. These lights can't move, but CAN change colours. Which also means they can be turned on/off (setting the colour to black = off). They can optionally still cast real-time shadows, so dynamic entities like our hero will cast a correct shadow on the ground. The lightmap works differently for these stationary sources though. It won't capture the incoming light direction or colour - or what's left of the light energy after some bounces. Instead, it only stores the "influence factor". From 0 to 100%. An RGBA image can hold up to 4 factors this way. Each entity can be assigned to 3 Stationary lightsources, and we reserve the Alpha channel for skylight, which might change as well if you have a weather-system or day/night cycle.

So this way we have indirect light, and still the ability to switch sources on/off, or play around with their colours. However, only 3 sources per entity could be a limitation in some special situations (though the easy fix is typically to divide a large floor-entity into smaller sections). And it does not support colour-bleeding. If white skylight bounces off a red wall, the surroundings should turn slightly reddish/pinky as well. But since we store an influence factor only, that effect is lost. It is active when using fully static lights though.

Green stuff = skylight. We don't store "green", but just a percentage. So if the sky turns orange, so will the green stuff on the walls and floors here.
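Render-time-wise, applying those factors is dirt cheap; something along these lines (the engine-side names here are assumptions, not the real Engine22 code):

```glsl
// Stationary lights sketch: the lightmap stores 4 influence factors (0..1),
// while the CURRENT light colours come in as uniforms - so they may change
// every frame without re-baking anything.
uniform sampler2D influenceMap;    // R,G,B = stationary lights 0..2, A = skylight
uniform vec3 stationaryColor[3];   // current colours (black = switched off)
uniform vec3 skyColor;             // current sky colour (day/night, weather, ...)
in vec2 lmUV;

vec3 stationaryLighting()
{
    vec4 f = texture(influenceMap, lmUV);
    return f.r * stationaryColor[0] +
           f.g * stationaryColor[1] +
           f.b * stationaryColor[2] +
           f.a * skyColor;
}
```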

Useless for dynamic objects?
As for that other "static issue": well, that still is an issue. LightMaps are useless for anything that doesn't like to stay put. Engines often fix this by generating additional probes. Think of small orbs floating in the air, together forming a grid. Each probe captures incoming light from above, below, left, right, et cetera. Same principle as a lightmap really, except that these probes do not block or bounce light. They capture, but don't influence light photons.

I did this in the older Tower22 engine as well, see this movie
                Youtube Tower22 Subway Test

Works pretty well, but as usual, there are some problems. It's hard to see due to all the stuff going on (and blurry video quality hehe), but if you focus on the far background in those tunnels, you'll see some light popping in/out. That's the probe-grid moving along with the camera. The problem with a uniform 3D grid of probes is its size. Even though a single probe only contained 6 RGBA values here (can be done smarter with Spherical Harmonics btw), the total amount of probes makes it big. I believe a 128x128x128 volume texture was used in that demo. Or actually 6 of them - one for each cubemap axis (up, down, left, ...). So do the math:
                128³ texels x 4 bytes (RGBA8) x 6 textures = 48 MB
The grid density was a probe every 0.5 meters or so. So the grid would only cover 64 meters. And since it was centered around the camera, you could only see half of it, 32 meters, forward. All stuff beyond those 32 meters didn't get appropriate data.

So many megabytes, and the painful part is that 90% (again, just throwing numbers) of it is vacuum-space. If no particle or solid entity is placed there to actually sample the probe, it's an absolute waste of space. Another awful issue is probes placed behind a wall. The engine tried to eliminate that as much as possible, but it still happened in some situations. It would cause light from a neighbour room -or worse, skylight- to "leak" into the scene.



New Engine22 uses a different approach. The artist will place probes wherever he thinks they should be placed. Typically that is somewhere nearby a bunch of entities, in the middle of a room, along a corridor path, behind windows, or in dark corners. Placing probes sucks; it's yet another thing the artist has to bother with. But the result is FAR fewer probes... which allows us to use all those megabytes in more useful ways. Like storing additional information for Stationary lights... or Reflections. Engine22 uses "IBL", Image-Based-Lighting. Which is fancy talk for just using pre-baked (but high quality) cubeMaps for local reflections. Again, the static vs dynamic issue arises here. I won't explain it in detail now, but E22 will mix static reflections with real-time ones, IF possible. So, now that probes are also used for reflections -something that pays off in a more clear, direct way- the extra effort to place them is somewhat justified.
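As a rough GLSL sketch of that IBL part: one pre-baked (convoluted) probe cubemap can deliver both the diffuse "ambient" and the glossy reflections, by picking a blurrier mip-level for rougher materials. Names are illustrative, and real implementations add blending between probes, parallax correction, and so on:

```glsl
// IBL probe sketch: one baked cubemap gives ambient + glossy reflections.
uniform samplerCube probeCube;   // pre-convoluted cubemap from the nearest probe
uniform float       maxMipLevel; // the blurriest mip = fully diffuse

vec3 probeAmbient(vec3 N)
{
    // Blurriest mip, sampled along the normal = diffuse "ambient" light
    return textureLod(probeCube, N, maxMipLevel).rgb;
}

vec3 probeReflection(vec3 N, vec3 V, float roughness)
{
    vec3 R = reflect(-V, N);     // mirror direction
    // Rougher surface -> blurrier (higher) mip of the baked cubemap
    return textureLod(probeCube, R, roughness * maxMipLevel).rgb;
}
```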

All in all, as you can see, Engine22 doesn't use a single technique for Global Illumination. Lightmaps here, ambient-probes there, a bit of SSDO on top, IBL reflections in the mix, and so on. From a programmer's perspective, it's a nightmare making a solid system that uses all those fake-hacks in harmony. But it works. Sigh.

A probe was placed in the middle of the corridor. It gives glossy reflections to the floors and walls, and also provides a small (convoluted) cubemap containing "Ambient" for all incoming directions. The Heart object uses probe-GI instead of a lightmap. Additionally, SSDO (a screen-space ambient occlusion technique) adds some local shading in and around the object, as well as in the wood-floor gaps and such.

Monday, November 16, 2015

Basic principles of game-graphics, 2015

How does Engine22 bring pixels to your screen? How does a game in general draw its graphics? For me, as an unofficial graphics-programmer, it all makes pretty much sense. But when other people ask about it (including programmers), it seems to be a pretty mysterious area. Also, for those who didn't touch "graphics" in the last, say, 10 years: a lot might have changed, maybe?


Quite some years ago, an old friend without any deeper computer background thought I really programmed every pixel you could possibly see. Not in the sense of so-called shaders, but really plotting the colours of a monster-model on the screen, pixel-by-pixel, using code-lines only. Well, thank the Lord it doesn’t work like that exactly. But then, HOW does it work?


Have a Sprite

Graphics is a very complex subject, with multiple approaches and several layers. There is no single perfect way to draw something, though most games use more or less the same basic principles and helper-libraries. On a global level, we could divide computer-graphics into 2D and 3D to begin with. Although technically 3D techniques overlap 2D (you can draw Super Mario using a 3D engine - and many modern 2D games are actually semi-3D), the old 2D games you saw on a nineties Nintendo used sprite-based engines.

A sprite is basically a 2D image. Like the ones you can draw in Paint (or used to draw, when Paint was still a good tool for pixel-artists; modern Paint is useless). In addition, sprites often have a transparent area. For example, all pink pixels would become invisible so you could see a background layer through the image. Also, sprites could be animated, by playing multiple images quickly after each other. Obviously the more "frames", the smoother the animation. But, and this is typical for the old sprite-era, computer memory was like Brontosaurus brains. Very little. Thus small-resolution sprites, just a few colours (typically 16 or 256), and just a few frames and/or little animation in general.
Goro, the 4-armed sprite dude from Mortal Kombat.

When we think about sprites, we usually think about Pac-Man, Street Fighter puppets or Donkey Kong throwing barrels. But the environment was made of sprites as well. The reason why Super Mario is so… blocky, is that the world was simply a (2D) raster. Via a map-editor program, you could assign a value to each raster-cell. A cell was either unoccupied (passable), a brick-block, a question-mark block, or maybe water. And again, the background was made of a raster - but usually with larger cells. Later Marios would allow sloped (thus partially transparent) cells by the way.

So typically an old-fashioned 2D "platform-game" engine gave us a few layers (sky, background, foreground you walk/jump on) for the environment, and (animated) sprites for our characters, bullet projectiles, explosions, or whatever it was. The engine would figure out which cells are currently visible on the screen, and then draw them cell-by-cell, sprite-by-sprite. In the right order: background sprites first, foreground sprites last. And of course, hardware of the Sega, Nintendo or PC provided special ways to do this as fast as possible, without flickering. Terribly slow and primitive by today's standards, but pretty awesome back then.


Next station, 3D

2D worlds made out of flat images have one little problem; you can move and even zoom the camera, but you can't rotate. There is no depth data whatsoever.

3D engines made in the last years of our beloved nineties took a whole different approach (and I'm skipping SNES Mode7 graphics for Mario Kart, and 2.5D engines like the ones used for Wolfenstein or Duke Nukem 3D). Whereas 2D "sprites" were the main resource to build a 2D game, artists now had to learn how to model 3D objects. You know, those wireframe things. To make a box in a 2D game, you would just draw a rectangle, store the bitmap, and load it back into your game engine. But now, we had to plot 8 corner coordinates called "vertices", and connect them by "drawing" triangles. Paint-like programs got extended with (more complicated) 3D modelling programs, like Maya, Max, Lightwave, Milkshape, Blender, TrueSpace, et cetera.

A bit like drawing lines, but now in a 3D space. A (game) 3D model is made out of triangles. Like the name says, a flat surface with 3 corners. Why is that? Because (even to this day) we make hardware specialized in drawing these triangle-primitives. Polygons with 4 or more coordinates would also be possible in theory, but they give a lot of complications, mainly mathematical ones. Anyway, Lara Croft is made out of many small connected triangles. Though 15 years ago, Lara wouldn't have that many triangles, resulting in less rounded boobs.



How the hell does an artist make so many tiny triangles, in such a way that it actually looks like a building, soldier or car? Sounds like an impossible job. Yeah, it is difficult. But fortunately those 3D modelling programs I just mentioned have a lot of special tools. There are even programs like ZBrush that sort of "feel" (but then without the actual feel) like claying or sculpting. You have a massive blob made of millions of triangles (or actually voxels) and you can push, pull, cut, slice, stamp, split, et cetera. Nevertheless, 3D modelling is an art of its own. But, unlike what my friend thought, it is not a matter of coding thousands of lines that define a model. Thank God - though there is this exception of insane programmers who make "64k programs" that actually do everything code-wise. But I'll spare you the details.


We didn’t ditch Paint (or probably Photoshop or Paint shop by then) though. A 3D wireframe model doesn’t have a texture yet. To give our 3D Mario block a yellow colour and a question-mark logo, we still need to put a 2D image on our 3D object. But how? In technical terms; “UV mapping”. To put it simple; it’s like wrapping (2D) paper around a box, putting a decal-sticker on a car, or tattooing “I miss you Mom” on your curvy arm. UV Mapping is the process of letting each vertex know where to grab from a 2D image.



3D techniques – Voxels

So far we explained the art-part; feeding a 3D engine with 3D models (a file with a huge array of coordinates) and 2D images we can “wrap” around them. But how about the technical, programming part? How do we draw that box on the screen?

Again, we can split paths here. Voxel engines, Raytracing and Rasterizing are the roads to Rome. The paved roads at least. I’ll be short about the first one. Voxelizing means we make the world out of tiny square… ehm… voxels? They are like square patches. If you render enough of them together, they can form a volumetric shape. Like a cloud. Or this terrain in the 1998 “Delta Force” game series:

The terrain makes me think about corn-flakes, though this "furry" look had a nice side-effect when it comes to grass simulation (something quite impossible with traditional techniques on a larger scale back then).

Although I think it's technically not a voxel-based engine, Minecraft also kinda reminds me of it; volumetric (3D) shapes getting simplified into squares or cubes. Obviously, the more voxels we use, the more natural the shapes get. Only downside is… we need freaking millions of them to avoid that "furry carpet" look. Though voxels are making their re-entrance for special (background) techniques, they never really became a common standard.


3D techniques – Raytracing / Photon Mapping

Raytracing, or variants like Photon Mapping, are semi-photorealistic approaches. They follow the rules of light-physics, as Fresnel, Young, Einstein, Fraunhofer or God intended them to be. You see shit because light photons bounce off shit and happen to reach your lucky eye. The reason shit looks like shit is because of its material structure. Slimy, brownish, smudgy - well, anyway. Light photons launched by the sun or artificial sources like a lightbulb bounce their way into your eye (and don't worry, they don't actually carry shit molecules).

A lot of physical phenomena happen during this exciting journey. Places that are hard to reach because of an obstacle will appear "in shade", as fewer photons reach there. Though they often still manage to reach the place indirectly after a few bounces (and this is a very important aspect for realistic graphics btw). Every time a photon bounces, it either reflects or refracts (think about water or glass), plus it loses some energy. Stuff appears coloured because certain regions of the colour spectrum are lost. A red wall reflects the red portion of the photon, but absorbs the other colours. White reflects "everything" (or at least in equal portions), black absorbs all or most of the energy. Dark = little energy bounced.


Well, I didn’t pay much attention during physics classes so I’m a bad teacher, but just remember that Raytracing tries to simulate this process as accurate as possible. There is only one little problem though… A real-life situation has an (almost) infinite number of photons that bounce around. Since graphics are a continuous process (we want to redraw the screen 30 or more times per second), it would mean we have to simulate billions of photons EACH cycle. Impossible. Not only the numbers are too big, also the actual math –and mainly testing if & where a photon collided with your world- is absolutely dazzling. If the world was rendered with a computer, it would one ultra-giga-mega-Godlike PC! We’re not even a little bit close.


BUT! Like magicians, we graphics-programmers are masters of fooling you with cheap hacks and other fakery. Frauds! That's what we are. Raytracing doesn't actually launch billions of photons. We do a reverse process; for each pixel on the screen (a resolution of 800 x 600 would give us 480,000 pixels to do), we try to figure out where it came from. Hence the name ray*tracing*. Still a big number (and actually still too slow to do in real-time with complex worlds), but a lot more manageable than billions. Though it's incomplete… By tracing a ray, we know which object bounced it off to us. But where did it come from before that? We have to travel further, to a potential lightsource… or multiple ones. And don't forget yet another obstacle might be between that object and a lightsource, giving indirect light. You see, it quickly branches into millions and billions of possible paths. And all of that just to render shit. Shit.
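To make that "one ray per pixel" idea concrete, here's a toy GLSL fragment shader that fires a single ray per pixel at one sphere, ShaderToy style. A real raytracer continues from the hit point towards the lightsources (and bounces, and bounces...); this sketch stops at the very first hit, and ignores niceties like aspect ratio:

```glsl
#version 330
// Toy raytracer sketch: one ray per pixel, one sphere, no bounces.
out vec4 fragColor;
uniform vec2 screenSize;   // viewport resolution in pixels

// Ray (origin ro, direction rd) vs sphere (center c, radius r).
// Returns the hit distance along the ray, or -1.0 on a miss.
float hitSphere(vec3 ro, vec3 rd, vec3 c, float r)
{
    vec3  oc = ro - c;
    float b  = dot(oc, rd);
    float h  = b * b - (dot(oc, oc) - r * r);
    if (h < 0.0) return -1.0;
    return -b - sqrt(h);
}

void main()
{
    // Build a ray from the camera through this pixel
    vec2 uv = (gl_FragCoord.xy / screenSize) * 2.0 - 1.0;
    vec3 ro = vec3(0.0, 0.0, -3.0);           // camera origin
    vec3 rd = normalize(vec3(uv, 1.5));       // direction through the pixel

    float t = hitSphere(ro, rd, vec3(0.0), 1.0);
    if (t > 0.0) {
        vec3 n = normalize(ro + t * rd);      // sphere normal at the hit point
        float diffuse = max(dot(n, normalize(vec3(1.0))), 0.1);
        fragColor = vec4(vec3(diffuse), 1.0);
    } else {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0); // missed everything: background
    }
}
```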



Well, there you have the reason why games don't use Raytracing or Photon Mapping. And I was about to put "(yet)", but it's not even a "yet". We're underpowered. It might be there one day, but currently we have much smarter fake tricks that can do almost the same (must say, some engines may actually use raytracing for very specific cases, to support special techniques: hybrids).


But it might be useful to mention how (older?) 3D movies were rendered. If you remember game-cinematics like those pretty-cool-ugly movies I mentioned in my previous "Red Alert" review, you may have noticed the "gritty-spray" look. Now first of all, movies are different from games, as they are NOT real-time. Games have to refresh graphics 30 or more times per second to stay fluent. Movies also have a high framerate, but we can render those frames "offline". It doesn't matter if it takes 1 second, 1 hour, or 1 week to draw a single frame. If you have two production years, you have plenty of rendering-time. And of course, studios like Pixar have what they call "Render-Farms". Many computers, each doing a single frame or even just a small portion of a single frame. All those separate image-results are put together in the end, just like in the old days when handmade drawings of Bambi were put in line.

Toy Story must have been one of the first (if not the first) successful, fully computer-animated movies.


So that allows us to sit back, relax, and actually launch a billion photons. Well… sort of. Of course Westwood didn't have years and thousands of computers for their Red Alert movies, nor were the computers any good back then. So, reduce "billions" to "millions" or something. It's never enough really, but the more photons we can launch, the better the results. Due to limitations or time constraints, especially older (game) movies appear "under-sampled", giving that gritty-pixel-noisy-spray look. What you see there is just not enough photons being fired. Surface pixels missed important rays, and blur-like filters are used afterwards to remove some of the noise.


3D techniques – Radiosity & LightMaps & Baking

A less accurate, but actually much faster and (nowadays) maybe even nicer technique, taking the time/quality ratio into account, is baking radiosity lightmaps. Sounds like something North Korea would do in a reactor, but what we actually refer to is putting our camera on a small piece of surface (say a patch of brick-wall) and rendering the surrounding world from its perspective. Everything it can "see" is also the light it receives. If we do that for "all" patches in our world, and repeat that whole process multiple times, accumulating previous results, we achieve indirect light.

But again, it’s expensive. Not as expensive as photon mapping or raytracing maybe, but too expensive for real-time games nevertheless. To avoid long initial processing times, we just store our results to good old 2D images, and “wrap” them on our 3D geometry later on. Which is why we call these techniques “pre-baked”. An offline tool, typically a Map Editor, has a bake-button that does this for you. This is also what Engine22 offers by the way.

Only problem is that these pre-baked maps can’t be changed afterwards (during the game). So it only works for static environments. Walls / floors / furniture that can’t move or break. And with static lightsources, that don’t move or switch on/off (though we have tricks for that).


3D techniques - Rasterizing

Now this is where I initially wanted to be with this blog post. But as usual, it took me 4 pages to finally get there. Sorry. What most 3D games did and still do, is "Rasterizing". And we have some graphical APIs for that; libraries that do the hard work, and utilize special graphics hardware (nVidia, AMD, …). Even if you never programmed, you probably heard of DirectX or OpenGL. Well, these are such APIs. Though DirectX does some other game-things as well, the spear point of both APIs is providing graphics-functions we can use to:
·         Load 3D resources (turn model files into triangle buffers)
·         Load texture resources (2D images for example)
·         Load shaders (tiny C-like programs run by the videocard, mainly to calculate vertex positions and pixel colours)
·         Manage those resources
·         Tell the videocard what to render (which buffers, with which shaders & textures & other shader parameters)
·         Enable / disable / set drawing parameters
·         Draw onto the screen or into a background buffer
·         Rasterize

Though big boys, these graphical APIs are actually pretty basic. They do not make shadows or beautiful water-reflections for you. They do not calculate if a 3D object collides with a wall. You still have to do a lot yourself. But at least we have guidance now, and we utilize 3D acceleration through hardware (MUCH faster).

If we want to draw our 3D cube, we'll have to:
·         Load the cube's vertices & triangles into a buffer on the videocard
·         Load its texture(s)
·         Load & activate a shader program
·         Pass parameters, such as the camera & cube matrices, to that shader
·         Issue a draw-command

Or something like that. Drawing usually includes first loading & transferring raw data (arrays of colours or coordinates) towards the videocard. After that, we can activate these buffers and issue a render-command. Finally, the videocard does the "rasterizing".

In the case of 3D graphics, this means it converts those triangles to pixels. A vertex shader calculates where exactly to put those pixels/dots on the screen. Which usually depends on a "Camera" we define elsewhere, as a set of matrices. These matrices tell the camera position, the viewing-direction, how far it can look, the viewing angle, et cetera. The cube itself also has a matrix that tells its position, rotation and eventually scale. How & if the cube appears is a calculation using those matrices. If the camera is looking the other way, the cube won't be on the screen at all. If the distance is very far, the cube appears small. And so on. Doing these calculations sounds very complex, and yeah, matrix-calculations are pretty scary. But luckily the internet has dozens of examples, and the videocard & render API will guide you. And if you use an engine like Engine22, it will do these parts for you most of the time.
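A minimal GLSL vertex shader showing those matrices at work; this is the classic Model-View-Projection multiply, nothing Engine22-specific:

```glsl
#version 330
// Minimal vertex shader: transform a cube corner into screen space.
layout(location = 0) in vec3 position;   // vertex position from the triangle buffer
layout(location = 1) in vec2 texCoord;   // UV coordinate from the same buffer

uniform mat4 modelMatrix;        // where/how the cube stands in the world
uniform mat4 viewMatrix;         // where the camera stands & looks
uniform mat4 projectionMatrix;   // the "lens": field-of-view, near/far planes

out vec2 uv;

void main()
{
    uv = texCoord;   // pass the UV through to the fragment shader
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
```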

During the rasterization process (think about an old matrix printer plotting dots on paper) we also have to "inject" colours. Fragment or Pixel shaders are used for that nowadays. It's a small program that does the math. It could be as simple as colouring all pixels red, but more common is to use textures (the "wraps", remember?), and eventually lightsources or pre-baked buffers as explained in the previous part. This is also the stage where we perform tricks like "bumpmapping".
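And the matching minimal fragment shader: grab a texel from the "wrapped" 2D image and modulate it with a light colour. Together with the vertex shader above, this is about the smallest textured pipeline you can get; bumpmapping, lightmaps and friends are all extra math bolted onto this stage:

```glsl
#version 330
// Minimal fragment shader: colour the rasterized pixel with a texture + one light.
in vec2 uv;                      // interpolated UV from the vertex shader
uniform sampler2D diffuseMap;    // the 2D image "wrapped" around the model
uniform vec3      lightColor;    // light tint; vec3(1.0) = plain white
out vec4 fragColor;

void main()
{
    vec3 albedo = texture(diffuseMap, uv).rgb;   // fetch the wrapped texel
    fragColor = vec4(albedo * lightColor, 1.0);  // simple modulate, no normals yet
}
```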

Note these “shaders” weren’t there 15 years ago. The principles were the same more or less, but these parts of the drawing “pipeline” were fixed functions. Instead of having to program your own shader-code, you just told OpenGL or DirectX to use a texture or not, or to use lightSourceX yes/no. Yep, that was a lot simpler. But also a lot more restricted (and uglier). Anyhow, if you’re an older programmer from the 2000 era, just keep in mind shaders took over the place. It’s the major difference between early 2000 and current graphics techniques. Other than that…  some old story more or less.

Shots from the new Engine22 Map Editor. Everything you'll see is rasterized & using shaders.

So yeah, with (fragment) shaders my old friend maybe was a little bit right after all, drawing the scene pixel-by-pixel. Either way, it's quite different from more natural (realistic) approaches like Photon Mapping. We rasterize an object, say a cube, monster or wall. We plot the geometric shape on the screen (eventually culling it if something was in front!), but we don't have knowledge about its surroundings. We can't let our pixel-shader check our surroundings to determine what to reflect, what casts shadows, or which lightsources directly or indirectly piss their photons on it. This is done with additional background steps that store environmental information in (texture) buffers we can query later on in those shaders. For example, such a buffer could tell us what a lightsource affects, or how the world is captured at a single point so we can use it for reflections.

It’s complex stuff, and moreover, it’s fake stuff. Whether its shadows, reflective orbs or the way how light finds it way under that table; it’s all fake, simplified, approximated, guessed or simulated. But so damn smart and good that a gamer can hardly tell J Though game-engines like Unreal or Engine22 do a lot more than just graphics (think about audio, physics, scripting, AI, …) their selling spear-point and major strength is usually their magic box of tricks there. And as videocards keep getting faster and faster, Pandora’s box is getting more powerful as well. But remember kids! It’s not physically correct. Fresnel would punch me three black eyes.


Sunday, October 25, 2015

Post-mortem-review #6: Red Alert

My little brother wasn't as much a gamer as I was, but more than once, he would point me to the classics. When Command & Conquer came out, I wasn't really familiar with the "RTS" genre (Real Time Strategy). I played the first Warcraft, which was fun, but Command & Conquer's predecessor "Dune" didn't really catch my appetite. A top-view world with sand or pavement tiles and blocky things that would represent buildings or tanks... and worms coming out of the sand now and then. Nah, I was more of a Doom guy back then.

The first RTS games were developed in the eighties. As a PC gamer, Dune (1992) & Warcraft (1994) were my first encounters.

High Tower

The first Command & Conquer was made in the early CD-ROM era. I can imagine some of you kids have never seen a CD-ROM. Well, neither had we back then. It was hot-brand-new, and beyond cool not to forget. A PC nowadays is, ehm... hell, it has been a while since I bought a PC. Most people have a laptop or tablet. Aside from AutoCAD engineers and the police-station that still operates on old mainframes, those big desktops are a thing of the past. In the past it was pretty badass to have a "High Tower", nowadays people will laugh at you. Anyhow, buy a laptop and it has a DVD or Blu-Ray drive, sound card, 3D card, wide-screen with ten-zillion colours, WiFi, network card, et cetera. Of course. What kind of shop would sell you a laptop without a sound-card, or without networking capabilities? Can you imagine there was a time when none of the parts mentioned above were standard, or even existed at all?

That's right. In 1995 -when C&C was released btw- we didn't have a sound-card at home. Without much internet, a network card wasn't exactly common either. Our monitor only had 256 colours or so. WiFi would have been a pet-bird's name back then. And CD-ROM? Hahaha. No. Only for rich people. My dad was a computer fanatic (in terms of tinkering with/destroying hardware) but not THAT fanatic; if I remember well, the first CD-ROM drives were sold for no less than $400. In 1995, you'd have to slave quite a while for that kind of money.

That didn't stop us from staring at Sunday-evening TV programs where "experts" were assembling computers and teaching us how to use them. You've got to understand, 20 years ago there certainly wasn't a computer in each household. It was an expensive, hard-to-grasp thing. Most people didn't work with computers yet, internet was still a baby, and for games we had a Sega, Nintendo... or almost a Playstation, which used CD-ROMs btw. But the PC was gaining popularity. The office had them, some classrooms had an (extremely old 286 or 386) computer, and besides typing spreadsheets you could now also listen to music or watch a digital movie with this fantastic toy called "CD-ROM"! Quality was horrible but… just the word itself... CD-ROM! No idea what it exactly meant, but a CD just felt so much better than those broken 3.5" floppy disks.

The Bigger, The Better.

And yes, it actually was a whole step forward. A floppy could store up to 1.44 MegaByte, and was terribly slow. Because of those limitations, PC games obviously had to respect some boundaries, by using low-quality sounds/images, and having not too many disks in a box. I believe I once received Doom on 4 or 5 disks. Insert disk 4 of 5. Type A:\. Wait and hear the drive making digital lawn mower sounds, ggggzzzzzkrrt kkrtt krt. And then at disk 4/5, chances were high it would say "Unable to read disk". FFF*****!!

With floppies, you just knew at least one of them would be broken. CD-ROMs were more robust. And although the first drives were still slow as shit, they were fast compared to floppies. But far more important, CD-ROMs were about 500(!) times bigger in size. MegaBytes I mean, not actual size. A common CD had about 650 MegaBytes of space. Today, that sounds like floppies again, as the average USB drive has 8GB or more. Hence we don't even use physical drives anymore. It's all somewhere in that digital cloud, baby.

DigiWood

But back then, it was awesome of course. So far the PC had never been a very popular gaming platform, but now it revealed a secret super-weapon: Game-Movies. Thanks to all those extra megabytes, all of a sudden, every developer equipped their games with crazy music and even crazier movies. Not the slick photo-realistic Hollywood (in-game!) 3D renders we have now. But real (very-low-budget) actors on semi-3D (read: ugly) rendered backgrounds. Silly by today's standards, but a big deal then. It really separated PC games from consoles like the Nintendo, which still used much smaller cartridges. Sure, a PC was an expensive hobby, but if you were lucky enough to have one... Holy Moly!

Silly as the acting, decors, costumes and pretty much everything might have been, visuals like this were absolutely impossible on game consoles till that point.

We were lucky: when the prices started to drop a bit, dad felt life would be better with a CD-ROM drive. And our very first CD-ROM game was? Command & Conquer? No. The 7th Guest. Not just a few cut-scenes, the whole game was rendered like a movie! Truly amazing, being used to simple 2D side-scrollers mostly! Too bad the puzzles were a bit too difficult for an 11-year-old though. Command & Conquer was released not much later. But… not being charmed by "Dune 2", I didn't pick up the game. The pictures of the C&C cut-scene movies in games-magazines were intriguing though.

The very first screenshots from C&C I saw in a games magazine.

But as said, once again I missed the ride, and it was my little brother who came home with big stories about the dad of his friend playing "Command & Conquer": driving a so-called Mammoth tank over 100 men, bombarding bases, deploying machine-gun towers, and so on. Whatever, little dude. It sounded interesting, but it couldn't be cooler than "Crusader: No Remorse". When you're little, you can't try & buy every game that sounds interesting. There were only a few times a year when you could ask mom & dad for a game, so you'd better pick wisely. For Christmas 1995-1996, Quake1 or Full Throttle were on my list. Fortunately little dude persisted and asked for Command & Conquer... Unfortunately for little dude, older & fatter dude would take his place behind the computer and play C&C from then on.

Though I sort of ignored the game, it didn't have to try hard to steal my heart once its CD was loaded in our drive. Mind you, this game had two CD-ROMs! Not one, but two! That really felt like getting two games for the price of one. Two times better, two times bigger, two times more fun. One CD contained the "GDI" (goodguys) campaign, and the other CD the "NOD" (badguys) campaign. The in-game graphics weren't that special, but the slamming electro-metal-hiphop-whatever-it's-called music made by Frank Klepacki gave goose bumps right from the start. Humvees, machine guns, guitars and cut-scenes with explosions. What more could a guy ask for?


Real-Time-Strategy genre

But other than that, it was also just a very nice game to play. No fast-paced dumb shooting like Doom. This time you had to think about your moves. For those who never played an RTS game, let's recap. We have a (top-down "God" view) terrain with obstacles such as rocks, trees, villages, bridges or water. In most missions you'll have to establish a base first, and make a defence system (walls/bunkers/towers) to keep the enemy from destroying your base. Money comes from harvesting Tiberium, which is sort of a large radioactive coleslaw. And since your harvester is usually somewhere outside the base, you'd better keep an eye on it, as the enemy may try to destroy it. Meanwhile you'll be making soldiers, tanks, APCs, buggies, artillery and get-to-the-choppers. First to recon the terrain, and later on to do a counterstrike on the enemy base. You could knock on the front door with a whole tank battalion, or maybe you prefer a more subtle approach: sneaking in and weakening the base defence first, by capturing power-plants for example.

The key is to do all those things simultaneously, and to do them right with the limited time/resources. Spend all money on making tanks, and they all get destroyed by air-units because you forgot Anti-Air units. Waste too much time on building a base, and the enemy keeps pounding your units and wallet. Not spending on an extra harvester will keep the cash flow slow. And it's usually smart to attack the enemy base from the right direction(s), with the right timing, with the right units.


That's an RTS (Real-Time Strategy game) in a nutshell. But probably I didn't need to tell you. Since C&C, hundreds of RTS games have been made: Total Annihilation, WarCraft, StarCraft, Age of Empires, Total War, Company of Heroes, Earth 2150, and the list goes on and on. Command & Conquer wasn't the first RTS game, but it was probably the first really successful one, the one that made the genre popular to this day. However… only one can be the king of RTS. And to me, the king is C&C: Red Alert. And do I have a good reason for it, besides being a living-in-the-past guy? Well, I think I do.



Yin and Yang

Not so much the last few years anymore, but I have played quite some RTS games. And you know what? I didn't like most of them. Mainly because the word "strategy" can be replaced with "make a billion tanks ASAP". There is no thinking in most games. It's more like a giga-multi-tasking fest. You, you & you - go dig gold here, make 100 strong tanks, repair that building, call an airstrike, deflect the enemy attack at the north side, send 100 tanks to the enemy base, et cetera. If women are truly better at multi-tasking, they should love RTS games.

C&C was also about doing lots of things quickly, but on a somewhat slower, more manageable level. Better balanced, more thinking, less massive. Especially in Red Alert, if you did well, you could overrun an enemy base with minimum casualties on your side. The trick is to find a weak spot, uncovered by Tesla Coils, anti-air cannons or turrets. Sneak in with spies or engineers, capture the right buildings, and destroy the base from the inside out. It also paid off to deploy the right units. Soldiers are good against other soldiers, tanks aren't. Yet tanks can punch through walls or defensive structures such as turrets or towers. Artillery can inflict heavy damage but needs to be protected. Boats can beat up a base from a safe distance, yet you'll need to construct a shipyard & sweep the canal for submarines first. And powerful air support only becomes an option after taking out the anti-air units, as planes are too expensive to mess around with.

(Almost) every unit in Red Alert was useful, from the cheapest soldier to the most expensive battleship or tank. Each defensive structure and building has its purpose. This is because the game-rules and balancing were done very well. There were "hacking" programs that allowed you to alter the C&C unit properties. You could make tanks more powerful, planes faster, or give soldiers long-range super-lasers. In the beginning, it always annoyed me that it took ages to take out 1 soldier with a tank, and that the soldier fire-range was the same as a Super Soaker's. So, we pulled the sliders and mangled some values… and… the gameplay was gone. Fun to see mega airstrikes of course, but it all just felt wrong. Too easy. Winning was now just about making enough units of type X.
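Red Alert itself actually made that kind of tampering pretty easy, by the way. If I remember correctly, dropping a plain-text RULES.INI next to the game executable makes it override the built-in stats. A from-memory sketch (section and key names roughly as I recall them, values purely hypothetical) that "fixes" my Super Soaker complaint:

    ; RULES.INI - Red Alert picks this up from its game folder
    ; (keys quoted from memory; treat the values as a hypothetical tweak)
    [E1]              ; the basic rifle soldier
    Cost=50           ; half price (normally around 100 credits)

    [M1Carbine]       ; his rifle
    Range=9           ; default was somewhere around 5 cells
    Damage=40         ; one burst drops another soldier

Do something like this, and suddenly a handful of dirt-cheap riflemen mows down everything on the map. Fun for five minutes, and then you realize the default numbers were chosen the way they were for a very good reason.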

Unlike the virtual boxes we download today, old game boxes had charm. When you saw such a kick-ass box with 2(!) CD-ROMs, you just knew magic was inside.


Red Alert had its game-rules tuned perfectly. The prices, the speed, the fire-ranges, the damage, the everything. Duh, of course. Of course? Looking at many of those other games I mentioned, that is certainly not always the case. Half of them are too chaotic / random, and the other half is too complicated. In a game like Total Annihilation, it really doesn't matter if you attack the enemy base via West or East, with tin-can robots, spider-tanks or missile planes. BIG numbers, that's the only thing that counts (or a few Big Berthas). I'd hate to send 50 planes kamikaze-style into the enemy's base, but there is just no other way to beat the enemy here. So after a while, you don't care anymore about your units. And since everything is so massive and fast, you don't get a chance to execute well-coordinated attacks with small groups of specialized units. As soon as you can make stronger boats/planes/vehicles/Big Berthas, you'll forget about the smaller ones.

The other half - and then I'm especially talking about the WW2-style RTS games - were too complicated for my taste. Not a whole lot of units, but many controls, too many tiny things to remember, and too many ways to screw up. "Luckily" you'll get a whole lot of dialogs and hints, telling you exactly what to do. But without those hints and voices, I really have no idea whether it's better to send 3 bazooka guys to that tank, or to have them lie down. Either it all feels very scripted, or there is too much coincidence, timing and variation to predict your chances. It's very hard to measure the actual gain of doing this or that. In Red Alert on the other hand, after a while, it becomes very clear that it takes 5 units of type X to perform task Y. That sounds a bit artificial, but it's actually nice to get good at a game because you actually understand how it works. I prefer clear rules over "randomness".
After some exercise, you would know exactly what a group of 10 paratroopers could and couldn't do.


Back to the Soviet future

But having the rules set up correctly isn't the only reason why Red Alert is still enjoyable today. It's just… It's just… it has the P from POW! Don't know how to explain it. A few hours ago I finished Red Alert 2 (which can be downloaded for free - legally - via EA btw!). And though it was nice, it doesn't come close to its older brother in my opinion. Not because of bad rules - RA2 had them pretty right as well. So where's the catch?

First of all, I liked the RA graphics and style more. It's not beautiful, but it's *clear*. With my bad eyes, I can clearly distinguish a Yak fighter plane from a V2 rocket truck. Red Alert 2 however has more futuristic, metallic, shiny, weirdly shaped mobiles. And to make it worse, as RTS battles got more massive, the games also zoomed out, making the puppets tiny. In later 3D RTS games you can zoom in though, that's true. But let me tell you one thing about 3D RTS games: I never liked their graphics either. Certainly not the early ones. Early 3D graphics were very limited, so a tank usually was just a milk carton with an ugly low-res texture on it. Can't blame them with 200 tanks on the screen, but nonetheless the hand-drawn sprites just looked better. Obviously we can do a lot more these days, but RTS games still stay behind the high-quality physics and visuals you got used to in First Person Shooters. Soldiers run as if they shat their pants, and die without ragdoll physics. Not impressed.

What I also disliked a bit about RA2, C&C Tiberian Sun, and pretty much all other C&C titles that came after, was their futuristic setting. The first C&C and Red Alert had science-fiction elements too, of course. Orca helicopters, laser towers, chronospheres. But subtle. The majority of units were based on somewhat real military hardware, and in the case of Red Alert, with a big wink to Soviet toys. The more Sci-Fi elements were used as advanced, powerful weapons. A tasty combination. But in those later C&C titles and RA2, Sci-Fi really took over, and I just missed good old bombers, artillery cannons and guys with normal machine guns.

Less Sci-Fi than the first Command & Conquer, but still a weird mix between outdated Soviet hardware and crazy futuristic technology such as the beloved "Tesla Coil".


Kaboom!

All right, that's a matter of taste maybe. But I want to emphasize that clear sprites sometimes just work better than a chaotic mess of colourful laser-shooting "things". Speaking about taste, one other reason to love Red Alert was the sound. I mentioned "POW" and "Punch" before. And I think that's the combination of big-ass sprites, heavy guitar music, rattling machine guns, and death-screams. Of course, every (RTS) game has that. But some just have it better than others. And RA had it really well. Modern games are ashamed to have a good background music track, and leave it at some threatening ambience and classical march music. It's all so goddamn serious these days. Where are the drums and electric guitars? Where is the Hell March?!

And as for the sound effects: guns often sound like popcorn. In Red Alert, guns sound like big guns with bass. As if they're shooting 20 inch solid lead pipes all the time. Here, take that, you dumb bunker. Barrels explode in huge flames, and a dull "bum - bum" from a Cruiser ship meant serious trouble. And when a man dies in RA, he screams like a man. Not like a stumbling figure skater.



Do the math. My eyes were pleased. My ears were pleased. My tactical brain was pleased. The right concoction. Battles not too huge, sounds of battle not too quiet. Combat not too slow, choices of life and death not too hasty. Packed in more than thirty missions, as well as skirmish missions if you can't get enough. AND… of course, interspersed with movies where a real-life guy played Stalin.