Tuesday, January 25, 2011

Mega-Structures; Geometry Shaders #1

"Some American scientist claims that using rap makes a good way to remember and understand mathematical formulas"...
Yo yo, Pythagoras in da house. F%ck you b!tch, MC^2 on the mic, spitting square roots that make you poop in your boots. What comes up, must go down, I'll bombard you and your homies with g=(m1 + m2) / r^2. I don't talk algebra operators, I shout integrators. Respect, subtract, smoke my crack, Blaise Pascal out.

Yeah, that would teach them... Oh, pardon me, that was the first thing going through my mind when I heard this little news item on the radio. All right, let's rap something else: Geometry Shaders. Have a moment for Snoop GPU, Easy E++, Dizzy Pascal, dirty old Register, and Dr. Hashpipe.

Geometry Shaders? Who what where?
"A Geometry Shader can generate, adjust or discard primitives(triangles, dots, lines, ...)"

Even if you've never touched a keyboard, you've probably heard of "Ssshaders": vertex and fragment (pixel) shaders. What do they do? They are like tiny programs running on the videocard's GPU, telling it how to compute its output: vertex coordinates & pixel colors. Vertex shaders can move the vertex / UV coordinates. Fragment shaders calculate the pixel colors, usually based on quick & dirty lighting physics that approximate the real thing. Some practical examples (from the streets yo):
Vertex shaders:
- Transform 3D coordinates to screen-space / eye-space / world-space / cyber-space
- Animations; apply matrices from 1 or more bones to each vertex ("skinning")
- Water ripple physics
- Displacement mapping (read a value from a texture such as a heightMap)
- GPU cloth / particle physics

Fragment shaders:
- Diffuse, specular, ambient and emissive lighting computations
- ShadowMapping
- Reflection / refraction computations
- Cel-shading (cartoon look)
- Drawing specialized textures required for other techniques (a depth buffer for example)

Well, if you've written some shaders, that's nothing new. Hey, they have already existed for about 10 years now! The word "next-gen" can be replaced by "common-tech" again; time flies. The more recent "Geometry Shader" program isn't really that new either anymore. It appeared somewhere in 2007 or 2008, I believe. How come I didn't notice it back then? Hmmm, let's just say its usage is pretty limited in most cases.

The “GS” is an optional third program that can be executed after the vertex shader. Unlike the vertex shader, the GS does not take single vertex points, but complete primitives as input. Primitives? You know, points, lines, triangles; the stuff you pass with a glBegin() command. So, when working with triangles you get arrays that contain 3 positions/texcoords/tangents or whatever you pass on from the vertex shader.

The GS passes this data further on to the rasterizer (and then the pixel shader). But you want to perform some custom actions here, right? As said, you can decide here whether to "emit" a primitive or not, so you could perform a sort of culling here. Of course, you can adjust all the coordinates before passing them on as well. But even more spectacular is the ability to generate new primitives. When processing a simple line as input, you could break it up into 10 pieces and bend it into a curved line. It's kind of like an advanced vertex shader, with more input data and a flexible output. And just like in any other shader, you can use custom parameters, including texture reads. You could for example render a terrain with rather large quads, then let the GS subdivide each quad based on the camera distance, using a heightMap texture to apply the correct height value for each new vertex.
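To make that per-primitive expansion concrete, here's a tiny CPU-side Python sketch (not shader code, and all names are made up just for illustration): one line primitive goes in, a whole strip of interpolated points comes out, like a GS emitting extra vertices.

```python
def subdivide_line(p0, p1, pieces=10):
    """Break one input line primitive into 'pieces' segments,
    like a Geometry Shader emitting extra vertices."""
    points = []
    for i in range(pieces + 1):
        t = i / pieces
        # Plain linear interpolation; a real GS could also bend
        # or displace each new point here (curves, heightMaps, ...)
        points.append(tuple(a + (b - a) * t for a, b in zip(p0, p1)))
    return points

# One line in, 11 points (10 segments) out
pts = subdivide_line((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 10)
print(len(pts))  # → 11
```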
Some applications:
- Handling sprite/particle clouds
- Tessellation / Level Of Detail
- Beziers, curves
- Fur / fins / grass
- Generating silhouettes (stencil shadows, light shafts, …)
- Render cubeMaps in one pass (this is why I got interested finally)

WIP, Work-in-Progress. Very simple 3D geometry, lots of stuff missing, not very special so far... except the fact that all textures so far are made by our own man Julio, instead of being stolen from somewhere. Oh, did it use Geometry Shaders? No, but since lots of objects here reflect the environment, a single-pass cubeMap technique can boost the performance pretty well...

Simple demo Code?!
Let's do an example with Cg shaders. Oh, GLSL and some other shader languages support the GS as well of course, as long as you have a card that supports Shader Model 4 or higher (I believe). In Cg, creating a GS program is pretty much the same as creating and using vertex or pixel programs:

// Setup
gpProfile := cgGLGetLatestProfile( CG_GL_GEOMETRY );
prog      := cgCreateProgram( cgContext.context, CG_SOURCE, pchar(code),
                              gpProfile, 'main', pArgs );
// Use it
cgGLBindProgram( prog );
cgGLEnableProfile( gpProfile );
// --> render something

Now let's find a good & simple application for a Geometry Shader... Uh... that's difficult. Maybe the GS isn't that useful yet... Wait, how about an electro / lightning bolt? There are more ambitious demos, but for those I'd like to refer to the nVidia SDKs (check OpenGL SDK 10 for example). All right, here's the idea:
- Render 1 line (glBegin( GL_LINES ), 2 points)
- Let the shader break it up into 20 pieces, then apply random offsets to each sub-point to make a sort of zig-zag.
The CPU can do that just as well, I know. But it's just to demonstrate how a GS program looks, smells, and works.

LINE void main(
    // Input arrays
    AttribArray<float3> position  : POSITION,   // Coordinates
    AttribArray<float2> texCoord0 : TEXCOORD0,  // Custom stuff. Just pass that

    uniform sampler2D noiseTexture,
    uniform float     boltMadnessFactor )       // Pass params as usual
{
    // The bolt is just a simple line with 2 points:
    // point 1 is the origin, point 2 is the target (impact point).
    // Now apply "The Zig-zag Man"
    const int steps = 20;   // You could use distance LOD here, although that
                            // won't be needed for huge lightning bolts
    float3 beginPos  = position[0]; // Get input for simplicity
    float3 targetPos = position[1];
    float2 beginTX   = texCoord0[0];
    float2 targetTX  = texCoord0[1];
    // Interpolation values
    float3 deltaPos = (targetPos - beginPos) / steps;
    float2 deltaTX  = (targetTX  - beginTX ) / steps;

    // Output first (begin) point, don't modify it.
    emitVertex( beginPos : POSITION,
                beginTX  : TEXCOORD0 );

    // Generate the 19 random points in between
    for (int i = 1; i < steps; i++)
    {
        // Interpolate position and texcoord custom data
        float3 newPos = beginPos + deltaPos * i;
        float2 newTX  = beginTX  + deltaTX  * i;

        // Just pick some random value from our helper texture,
        // then use it to modify the position.
        // Reduce noise strength as the bolt approaches the target.
        float3 randomFac = (2 * tex2D( noiseTexture, newTX ).rgb - 1)
                           * boltMadnessFactor * (steps - i);
        newPos += randomFac;

        // Generate extra line point
        emitVertex( newPos : POSITION,
                    newTX  : TEXCOORD0 );
    } // for i

    // Output last (target) point, don't modify it.
    emitVertex( targetPos : POSITION,
                targetTX  : TEXCOORD0 );

    // Of course this shader sucks. In fact, I didn't even try it :#
    // But it roughly shows what you can do here.
} // Roger out
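Since the shader above is untested anyway, here's the same zig-zag logic as a plain Python CPU sketch, just to check the idea holds up (random.uniform stands in for the noise texture read; all names here are made up):

```python
import random

def make_bolt(begin, target, steps=20, madness=0.15, seed=42):
    """CPU version of the zig-zag GS: emit the begin point, then the
    in-between points with decreasing random offsets, then the target."""
    rng = random.Random(seed)
    delta = [(t - b) / steps for b, t in zip(begin, target)]
    pts = [tuple(begin)]
    for i in range(1, steps):
        p = [b + d * i for b, d in zip(begin, delta)]
        # Noise shrinks as we approach the target, like (steps - i) in the shader
        fade = madness * (steps - i) / steps
        pts.append(tuple(c + rng.uniform(-fade, fade) for c in p))
    pts.append(tuple(target))
    return pts

bolt = make_bolt((0.0, 0.0, 0.0), (0.0, 0.0, 20.0))
print(len(bolt))  # 21 points: begin + 19 in-between + target
```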

Still messing around with SyntaxHighlighter. It reminds me why I hate HTML and such. How do you shrink the gap between lines?! Why do the tabs suck so much?!

The next part of this Geometry Shader adventure will follow in two weeks; more about rendering stuff in a single pass then. One of the best reasons to use that weird Geometry Shader :)

Oh, for all us forgotten Delphi souls: can someone explain where to upload a small code file so it can be downloaded here? The last Cg DLL headers for Pascal I could find were for Cg 1.5, written by Alexey Barkovoy in 2006. So I updated them a little bit. Warning, I haven't tested the whole thing though!!!

Another WIP. Same stuff, different light.

Monday, January 17, 2011

Turd in the punchbowl

Rest in peace, Major Richard "Dick" Winters. Now I didn't know this guy personally of course, but the impressive "Band of Brothers" series showed the actions of Easy Company during WW II pretty well. Yes, aside from taking part in a couple of key battles, they really entered the Eagle's Nest, discovered a concentration camp, and had that little soap opera with Capt. Sobel (the real one, not that guy from Friends). War, isn't it romantic? Well...

Maybe you couldn't care less, but here's another opinion from a computer geek about Julian Assange & Wikileaks. I saw some coverage on a Belgian TV channel, so the type machine in my head started ticking.

Before starting, once again you and I should be happy we are free to write in a place like this. In my particular case, I might need to thank Easy Company for that as well (Operation Market Garden wasn't that far from here). Sure, maybe the Pentagon is watching, but so far I'm not really limited when being critical, or making jokes about Jesus balls. Wait, what is that car with the blinded windows doing outside here all day?

I haven't read through all those leaked documents, only heard a few on the radio. Was it world-shocking? No, not really. I found the American opinions about other politicians & leaders in this world rather amusing. Especially "Batman & Robin", Putin and Medvedev, was striking :)
Come on, EVERYONE talks shit about others. And so do countries and their leaders. Was it really necessary to leak that info? I mean, you can claim you have nothing to hide. But if some friend hacked your SMS and email accounts, I'm damn sure they'd find little secrets or nasty things you said about friends, your boss, your girl, your penis or whatever. Spreading such info is... a little bit childish. We want transparency, yet privacy at the same time. That collides…

More serious are videos like the one of the U.S. Apache above Baghdad, of course. What you see there is horrifying. A pilot pulls the trigger, tearing apart a group of "terrorists". Even more shocking is the radio communication. The other guy on the line permits him to fire, without even asking or hesitating. As if they simply had to sweep a dirty floor... Do these guys have beards? Well, let the 30 mm hollow points spit already. However, such images were already available before Wikileaks. And sure, they are 100% wrong. But does it surprise? It's a damn war there. Or did you think everybody played by the rules in the romantic 40's? War is ugly by definition. And when seeing documentaries like "Restrepo" on National Geographic, it doesn't surprise me at all that some flip out completely. Is that an excuse? Certainly not, but we shouldn't act as if we never expected this either. That's just naive, stupid, or lying.

Another leak. Parts of the Player(first concept). Should we be transparent and throw all the game info on the table right now? Or is it better to keep it hidden for now?

Julian... good or wrong? Internet hero of freedom, or a turd in the punchbowl (South Park), trying to wreak havoc? First, let's forget for a moment about this Assange guy. Whether he is a girl-raping devil or Mickey Mouse, it doesn't change the fact that sensitive documents were leaked, nor their contents. The goal of the Wikileaks organization (and the likes) is not just to share information. It's trying to force transparent politics, “true democracy”.

Yeah, one can't deny a lot of mysterious stuff is going on. I'm not the type of guy who believes 9/11 was an inside job, or who clings to conspiracies that also just rely on a few vague "sources". If you can't trust the government, then why trust "ctrlAltDel H@ckEr '86"? But you shouldn't close your eyes either when Michael Moore has something to say. Shit happens, all over the place. And it's not just America and its war on terrorism. North Korea & death camps, China & censoring, Holland & billion-costing projects, uniting European countries, justice going wrong because of “mistakes”, the Bilderberg group, pharmaceutical companies, oil concerns, Africa & corruption. And all the stories about Soviet/Communist regimes we are reading for Tower22 don't cheer up the vision of mankind either.

Does money & power turn people into selfish beasts? Or are only the selfish beasts attracted to (political) positions that provide money & power in the first place? I dare say many politicians have their own agendas, and surely not all for the good. If the system were forced to make its plans and ideals transparent, wouldn't that make a better world in the longer term? I think so, really. We are not talking about cheating a little bit while playing Super Mario Kart. We are talking about serious issues, affecting millions of people.

Off-topic. With reading juicy background info about the Soviet era and such, this is what I mean. Finding reference websites is part of the job to give drawers & modelers happy inspiration. This cozy town is somewhere in Siberia. Yes, even the weather sucks there.

But as with most of these almost-utopian ideals, the world doesn't work that way. Or at least, not yet. What if the Western world suddenly says "Sorry Julian. Here's a lollipop, and from now on we share everything". Would he be happy, or is this just a personal crusade against the U.S.A.? Anyway, information and knowledge is a powerful thing. Not only for us voters who believe in democracy; also for "the enemy". And I'm not talking about a bunch of “Allahu Akbar!!” yelling, running bombs. Do you really think other important players like Russia, China, Iran or North Korea would sign this "be cool, be transparent" contract? Or how about more local criminals? Tribes, gangs, mafia, extremists, drug cartels?

Great goods like freedom of speech, voting, social support, or human rights are also our fragile weaknesses, as they are easy to abuse. Why not kick the balls if you don't have to play by the rules anyway? Other regimes, criminals, and extremists know that very well. You can try to be the best kid in class all the time, but you'll have to play hard if you want to win. If everyone uses steroids in the Tour de France, there is not much else you can do except become one of them.

Politicians master this game, and know when to cheat way better than we normal people, who don't care too much about money & power. Then who is right in the end? I'm afraid no one is. A social worker with his heart in the right place can't run a profit-making business amid all the competition. Sure he/she wants to do it right, but a naïve attitude in the wrong place puts the whole company in danger.

Sharing all our information is like disarming the whole world of nuclear weapons, except North Korea; it only works if EVERYONE plays according to the rules. Which is clearly not going to happen anytime soon.

So, is leaking all wrong then? No, I guess not. Look, this whole thing started the discussion about our ethics, politics and global roles on this planet. And for you and me, it's yet another showcase not to follow everything blindly. Always think for yourself. You have to start somewhere if you want to make a revolution. Maybe a more subtle way would have been wise, although... people don't tend to change old rusty habits unless they are confronted with drastic news. Tell a smoker that he will die if he doesn't stop, and he will laugh. Tell us we screw up the environment, and we will be laconic. Show some soldiers shooting an injured man, and we will say "Oh well, war is hell". It takes some dead relatives to set the smoker in motion. It takes a big (overdone) movie from Al Gore to start those brains thinking about the environment. And it might just as well take a massive leak to start questioning our governments and "democracy". Small doses don't help, as we like to look away from problems. NIMBY, Not In My Backyard.

NIMBY? What strikes me in many "Soviet inspiration" pictures is that EVERYTHING is left behind as if… Godzilla suddenly appeared on the scene. For example, why not have a submarine in your backyard?

Good or wrong...? I can't choose; it's not black and white. Personally I think Wikileaks and the likes should think harder about the possible consequences. What if Batman & Robin decide to break contact with the Western world? Does that make us all better? What if Afghan helpers get found and slaughtered by the Taliban? What if a crazy bored fool gets the layout of a nuclear power plant in your country? What if...? One shouldn't live in fear, but thinking that people won't hurt each other is naive.

But ok, the leaking already happened. Whether you agree or not, let's focus on the future. It would be nice if America didn't respond too harshly to this whole WikiLeaks thing. Dangerous or not, learn something from it. And not how to hide information better the next time, but to stop lying and to prevent "incidents" like Abu Ghraib or pilots playing Call of Duty. Don't shoot the messenger (literally, in the case of Julian, who probably has some sleepless nights). The same lesson goes for all the other leaders. Take responsibility for your deeds.

But to be honest, I'm afraid that's just wishful thinking. Business has to go on, with all the related unethical aspects. Therefore I tend not to care too much about it. Just make the best of your own life and help the ones who come along your path.

Sunday, January 9, 2011

Take me to Screen-Space land

- As promised last week, a techno-post here. Next week it will be less technical again. -

While making a new real-time GI technique, I ran into SSAO & SSGI again. They sound like a sexual disease and a new crime series, but actually they are screen effects to enhance (fake) ambient lighting / occlusion:
- SSAO = Screen-Space-Ambient-Occlusion
- SSGI = Screen-Space-Global-Illumination

Programmers here probably know SSAO already. Crytek showed this technique in their first Crysis game, a couple of years ago. Hell, time flies. In short, SSAO is a (post-)effect that approximates the occlusion per pixel by using other pixel data from the rendered screen. The idea is as follows: the more nearby/surrounding objects, the less (ambient) light a pixel catches. This is somewhat true. Look around your room and you may notice the corners or the space below cabinets are darker, because less light reaches there of course.

Well, it all depends on where the light is falling in from. But as graphics programmers probably know, it's near impossible to compute where indirect light comes from in real-time. SSAO is another cheap hack to approximate this effect without actually knowing shit about light. Although cheap... even today it requires quite a lot of power to compute decent-looking SSAO, while the effect isn't that big. Which is why I had to take a look at my existing SSAO shader again. And as some noticed on Youtube, the SSAO effect caused an ugly black halo around objects.

Nice thing about screen-space techniques is that the complexity of the scene doesn't matter at all. 1 box or 600 planets, the pixel count stays the same.

How does it work? Pretty simple, although the shader requires a few error-sensitive tricks. For each pixel on your screen:

- Calculate where it is (reconstruct the position via depth, or store positions in another deferred buffer)
- Take n samples in a circle around the source pixel. Check if these neighbor pixels occlude the source pixel by comparing depths/positions, and possibly normals.
- Take the average of all samples
- You can use a (Gaussian) blur pass at the end to smooth the results
- In the end, multiply the grayscale SSAO texture with the (ambient) scene.

Since the number of samples is limited, 16 or so, you only have a relatively low number of references. Don't make the sample circle too large (which is why SSAO only works locally, in corners and such), and use a "dither" or noise texture to vary the sample coordinates for each pixel. Some pixels sample nearby, others use a somewhat bigger range. This leads to varying pixel results, but a blur can smooth that away.
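One way such a dithered sample kernel can be set up, sketched in Python (the helper names and the exact rotate-and-scale scheme are my own assumptions, just to show the principle): a fixed ring of directions, rotated and scaled per pixel by a noise value.

```python
import math

def make_kernel(n=16):
    """Fixed ring of 2D sample directions for the SSAO loop."""
    return [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]

def dithered_offsets(kernel, noise, radius):
    """Rotate & scale the kernel per pixel using a noise value in [0,1),
    so neighbouring pixels sample different spots (blurred away later)."""
    ang = noise * 2 * math.pi
    c, s = math.cos(ang), math.sin(ang)
    scale = radius * (0.5 + 0.5 * noise)  # some pixels sample nearby, others wider
    return [((x * c - y * s) * scale, (x * s + y * c) * scale) for x, y in kernel]

k = make_kernel()
print(len(dithered_offsets(k, 0.37, radius=8.0)))  # 16 offsets
```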

Besides taking the proper sample coordinates, the difficult part is deciding which neighbor pixels occlude. I've seen several implementations, but in my case they always led to weird results: half-grayish walls when rotating the camera, or darkened or highlighted edges everywhere. With the wrong comparisons, SSAO quickly looks like an ordinary edge-detector effect, while it shouldn't. So instead of lazily copying shaders from others, I gave it a try myself this time:

for (int i = 0; i < 16; i++) // fewer samples are possible
{
    // Create neighbour sample texcoords.
    // w and h depend on a variable sampleRadius, screenSize and distance from camera
    half2 tx = half2( iTex.x + sampleDir[i].x * w, iTex.y + sampleDir[i].y * h );
    // Get neighbour data
    half4 nbPix = tex2D( posTex, tx ).xyzw; // get WORLD position & depth(w)
    half3 nbNrm = tex2D( nrmTex, tx ).xyz;  // get WORLD normal

    // Occlude if:
    // - Neighbour pixel is not too far away
    // - Direction between the 2 pixels can affect the sourcePixel normal
    half3 dir  = nbPix.xyz - srcPix.xyz;
    half  dist = length( dir );
    dir = normalize( dir );
    half shineFac = saturate( dot( dir, srcNrm ) ); // Prevents self occlusion; compare with source normal

    half ao = shineFac * saturate( (nbPix.w - srcPix.w + maxPixDist) * 10000 ); // Discard pixels that are too far away
    aoSum  += ao;
} // for i
ao = 1 - (aoSum / 16);

It uses the deferred render buffers as input instead of depth reconstructions. Simple, just like my brain. Probably not the fastest way around, but it works pretty well, also on background buildings. It prevents self-occlusion and foreground objects mixing with background stuff. Surfaces can still self-occlude via their normalMaps though, although that effect is barely visible in the end result. But if you get it for free, why not: bullet-hole decals that affect the normalMap will create darkening for example, pretty neat.

Not implemented here yet, but I always have fights with the skybox, as that area doesn't have a position, normal, or depth by default. By rendering an extremely high depth in the position or depth buffer (glClearColor( much )), it will be skipped here now though. You can also abort the shader right away when the source pixel depth is that high, as you don't have to process the skybox. In outdoor areas that can save up to 50% of the calculations!
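The early-out trick, sketched in Python (SKY_DEPTH is just an assumed clear value, and sample_fn stands in for the whole 16-sample loop body):

```python
SKY_DEPTH = 1.0e9  # depth/position buffer cleared to a huge value

def ssao_for_pixel(src_depth, sample_fn, n=16):
    """Skip the whole sample loop for skybox pixels; they carry the clear depth."""
    if src_depth >= SKY_DEPTH:
        return 1.0  # fully unoccluded, and 16 texture reads saved
    occ = sum(sample_fn(i) for i in range(n))
    return 1.0 - occ / n

print(ssao_for_pixel(SKY_DEPTH, lambda i: 1.0))  # 1.0, the loop never runs
```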

Anyway, what I really wanted to share was that other technique: Screen-Space Global Illumination, used to spread light to create "color bleeding". No, that won't be my top-secret next take on real-time G.I., but it *might* be useful to complete it, just like SSAO. Due to limitations, the few "real-time G.I." solutions available so far, including the Crytek LPV one, still compute the indirect light distribution on a rough, inaccurate scale. To deal with the small details, SSAO and SSGI can be used. Crytek for example uses SSGI to approximate G.I. for background scenery that falls outside the LPV workspace (3D volume textures around the camera).

So... what is SSGI then? If you can compute occlusion by looking at neighbor pixels, then why not use them to reflect (direct) light as well? Hey, another nice usage of the Inferred Rendering pipeline approach, where we produce diffuse & specular light screen textures. Just copy the SSAO shader, and in addition read the diffuse value from the neighbor pixels. I also read the reflectance & emissive values from a second texture. Those colors roughly represent the outgoing light of a neighbor pixel. Now we only have to test if it reaches the source pixel... yep, same stuff as the formula above, but with a small addition:

half3 giCol = tex2D( diffuseTex , tx ).rgb * 0.5f + tex2D( additiveTex, tx ).rgb;
giCol *= shineFac * saturate( dot(nbNrm, -dir) );
giSum += giCol * saturate( (nbPix.w - srcPix.w + maxPixDist )*10000 );

You can simply add these lines to the loop so you calculate AO and SSGI at the same time. Oh, and the brighter the G.I., the less ambient occlusion should occur of course. You can simply lerp between the two, based on the luminance of the G.I. result.
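That blend could look like this (Python sketch; the Rec.601 luminance weights are standard, but the exact lerp is my own assumption of what "fade the AO by GI brightness" means):

```python
def luminance(rgb):
    """Standard Rec.601 luminance weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def combine_ao_gi(ao, gi_rgb):
    """The brighter the gathered GI, the less the AO darkening should count."""
    lum = min(1.0, luminance(gi_rgb))
    ao_blended = ao + (1.0 - ao) * lum   # lerp(ao, 1.0, lum)
    return ao_blended, gi_rgb

ao, gi = combine_ao_gi(0.4, (1.0, 1.0, 1.0))
print(ao)  # ~1.0: full-strength GI cancels the occlusion darkening
```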

Direct light falls on the ground here, then the surrounding walls / objects pick it up again. Without any G.I., the backsides of the boxes would be pitch black. Also, the emissive monitor creates a blur.

Life can be so simple. But does it really work? Hmmm... well... three problems. First of all, it makes the already expensive SSAO shader even nastier. Second, the SSGI effect is, just like SSAO, only very local. Again, you barely see it unless it's applied to really brightly colored objects such as a computer monitor or a bright green plastic wall.

The third problem is the product of problems one and two. To make the effect more noticeable (worth the additional cost), SSAO and SSGI should use different sample radiuses. The bigger the circle, the wider the light spreads (or actually, is gathered) of course. But that doesn't work too well with SSAO, unless you like blurry crap. So, the only proper solution I could think of was to put the SSGI in a separate loop that uses a wider sampling range around the source pixel. And that requires even more horsepower, for just a small effect. Is it worth it? Mehh, if you target somewhat older hardware: NO.

In a scenario like this SSGI helps (though a cubeMap could do as well). But how often do you see things like this?

Just when I had pushed the speed to 70 FPS (30 on my older card), SSGI is wreaking havoc again. Currently SSAO & SSGI are done in the same pass, on a buffer half the size of the screen. What I could do is move SSGI to a separate pass on a 1/4-sized buffer. Less quality, but then again SSGI allows more blurring & smearing than SSAO does, I think. Didn't try it yet though.

And that boys & girls, was probably the most technical piece of text I ever wrote.

Tuesday, January 4, 2011

In the year 2011

And a healthy 2011! I already made a good start, producing magical shader/particle effects above the toilet on the morning of 1 January. Nah, I didn't make any promises. Nothing is going to change when it comes to bad habits, as I actually love my bad habits too much to give them up already. Although taking up jogging might be a good one. I used to run ~6 kilometers about three times a week to a place where my friends gathered... to end up with a beer and a cigarette; very healthy. Nevertheless, after two years I finally had that Mike & Jim Ab-Pro belly. Leading to a girlfriend, leading to getting well fed by her, leading to getting lazy as you don't have to score a girl anymore, leading to gaining 25 kilograms again :) Oh, when she complains about that, I have some words to defend myself:
"At least you don't have to be jealous & nervous for the competition of other girls, as they don't look at me anymore"
1-0 for the Fatman.

Well, 2010 was a bumpy year for many. Quite a few big disasters here and there; a record when it comes to the death toll actually. Leslie Nielsen died, North Korea barks again, and half of the Polish government died in a plane crash at an already terrible place. The economic crisis saga continues, eating jobs along with it. Cancer took the lives of a couple of beloved ones, and Holland took a relatively drastic turn in the political spectrum. Even worse, it seems the iPhone alarm didn't work on 1 January. Thank God I'm still using an old Prince of Bel-Air buzzer. My phone isn't even capable of calling, let alone running a real-time clock!

For me personally, 2010 was an easy ride. Nothing really changed, except that our little girl learned how to talk my ears off. When it comes to game development, a few milestones were set though. This blog is exactly one year old now, but the desire to make a game was already there after playing Doom2 when I was 11 years old. I've been trying to do something for years and years, but so far I never really shared it with another person. I don't like to enter the spotlights, so starting a blog to announce something was quite a big step.

Sooner than I expected, this blog got the attention of a few readers. Thank you for that! Getting noticed and receiving some feedback is what makes it all worth it. Sure, I know sites like these won't reach Perez Hilton statistics, but really, it boosts the enthusiasm and devotion for this project. In fact, it triggered me to make a very first movie of a real project: "Tower 22". Placing that movie on Youtube and getting all the positive feedback was something I could only dream of when 2010 started. Who the hell would be interested in the programming attempts of yet another fool on the internet?

As a cherry on the pie, and what I expected least, was forming a small team in 2010 as well. Initially I intended to delay such a request for help until I really had something awesome to show. But hey, what the heck. I've been way too passive about this whole game-programming thing for too long; just go for it! Wait too long and… as Acda & de Munnik sing:
“van al zijn jongensdromen was alleen het oud worden behaald”
Which means something like “of all his boyhood dreams, only ‘getting old’ was achieved”.

And so the prayers were answered. Not by my 14-year-old nephew who would like to help after learning Java and MS Paint for 2 months, but by real creative people, some of them with actual experience in the (commercial) game biz.

Quickie by Julio. Making ideas for the environment in a second demo...

All in all, 2010 was a fabulous year for this project. And the nice thing about a blog is that you write your own history/diary as you go; this whole course has been documented so far. Nice for the grandchildren around the hearth in 30 years. But we have to look forward. There is no game yet. In fact, the whole thing has just begun! Besides implementing new techniques in the engine, 2011 will be the year where I hopefully learn how to instruct a small team. Hey, it's not easy to make fun & challenging assignments for five hungry men! Let's hope we as a team can lift this project to the next level:

Game & Team
- Player character (not that rusty box-robot)
- Making a more definite game-plan, including environment sketches
- Documenting those ideas in a private(sorry!) Wiki
- Second (and maybe third) demo movie with more gore, more atmosphere, and more advanced techniques
- Creating a real sound library instead of "borrowing" it from other sources
- Maybe looking for another modeler/mapper when Demo #2 releases

Engine
- Volumetric light-shafts & fog
- More lights & better shadows
- Geometry shaders
- Upgraded real-time G.I. (ambient lighting)
- FMOD sound library
- Upgraded AI module
- Upgraded Physics module, possibly with the help of a second programmer

As for this blog, a few things may change as well. As expected, the poll showed that most visitors would like to see some more technical specs. Not a surprise, although I won't turn this blog into Nehe, Humus3D or the likes; I still like to maintain the non-technical aspect as well. So, after some puzzling I came up with:

- 1 week a technical post, the other week a non or less technical post. So, you know when to skip a post ;)
- Can't guarantee a post every Sunday / Monday anymore. Man, I'm just too busy!
- But, hopefully some of the other team members can occasionally write about their sound / modeling / drawing or writing experiences. Meet the Creative side!
- Trying to make a few more short game-stories. Not revealing clues though.
- To make the technical info somewhat more accessible, a new “indexing” page that refers to older blog posts. Well, just have a look at the “blog index” page to see what I mean.

Next week I'll tell you something about either Geometry shaders or Light shafts. Well, a happy 2011 to all of you!