As explained in another post, the lack of progress is partially due to a busy summer. Lots of programming work for new prototype machines at work, plus the usual trouble-shooting when the harvesting season starts, swallowed many night and weekend hours. The remaining weekend hours were used to help friends with their "new" houses (read: having to rebuild the whole joint), birthdays (+ hangovers wasting the Sundays), and later on fixing our own garden. Got a metal ball + chain around my ankle, and had to dig sand the past month. Oh, and even though the new pavement tiles are finally placed, we still have to move a whole Himalaya of sand again, to its final container-destination this time. That's the problem of having a relatively small garden boxed in by a bunch of other houses; no space, you can't easily reach it, and you'll be moving the same pile of junk hundreds of times.
But you probably aren't interested in my summer struggles. On to the games then. Actually there has been some interesting progress, but it's secret. I could show you some cool shots, but that would spoil things... Let me lift the veil just a little bit then: a few monsters have been modeled, a player model is being sculpted, and one of the monsters has reached the "rigging & animation" stage.
And this is where the shoe pinches. I can assist quite well with environments, modeling and textures. Not that I'm very good at it, but I know how it works, and what I want. Animating, on the other hand, is foggy terrain for me. I did some Parkinson animations in Milkshape, but I barely knew what the word "rigging" meant. Never heard of it either? Well, rigging is preparing a model to be animated: the process of making a skeleton and binding the model to those bones and joints. When you rotate a joint, limbs and parts of the skin rotate/stretch with it. If the rig is anatomically correct, at least. So, a rigger first constructs a skeleton, optionally defining joint restrictions and Inverse/Forward Kinematics properties (how a chain of bones affects each other when one of them moves) to help the animator later on. Then he connects each model vertex to one or more bones.
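To make the bones-and-joints idea concrete, here is a minimal Forward Kinematics sketch in Python. The 2D bones and the `chain_positions` helper are invented for illustration; a real rig uses hierarchies of 4x4 matrices, not flat angle/length pairs.

```python
import math

# Minimal 2D forward-kinematics sketch: each bone stores a local
# rotation and a length; world positions are accumulated down the chain,
# so rotating a parent joint moves every child after it.

def chain_positions(bones):
    """bones: list of (local_angle_radians, length). Returns joint positions."""
    positions = [(0.0, 0.0)]
    angle, x, y = 0.0, 0.0, 0.0
    for local_angle, length in bones:
        angle += local_angle            # child inherits the parent's rotation
        x += math.cos(angle) * length
        y += math.sin(angle) * length
        positions.append((x, y))
    return positions

# Upper arm, forearm, hand: bend the "elbow" 90 degrees and the
# whole forearm + hand swings up with it.
arm = [(0.0, 2.0), (math.pi / 2, 1.5), (0.0, 0.5)]
print(chain_positions(arm))
```

This is the "chain of bones affecting each other" part; Inverse Kinematics goes the other way (given a target hand position, solve for the joint angles).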
Gunfire/smoke FX. Got to add "soft-particles" to mask the particles intersecting the (gun) geometry though.
Once the rig is done, an animator can start playing with the bones to create sequences such as walking, breathing, jumping, weight-lifting, whatever. The idea is pretty simple: for each bone you define some key-frames. That means, for example, that your left knee is placed at position XYZ at 0.5 seconds, then pitched 20 degrees at 0.7 seconds, and so on. The computer interpolates the movement and rotation between those key-frames as the clock ticks. Making a fluid animation is extremely difficult though. No wonder the 3D animations in many (cheaper) TV productions or kids' series look as if the character shat its pants. A T-Rex walking with the legs of a retired Scout Walker, humans with expressions either stiff or very overdone, and the speed and timing often far from realistic. The pros make use of Motion Capture studios, but we mortals still have to do an old-fashioned handjob. Moving the bones by hand on a computer, I mean. Besides, some of the T22 monsters don't exactly look like the average human anyway, so how would you Motion Capture that, sir? I hope stuff like this cool action doll gets affordable one day though.
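The knee example above can be sketched as a tiny key-frame sampler. A hedged Python illustration (the `sample` function and the numbers are mine; real animation systems interpolate rotations with quaternions, this just lerps one scalar channel):

```python
import bisect

# Key-frame interpolation sketch: given sorted (time, value) keys for
# one bone channel (say, knee pitch in degrees), linearly blend between
# the two surrounding keys as the clock ticks.

def sample(keys, t):
    """keys: sorted list of (time, value). Returns the value at time t."""
    times = [k[0] for k in keys]
    if t <= times[0]:
        return keys[0][1]           # clamp before the first key
    if t >= times[-1]:
        return keys[-1][1]          # clamp after the last key
    i = bisect.bisect_right(times, t)
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    f = (t - t0) / (t1 - t0)        # 0..1 between the two keys
    return v0 + (v1 - v0) * f

# Knee at 0 degrees at 0.5 s, pitched 20 degrees at 0.7 s.
knee = [(0.5, 0.0), (0.7, 20.0)]
print(sample(knee, 0.6))            # halfway -> 10.0
```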
Well, luckily we have a rigger (Stephen) and an animator (Antonio). Yet the process, flow and tools are kinda new, so it takes some time to acclimatize. One of the major issues (for me) is providing a good importer. Damn, I wasted quite some hours on that. You can talk crap about Milkshape, but at least its MS3D file format was well documented and very easy to integrate into your game. But leaving that boat, we had to find another file format. One that provides everything we need, but is also supported on a wide range of platforms. Joe uses Maya for animations, Timmy uses Blender, Muhammad digs Max, and so on --> we need an interchangeable format.
My first bet was on Collada, which was invented for exactly that purpose. But as so often with "standards", it ain't so standard after all: packages deliver different exports, importers are faulty, and the Collada format itself is so goddamn bloated you could export elephants with it. It tries to be everything, and so it became (too) difficult to quickly write a wrapper for it. Just loading meshes was fairly easy, but the animation part completely sucked in my opinion. Animations and their whole hierarchy are complicated enough already, and Collada makes it even worse by creating a whole forest of data-nodes, where I couldn't figure out all the meanings and relations. Maybe I'm impatient, but I just don't want to spend more than half a day writing a stupid file loader. I'm not interested in file formats, I'm interested in end results!
The (Autodesk) FBX format then. It seems to be supported by quite a lot of 3D packages as well, but I ran into the same problems. At first glimpse the textual file format seems pretty easy, and again loading a mesh was child's play. But the animation part was ruined by utterly weird matrix transformations and other additional math. Why?! I noticed Autodesk offers an FBX SDK though, so I tried that one. Just finding my way around their 700(!) MB maze of files, examples and other hoo-ha was frustrating. 700 megabytes of "stuff" for a freaking file format? Well, the SDK is more than just a file loader, but the sheer size isn't exactly helping you quickly figure out what you should be doing. I managed to make an FBX importer, but it's one of those programming tasks you want to get rid of and never look back at. Unfortunately, this is also typically the kind of tool that works for 10 files, and then crashes on the 11th because of some difference you didn't take into account. I hate file readers. Boring work, and too much research.
To make a long story short, we're kind of experimenting with the animations, rigs, and putting it all into the engine. And you're not quite done yet when it's loaded. GPU skinning is the common path, so you also have to write (compute) shaders to perform all the bone-matrix math, and eventually store the transformed vertices back into a secondary buffer so you can reuse them for multiple rendering passes. Hopefully it all works, but practice will tell. We sure didn't pick the easiest model to animate for a start! You'll hopefully see it soon in a demo movie!
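The bone-matrix math itself boils down to linear blend skinning: each vertex blends the transforms of the bones it is bound to, by weight. A hedged CPU sketch in Python (2D, translation-only "bones" to keep it short; a real skinning shader multiplies by 4x4 bone matrices times the inverse bind pose):

```python
# Linear blend skinning sketch: the per-vertex math a skinning compute
# shader would run. Each vertex carries (bone_index, weight) influences
# whose weights sum to 1; the skinned position is the weighted sum of
# each bone's transformed result.

def skin_vertex(position, influences, bone_offsets):
    """influences: list of (bone_index, weight).
    bone_offsets: per-bone (dx, dy) displacement for this frame."""
    x, y = 0.0, 0.0
    for bone, weight in influences:
        dx, dy = bone_offsets[bone]
        x += weight * (position[0] + dx)
        y += weight * (position[1] + dy)
    return (x, y)

# A vertex bound half to bone 0 (static) and half to bone 1 (moved up
# by 2) ends up halfway between the two results: smooth skin bending.
offsets = [(0.0, 0.0), (0.0, 2.0)]
print(skin_vertex((1.0, 0.0), [(0, 0.5), (1, 0.5)], offsets))  # -> (1.0, 1.0)
```

Writing those skinned results into a second vertex buffer is what lets the later rendering passes (depth, shadow, color) reuse them without redoing the math.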
There you go, soft particles.
ParticleEd, the Speaking Horse
Another development from the past weeks is our new ParticleEd ("Particle Editor"). Particles have been in the engine for a while, but they always made me nervous. Sometimes you just code things that feel like they could fall apart like a dusty mummy at any moment. Fragile porcelain, starving panda, Soviet-quality, bone-disease code. Several things were wrong or outdated about it. Though the particles were already handled on the GPU, I never liked the OpenGL "Transform Feedback" way of storing results back into a buffer. And the memory management wasn't 100% robust either (which is important for effects like particles that come and go all the time).
But even more important was the lack of a proper editor, and the way particle motion was defined. Visual effects such as particles are complicated because their behavior can vary a lot. Just think about how water drops in a fountain move compared to cigarette smoke or Dragon Ball Z Destructo-Discs. Though particles often just drop down thanks to the laws of gravity, the way they move, fade in, colorize, grow, shrink, collide or get their initial velocity can differ a lot. Puke particles fall straight into the toilet. A bird feather, on the other hand, falls down with a gentle "wave", possibly affected by a breeze, and certainly doesn't splatter on impact. Just so you know!
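To illustrate how much behavior can hide in a few parameters, here is a toy Python integrator (all constants and field names invented) where the same update step produces either a falling drop or a swaying feather:

```python
import math

# One particle update step, parameterized: heavy gravity with no sway
# gives a straight fall; weak gravity plus a sine-based sway gives a
# feather-like drift. Same code, different behavior.

def step(p, dt, gravity=9.81, sway_amp=0.0, sway_freq=0.0):
    p["vy"] -= gravity * dt
    p["x"] += p["vx"] * dt + sway_amp * math.sin(sway_freq * p["age"]) * dt
    p["y"] += p["vy"] * dt
    p["age"] += dt

drop    = {"x": 0.0, "y": 10.0, "vx": 0.0, "vy": 0.0, "age": 0.0}
feather = {"x": 0.0, "y": 10.0, "vx": 0.0, "vy": 0.0, "age": 0.0}
for _ in range(100):                     # simulate one second
    step(drop, 0.01)                     # falls straight down, fast
    step(feather, 0.01, gravity=0.5, sway_amp=2.0, sway_freq=6.0)
```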
It's up to a (VFX) artist to define such things. Making particles is not only about drawing some (animated) sprite textures; it's also about playing with physics, timing, speed, blending, and so on. A lot of parameters that can completely differ per effect… and GPUs don't like that. GPUs shine when they can batch and do the same stuff en masse. You earn more money letting machinery print $1 bills non-stop for 10 minutes than having it print $10 bills but requiring a component change halfway. You could write a special shader for each possible particle, but that leads to tons of different shaders, long loading times, more switching, and more sweat to code & maintain them. We'd like to make "universal" shaders that are capable of everything… Or no, actually that isn't the preferred way either, as that kind of code gets a lot of overhead, doing stuff you don't actually need. If only one out of 100 particle effects requires a tornado-spin motion, it would be a waste to implement code + parameters for it in every shader by default. It's all about finding a good balance somewhere.
One of the next improvements is lighting the particles somehow again. Right now they are too bright or too dark compared to the environment, as they don't catch light at all.
So, the mission briefing:
- Upgrade code
- Make a more flexible system that can deal with any kind of particle, BUT without adding additional overhead
- Make an editor the artist can actually use
An editor the artist can use… hehe. As a one-man army doing most of the T22 stuff, including importing many of the assets the artists made, the editors are usually only used by myself. Which isn't exactly great for robustness and user-friendliness; I know how they (won't) work, so the priority on making good tools is low. But of course, in the longer term the artist really has to be able to use such tools himself, to get a good understanding of the capabilities and to get direct feedback from the tool, instead of a mail from mister here with a screenshot full of red arrows pointing at bugs.
I began with replacing the Transform Feedback shaders with OpenCL compute kernels. These kernels spawn & evaluate all the particles (and will eventually sort them by depth later on). So basically they initialize (recycling dead) particles, giving them all kinds of randomized attributes such as a start position, color or velocity. Then each particle moves, grows, colorizes, collides and eventually dies, all calculated by that same kernel. I chose compute kernels here because they are more flexible and more readable. You can grab a particle struct from an array, modify it, and store it back in the same buffer. Even cooler, you can fetch other particles in the same way, and let particles follow, attract or rebound off each other. Not often needed, but if you want magnets, smoke trails, or planetary systems…
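What that compute pass does can be mocked up on the CPU like this. A Python sketch, not the engine's code: field names and spawn ranges are invented, and the real thing runs one GPU thread per particle instead of a loop.

```python
import random

# Particle pool update sketch: one flat array of particle structs,
# updated and recycled in place. Dead slots are respawned at the emitter
# with randomized attributes; live ones are integrated.

def update_particles(particles, dt, emitter_pos, rng):
    for p in particles:
        p["life"] -= dt
        if p["life"] <= 0.0:                  # recycle a dead slot
            p["x"], p["y"] = emitter_pos
            p["vx"] = rng.uniform(-1.0, 1.0)  # randomized start velocity
            p["vy"] = rng.uniform(2.0, 4.0)
            p["life"] = rng.uniform(1.0, 3.0)
        else:                                  # integrate a live particle
            p["vy"] -= 9.81 * dt
            p["x"] += p["vx"] * dt
            p["y"] += p["vy"] * dt

rng = random.Random(7)
pool = [{"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0, "life": 0.0}
        for _ in range(64)]
update_particles(pool, 0.016, (5.0, 1.0), rng)  # first pass spawns everything
```

Because the pool has a fixed size and slots are recycled rather than allocated, memory use stays constant even for effects that come and go all the time.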
To prevent overhead, the compute kernel is assembled from chunks. In the editor, the artist chooses which options he wants to use. Does the effect use gravity? If so, a chunk of code that pulls the particle down over time gets added. Most of the options are pretty common though. For the really advanced effects, you can add your own custom chunks. Yes, it requires the artist to know a bit of OpenCL, but since he can always look at the generated code, there are plenty of examples. It isn't the most user-friendly way ever, but at least we got rid of the restrictions, and the code base remained pretty small and powerful. In practice most effects can do without custom code, and for the exceptions I'm happy to give a helping hand.
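The chunk mechanism itself is mostly string assembly. A hedged Python sketch (chunk texts, option names and the kernel skeleton are invented OpenCL-ish fragments, not ParticleEd's actual output) of how selected chunks plus a custom snippet could be stitched into one kernel source:

```python
# Chunk-based kernel assembly sketch: only the chunks the artist picked
# end up in the generated source, so unused features cost nothing.

CHUNKS = {
    "gravity": "    p.vel.y -= GRAVITY * dt;\n",
    "grow":    "    p.size += GROW_RATE * dt;\n",
    "fade":    "    p.color.w = p.life / p.maxLife;\n",
}

def build_kernel(options, custom_code=""):
    body = "".join(CHUNKS[o] for o in options if o in CHUNKS)
    return ("__kernel void updateParticles(__global Particle* buf, float dt) {\n"
            "    Particle p = buf[get_global_id(0)];\n"
            + body
            + custom_code
            + "    buf[get_global_id(0)] = p;\n"
            "}\n")

src = build_kernel(["gravity", "fade"],
                   custom_code="    /* tornado spin chunk goes here */\n")
print(src)
```

Rebuilding and recompiling the kernel whenever an option changes is cheap compared to maintaining one mega-shader full of branches for features most effects never use.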
ParticleEd. The (OpenCL) code gets automatically changed when picking other options. If that doesn't suit your needs, feel free to add code in the empty boxes below.
So, now you know again what we're (supposed to be) doing: animating bad guys and smoking particles. Once those tasks are done, expect a new demo soon!