Off-topic, but nice to show anyway: some first concepts for one of the many corridors...
Not too long after sketching a bunch of ideas, the first maps rolled out. And by a map I mean an empty room/corridor. Like buying a new house, you still have to argue about where to place the sofa, and whether to use pink paint or clown wallpaper. But that's a concern for later. First, a new map has to pass the Engine22 immigration services.
A finished model does not yet make a suitable game-map. This is because a "freestyle" 3D package like Max, ZBrush, Lightwave or Blender does not have to follow any rules. Where games have to deal with physics/collisions, closed worlds, size boundaries, portals, spatial sorting and all kinds of other optimizations, a 3D package does not have to worry about performance, memory usage, or how the model will be used. We happen to be talking about making environments, but you could just as well model flying spaghetti.
That's why we have border watch: an importing tool that tests whether the supplied model meets the requirements, scraps shit, and eventually adds additional data. This is common for pretty much every engine or game-tool, though the bigger professional engines like UDK or Source often come with their own map-editors that force you into a certain way of modelling. Sorry if I bother you with ancient terms, but let's take Quark as an example. A long time ago, when the Halflife1-Saurus still wandered our planet, I used Quark to make some custom maps, just for fun. It can be compared a bit to Hammer, or the Unreal Editor.
Quark. 3 closed rooms, hollow primitive shapes -> Brushes.
In Quark, you couldn't just throw triangles wherever you pleased. No, you had to work with "brushes", sort of big hollow Lego blocks. Although I don't know the in-depth details, using those blocks for rooms made sense, as they ensured your world would:
- have a floor to walk on
- be closed (walls, floor, ceiling (or skybox))
- have a definition of "rooms" / "sectors", which helps split the world into logical sections, used for lightMaps, AI calculations, physics, culling, collision trees, and whatever else is needed.
The downside however was that those blocks were a bit rigid. It was hard, if not impossible sometimes, to make complex (organic) shapes. If you needed small or complex details, you had to insert "props". Boxes, furniture and barrels, but also doors, lamps, pipes and railings are examples of props. That doesn't sound very handy, but keeping the maps as simple as possible makes sense, especially for older hardware. In many (older) games, props follow somewhat different rules than the static world that carries them. Props don't reserve space in a lightMap, but use (simplified) lighting. Props use no or simplified collision hulls (resulting in slightly less accurate but faster collision tests). Props can fade out and be hidden after X meters, or toggle to a lower-detailed version to win some polygons and speed. Props can be moved around, while static parts of the world such as streets, buildings or walls can't. The player can be inside a room, but not inside a prop (unless it's a vehicle or something). Drawing a strict line between the static world and props is done in pretty much any game, although modern games manage to make things more flexible: no lightmaps for the static world, destructible worlds, accurate collision detection with props so you can climb on and enter them, et cetera.
The maps (= static environment), using relatively simple cube-like meshes like those Quark brushes, have a low polycount. Props on the other hand have a very high polycount in comparison. For example, the monster in the Radar Station demo uses almost more polygons than all the radar station walls/floors/ceilings/pillars together. A computer model takes about 1,200 triangles in T22; a simple room with a window and a door only needs 400 triangles or so. Having a low polycount for the environments is useful for various reasons. First of all, the fewer polygons, the faster things render. You can fade out a 1.2k-triangle computer object after 30 meters or so. But you can't fade out the Empire State Building in a GTA-like game, not even after a few thousand meters. Using a low-poly model for such a (background) building is a solution.
Another reason to watch your polys is collision testing. When you fire a bullet, the engine needs to check where it intersects a wall, or a head, or... The most stupid thing you can do is loop through ALL triangles in the world, then check for each whether the bullet intersects it. You don't have to be a programmer to understand that testing this for thousands, maybe millions, of triangles is not such a good idea. For that reason, games often split up their worlds into invisible cubes (or another type of spatial grouping). With octrees for example: imagine 1 huge cube around the entire world. Then you can split that cube into 8 sub-cubes. If your bullet flies somewhere in the upper-left-front cube, you can already skip thousands of triangles that are not (partially) inside this cube. Each cube can be divided again into 8 sub-sub-cubes, and so on. How many times you subdivide is up to you. You could for example keep dividing until either the cube size is less than 1 cubic meter, or until only 2 or fewer polygons intersect it. The idea is to minimize the number of triangle checks. Ray-versus-triangle checks are expensive, while sorting out in which sub-sub-(...)cube your bullet is, is cheap. So with the help of an octree, BSP, quadtree, or whatever, you can dive to the deepest level for a certain position in your world, then test collisions with the triangles that are inside or intersect that octree node.
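To make that concrete, here is a minimal Python sketch of the subdivision rule described above (stop below roughly 1 meter cubes, or when a node holds at most 2 items). All names and numbers are illustrative; a real engine stores triangles and tests node boxes against triangle bounds, not single points.

```python
MAX_ITEMS = 2      # stop splitting when a node holds this few items
MIN_SIZE  = 1.0    # stop splitting below 1 meter half-size cubes

class OctreeNode:
    def __init__(self, center, half_size, items):
        self.center, self.half_size = center, half_size
        self.children = []   # 8 sub-cubes, or empty for a leaf
        self.items = items   # (position, payload) pairs inside this cube
        if len(items) > MAX_ITEMS and half_size > MIN_SIZE:
            self.subdivide()

    def subdivide(self):
        """Split this cube into 8 sub-cubes and hand the items down."""
        h = self.half_size / 2
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    c = (self.center[0] + dx, self.center[1] + dy, self.center[2] + dz)
                    # items on a boundary may land in several children; that's
                    # fine (conservative) for collision purposes
                    inside = [(p, t) for (p, t) in self.items
                              if all(abs(p[i] - c[i]) <= h for i in range(3))]
                    self.children.append(OctreeNode(c, h, inside))
        self.items = []      # items now live in the children

    def query(self, point):
        """Dive to the deepest node containing 'point' and return its items."""
        if not self.children:
            return self.items
        # pick the child whose octant matches the point (matches append order)
        idx = (point[0] > self.center[0]) * 4 + \
              (point[1] > self.center[1]) * 2 + \
              (point[2] > self.center[2])
        return self.children[idx].query(point)
```

A bullet position then only has to be tested against the handful of items returned by `query`, instead of everything in the world.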
Now, worlds that use a relatively low number of big polygons will have simple, thus easy-to-traverse, octrees as well. A lot of (small) triangles on the other hand will still require many checks, and/or an octree with many subdivisions (costing some performance, but also memory).
One more very good reason to keep the worlds relatively simple and to separate props are lightMaps. Now, Tower22 doesn't use lightMaps (though they might return), but many games did, and still do. When using lightMaps, each polygon needs its own spot reserved somewhere in an image, so you can store its incoming light values on those pixels. Images are not infinite though. A 512x512 image for example has "only" a quarter million pixels. The amount of space you need depends on the polygon size (large walls need more pixels than tiny stuff), but also on the polycount. If each polygon needs at least 1 pixel, you can't store light data for more than a quarter million polygons in a 512x512 image. For simple walls that are made of just a few triangles, this is not a problem. But a stupid sphere-shaped doorknob, no matter how small, may already use 40 triangles. So you already need 40 pixels, or at least 6 if you pack them together based on the XYZ axis direction they're facing. A better idea would be to kick out that doorknob and use another (simplified) lighting method on it. Who will notice the difference anyway?
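The arithmetic above is quick to sketch: the pixel budget, plus the "pack by facing axis" trick that shrinks the doorknob from 40 pixels to at most 6. Everything here is illustrative, not how any particular lightmapper actually packs.

```python
# Back-of-the-envelope lightmap budgeting, worst case of 1 pixel per polygon.

def capacity(width, height):
    """How many 1-pixel polygons fit in a lightmap at most."""
    return width * height

def dominant_axis(normal):
    """Which of the 6 axis directions (+X, -X, +Y, ...) a normal faces most."""
    axis = max(range(3), key=lambda i: abs(normal[i]))
    sign = "+" if normal[axis] >= 0 else "-"
    return sign + "XYZ"[axis]

def packed_pixels(normals):
    """One shared pixel per distinct facing direction instead of one per triangle."""
    return len({dominant_axis(n) for n in normals})
```

So a 512x512 map caps out at 262,144 one-pixel polygons, while a round doorknob whose 40 triangle normals point every which way collapses to just 6 shared pixels when grouped by facing direction.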
Importing maps in T22
Back on topic. Although we do have a Map Editor, it's not suitable for actually constructing the mesh. We use the Map Editor to insert props, paint walls, hang up the lamps, attach sounds, and write scripts. And eventually do some small cosmetic surgery such as welding vertices, shifting the UV coordinates, or removing a polygon. The modeling itself still happens in another 3D program. Why reinvent the wheel?
You can't expect me to make an equal or even better modeling tool within a few months while the 3D Max or Blender boys have been working on theirs for years and years. No, way too little time. Instead the artist models the worlds (and props) in his/her favourite program, then exports them to OBJ files, an old, simple industry standard. When importing these files for the first time (thus when adding a whole new map to the game), the OBJ first has to pass border watch though. And that's me & my loyal sidekick Birdman, uhrm, Lightwave.
Like explained before, a 3D modeling program, or OBJ files, don't have to follow any rules. An OBJ file is not much more than a listing of coordinates (vertex data) and the relations between them (polygons made of X vertices). But we need to know a bit more to make it suitable for a Tower22 map. A map is more than a bunch of visual geometry. For example, we also need to define collision shapes, and possibly special trigger zones (water, lava, ladders, teleporters). That's why I supply the model with some more layers in Lightwave. One nifty feature in Lightwave is working in layers, each containing its own data. That makes it easier to separate things while importing. Here's an idea of what a map is made of in Tower22:
1- LOD's (the visual geometry in several variants, from full to low detail)
2- Collision Geometry
3- Sound Occlusion Geometry
4- Reverb Zones
5- Trigger Volumes
6- Portals (to see other neighbour rooms defined in another map)
7- Cloth Surfaces
8- Paths / AI Nodes
9- ...Probably more to come...
1- LOD's
In the LOD layers we basically have the model as you see it in the game. But talking about levels of detail (LOD): ever noticed buildings suddenly becoming more detailed when approaching them in a game like GTA? That's because they use multiple versions of each map area. When looking from a distance, a simple textured cube for a flat could be sufficient. You don't see the small details such as normalMaps, antennas, ornaments, signs or other architectural quirks anyway. Same principle in Tower22. Each map has a HIGH, MEDIUM, LOW and ULTRALOW (optional) mesh. Depending on the distance, but also on whether you can see it or not (T22 = mainly indoor, thus lots of occlusion), a version is picked for rendering.
Not only does the geometry use fewer triangles; the surfaces can also use simplified materials where all the special shader tricks such as normalMapping, parallax or entropy are disabled. This also helps loading the sectors smoothly in the background: first load the simple mesh, then the medium, et cetera. You can actually see this happening in GTA San Andreas when you drive faster than the world can load, eventually ending up in a weird void with flying cars and pieces of pavement here and there.
Notice the PC version being more detailed (dig that, XBox boys!)? That's most probably because the lower LOD variants fade in earlier on the XBox to gain some speed.
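The distance/visibility based pick described above could look something like this sketch. The thresholds are made-up numbers for illustration, not T22's actual values.

```python
# Hypothetical LOD thresholds: (max distance in meters, mesh variant).
LOD_THRESHOLDS = [
    (10.0, "HIGH"),
    (25.0, "MEDIUM"),
    (60.0, "LOW"),
]

def pick_lod(distance, visible=True, has_ultralow=False):
    """Pick a mesh variant by distance; occluded sectors render nothing."""
    if not visible:
        return None                         # indoor game = lots of occlusion
    for max_dist, name in LOD_THRESHOLDS:
        if distance <= max_dist:
            return name
    # beyond all thresholds: fall back to ULTRALOW if the map provides one
    return "ULTRALOW" if has_ultralow else "LOW"
```

A console port gaining speed by fading to lower LODs earlier is then just a matter of shrinking these threshold distances.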
2- Collision Geometry
Normally what you see is also what you can touch. But not always in game-land. Sometimes games define invisible walls to ensure the player-idiot doesn't fall off a roof. Or vice versa, they remove the collision for a piece of wall so you can jump through a painting like in Super Mario 64. Very useful for ghost stuff or making secret hallways. If your player has problems with stair-climbing physics (and believe me, climbing stairs is difficult), then it may help to make a crippled-friendly invisible variant of the stairs. But mostly, the collision geometry is exactly the same as the highest LOD in our case.
3- Sound Occlusion Geometry
With all the visual violence, we often forget our ears. But to make things sound realistic in an indoor game, sound needs to follow some physical rules as well... like getting absorbed when travelling through a thick wall. Luckily FMOD allows you to define a 3D world, where you set the occlusion factors for each polygon. So what I do is make a copy of the LOW LOD mesh (no need to let small crap block sound), and define its materials. A "medium concrete wall" for example may occlude 60% of the volume.
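A crude sketch of how such per-material occlusion factors could stack up when a sound crosses several surfaces. The material names and factors (apart from the 60% concrete example from the text) are invented; FMOD's geometry API handles this internally.

```python
# Invented occlusion factors: fraction of the volume each material blocks.
OCCLUSION = {
    "thin_wood":       0.25,
    "medium_concrete": 0.60,   # the 60% example from the text
    "thick_concrete":  0.85,
}

def attenuated_volume(volume, surfaces_crossed):
    """Remaining volume after the sound passes each occluding surface."""
    for material in surfaces_crossed:
        volume *= 1.0 - OCCLUSION[material]
    return volume
```

So a sound at full volume behind one medium concrete wall arrives at 40%, and an extra thin wooden door in the way drops it to 30%.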
4- Reverb Zones
Here we define spherical zones that contain a "Reverb", another cool sound feature. Reverbs are basically sound modifiers. Here is an experiment:
produce a fart in A:the bathroom B:a concert hall C:a cave D:under water
E:(optional) next to your girlfriend in the livingroom.
Maybe you didn't smell the difference, but you should have heard the difference. Due to acoustics, each room sounds different. Reverbs add echoes, and do all kinds of crazy math I have no idea about. But it sounds cool. In Tower22, you can define such an effect for each room. But if needed, you can also do it more locally: if a corner of the room has airduct metal around it, place an "airduct" reverb there. The occlusion volume will not only block sound, but also the effect of reverbs, by the way.
5- Trigger Volumes
Ever since the Atari, game worlds have had special zones that give you bonus points, kill you, or warp you to the next level. Whenever the player (or something else) enters a zone, something happens. A practical example would be a water volume or hazardous zone. If your player intersects a water volume, he has to toggle to swimming (or drowning) mode. In the map, I can define zones (which do not have to be cubes, btw) and name them with a specific identifier plus parameters in some cases: "GRAVITY 0 -9.8 0", "LADDER +Y", "TRIGGER eventX", et cetera. Basically these volumes help you drive the player state machine.
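A minimal sketch of such zones driving the player state machine. The zone tags mimic the identifiers above; everything else (axis-aligned boxes, the state names) is invented for illustration, since T22's zones don't have to be cubes.

```python
class Zone:
    """An axis-aligned trigger box tagged with an identifier string."""
    def __init__(self, mins, maxs, tag):
        self.mins, self.maxs, self.tag = mins, maxs, tag

    def contains(self, p):
        return all(self.mins[i] <= p[i] <= self.maxs[i] for i in range(3))

def player_state(position, zones, default="WALKING"):
    """Derive the movement state from whichever zone the player stands in."""
    for zone in zones:
        if zone.contains(position):
            if zone.tag.startswith("WATER"):
                return "SWIMMING"       # or drowning, if you stay too long
            if zone.tag.startswith("LADDER"):
                return "CLIMBING"
    return default
```

"TRIGGER eventX" style zones would work the same way, except they fire a script event instead of changing the movement state.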
6- Portals
I could tell a whole lot, or just refer you to portal culling. In short, we split the entire Tower22 into sectors, which are typically rooms or pieces of corridor. So 1 Lightwave file contains 1 sector. Usually rooms are connected via doors, holes, or windows. In this layer we define those portals via simple quads. The engine will figure out which neighbour sector is connected via this shape and link them up. If it fails to find anything, you can also do it manually. Or you can play Valve Portal / Prey tricks with it, by defining an entirely different sector behind a portal.
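In essence, rendering then becomes a walk through the portal graph, starting from the sector the camera is in. A tiny sketch of that idea; a real engine clips each portal quad against the view frustum, which is reduced here to a simple per-portal visibility callback.

```python
def visible_sectors(start, portals, portal_visible):
    """Collect all sectors reachable from 'start' through visible portals.

    portals: {sector_name: [(neighbour_sector, portal_id), ...]}
    portal_visible: callback deciding whether a portal passes the view test.
    """
    seen, stack = {start}, [start]
    while stack:
        sector = stack.pop()
        for neighbour, portal_id in portals.get(sector, []):
            if portal_visible(portal_id) and neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen
```

This is also where the Prey-style tricks come from: nothing forces a portal to lead to the geometrically adjacent room, so pointing it at an entirely different sector "just works".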
7- Cloth Surfaces
A special kind of geometry are surfaces that use cloth physics. Think of flags, sheets, curtains or Batman capes. You can hang up a (highly) subdivided shape, and define which vertices are attached, and which are free to move (by gravity / wind / collisions).
8- Paths / AI Nodes
A set of 2D lines to set out paths for cameras, animated sequences, or nodes for AI routing. This way, you could for example define all possible routes a car could drive through your city, then pick a rail and let the car (globally) follow the nodes. Or use it for a Tour of Duty patrol.
Oops, forgot a triangle. Now what?
That's quite a lot, ey mate? Well, most maps initially only define the LOD's, physics, sounds and portals, which are mostly extracted parts or copies of the HIGH mesh. Extra stuff like triggers or cloth can be imported later on. Once the Lightwave file has the required layers filled, it can be imported into the game via our own map-editor. This editor will then save it to our own map file format, which contains additional data such as ambient info, scripts, props, and other T22-specific stuff. From then on, we work with the T22 map file...
However, what to do if the artist wants to make a last-minute change? Forgot a polygon, move a vertex, resize a bit... Yeah, we programmers don't expect such things to happen, but they do. The big downside of using importers/exporters, or any other kind of extra steps in between, is the extra work the artist has to do to get his model working. Boring work, and a bigger chance the artist forgets a certain step. If your importer does not catch those flaws, it can result in weird bugs, broken maps, frustration, and flying keyboards.
To reduce the chance of that, the Map Editor itself has a few tools to make simple adjustments. UV-maps can be remapped, textures can be changed, vertices can be welded, polygons can be removed. And operations that do require re-importing the model can be done locally. With that I mean you don't have to throw away and rebuild the entire map: a specific component of the map, such as the UV coordinates for a few polygons, or the collision mesh, can be reloaded on its own, while preserving the rest of the map.
It's still not as user-friendly as Hammer, Quark, the UDK Editor or a Crysis Sandbox, but hey, you've got to make some compromises with a $0, 1-man programming team. And that's why I import all the maps instead of the artists, protecting them from frustrations. Plus, the urge to fix something is bigger when you are confronted with the bugs yourself, rather than getting complaint-mail ;) Next time: Making Props.
And there you have your map imported... burp. I'm always disappointed after working X hours, then seeing an ugly (faulty) mesh like this. But hey, don't let first impressions get you down! Newborn babies are ugly too ;)