There are a lot of shit arguments on the internet, all right. For some reason, boys want to dominate digital discussions by dropping difficult tech terms, like a dog marks territory by pissing against poles. And the funny thing is, a lot of arguments are based on "something they heard", and then misinterpreted. I remember the "Next-Gen" vibe five years ago. New technologies get treated like magical ingredients only possible on platform X, but the truth is that any modern platform can do all the tricks, as long as the programmers are smart enough and the platform is fast enough to handle it in realtime (meaning at least 25 times per second). A Wii can do Parallax Occlusion Mapping or realtime G.I. too, but at such a large cost that it's unlikely ever to be implemented in a game. If you really want to compare game consoles on visual capabilities, look at the video cards, the available memory, and the shader instruction sets.
Either way, good graphics start with five main components:
A: Proper lighting
B: Proper texturing
C: 3D models / worlds
D: Technicians & hardware making it possible
E: Creative people making good use of A, B, C and D
How you do it doesn't matter, until you start running into the platform limitations (which usually happens pretty soon). These posts focus on the crucial component "Texturing", and the technology behind it. Oh, for starters: with a texture we mean the images being pasted on 3D models. High-end engines can still look like shit when using bad textures; low-end engines can still look good when using good textures. Usually when we develop a new Tower22 environment, the first versions look pretty bad. The same happens when an amateur makes a map in UDK or whatever powerful engine. Techniques such as shadows and reflections mask the ugliness a bit, but in the end it's still like smearing expensive make-up on a pig. On the other hand, games such as Resident Evil 4 (GameCube / PS2), God of War (PS2) or Mario Galaxy (Wii) still look good, while their engines weren't exactly cutting-edge technical miracles. Not even back then, compared with PC engines such as Source (Half-Life 2), CryEngine (Far Cry) or id Tech (Doom 3 / Quake 4).
Ok, but how does it work? You've probably heard about bump maps and such, but maybe you have no idea what they really do. Let's just start with the very basics then. Remember Quake 1 (1996)? That is one of the first (if not the first commercial) true 3D games, where the worlds were made of 3D polygon models, using textures to decorate the walls and objects... Or maybe it wasn't the first 3D game, actually. The Super Nintendo already had a couple of games using the Super FX chip, like Star Fox and Stunt Race FX. These games rendered flat (animated) pictures called "sprites", and simple 3D geometry shapes (triangles, cubes, a floor plane) with a certain color. The space ships for example were a bunch of gray and blue triangles, with an orange triangle as a booster. Not much detail of course, but some of those triangles even carried an image to give them some more detail. See the cockpit above.
And Doom, Hexen, Wolfenstein and Duke Nukem were also sort of 3D ("2.5D"), of course. Although the technology used back then is way different from what Quake 1 used, which is still the footprint for today's games. The way Wolfenstein renders actually looks more like how a raycaster works: for each vertical column of screen pixels (and screens didn't have many pixels back then, fortunately), a ray would fly away from the camera and see where it intersected a wall, floor or ceiling. Because of the limited CPU power, the collision detection had to be fast of course, and that explains the very simple level design. But what Wolfenstein also already did was texture its walls. Depending on where the screen ray intersected, the renderer would pick a pixel from the image applied to that wall.
Texturing in the 21st century
Texturing basically means "wrapping" a 2D image over a (3D) model. Typical game data therefore contains a list of 3D models and image files, paired together. Aside from triangles, the 3D models also tell how to map the image onto them, via so-called "texture" or "UV" coordinates. As for the textures, those are just images you could draw in MS Paint or Photoshop, really. Nothing special, although games often use(d) compressed formats to save memory. The SNES didn't have enough memory & horsepower to texture each and every 3D shape, so most objects in those 3D games used just one or a few colors.
You can do the math: an average JPEG photo from your camera or phone already takes a few megabytes. In other words, a single photo is larger than the total storage capacity of a SNES cartridge! But if you make a tiny image with only a few colors, you can save quite a lot. Save a 32x32 pixel image as a 16-color bitmap, and it will only take 630 bytes = 0.62 KB = 0.0006 MB. Now that's more like it. The only problem was that the SNES also had very little RAM, 128 kilobytes of work memory (plus 64 KB of video RAM). So it could only keep a few things active in its working memory at a time.
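That 630-byte figure checks out if you add up the standard uncompressed BMP layout: a 54-byte header, a palette of 16 colors at 4 bytes each, and 4 bits per pixel with each row padded to a multiple of 4 bytes. A small helper to verify the arithmetic:

```c
/* Size in bytes of an uncompressed 16-color (4 bits per pixel) BMP file. */
unsigned bmp16_size(unsigned width, unsigned height)
{
    unsigned header  = 54;      /* BITMAPFILEHEADER + BITMAPINFOHEADER */
    unsigned palette = 16 * 4;  /* 16 palette entries, 4 bytes each */
    /* Each row stores width * 4 bits, rounded up to a multiple of 4 bytes. */
    unsigned row = ((width * 4 + 31) / 32) * 4;
    return header + palette + row * height;
}
```

For a 32x32 image that gives 54 + 64 + 16*32 = 630 bytes, matching the number above.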
Quake 1 fully utilized texturing though. Every wall, floor, monster, gun and other object used a texture. Of course, PCs back then still had very limited memory and data bandwidth, so the textures were small and used 256-color (or smaller) palettes. But for that time, it looked awesome. Either way, since Quake 1 also became "mod-able" for hobbyists at home, we ordinary gamers came in touch with editors, 3D models, sprites, UV coordinates, textures, and whatnot. So, to summarize:
- You make a 3D model (with 3D software such as Max, Maya or Lightwave, or with game-map builders; those are often made by hobbyists, or provided with the game (engine) itself).
- A game model is a list of triangles. Each triangle has 3 corners, called "vertices". A vertex is a coordinate in 3D space, having an X, Y and Z value. It can carry more (custom) data, but we'll talk about that later.
- So, the 3D model you store is basically a file that lists a bunch of coordinates.
- An image is made for the object. Since 3D objects can be viewed from all directions, we need to unfold (unwrap) the 3D model all over the canvas.
- Each vertex (corner) also gets a texture coordinate (a.k.a. UV coordinate). These coordinates tell where the triangles are mapped on the 2D image.
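The data described above can be sketched as a couple of structs. This is a minimal illustration; the type and field names are made up for this post, not taken from any particular engine or file format:

```c
/* One corner of a triangle: a position in 3D space plus a UV coordinate. */
typedef struct {
    float x, y, z; /* position in 3D space */
    float u, v;    /* texture coordinate, usually 0..1 across the image */
} Vertex;

/* A triangle is just three vertices; a model is a list of triangles. */
typedef struct {
    Vertex corners[3];
} Triangle;

/* Convert a UV coordinate to a pixel index in a w x h texture,
   the lookup a renderer does when it shades a point on a triangle. */
int texel_index(float u, float v, int w, int h)
{
    int px = (int)(u * (float)(w - 1)); /* horizontal pixel, 0..w-1 */
    int py = (int)(v * (float)(h - 1)); /* vertical pixel, 0..h-1 */
    return py * w + px;                 /* index into a flat pixel array */
}
```

So when the renderer fills a triangle, it interpolates the UV values between the three corners and fetches a texel for every screen pixel covered.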
That's how Quake 1 worked, and that's how Half-Life 3 will still work (although... HL3 might not appear this century, and who knows what technology we'll have by then). So basically, all 3D objects get their own texture. We also make a bunch of textures we can paste on the walls, floors and ceilings, like putting wallpaper in your own house. Obviously, the texture quality goes hand in hand with the artist's skills, and with the dimensions of those textures. A huge image can hold a lot more detail than a tiny one. These days, textures are typically somewhere between 512x512 and 2048x2048 pixels, using 16 million colors. The file and RAM usage varies somewhere between 0.7 and 4 MB for such textures, if not compressed.
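Those memory figures follow directly from width x height x bytes-per-pixel. A quick sketch of the arithmetic (24-bit color is 3 bytes per pixel, 32-bit with an alpha channel is 4):

```c
/* Uncompressed texture size in bytes. */
unsigned long tex_bytes(unsigned width, unsigned height, unsigned bytes_per_pixel)
{
    return (unsigned long)width * height * bytes_per_pixel;
}
```

For example, a 512x512 texture at 3 bytes per pixel is 786,432 bytes (about 0.75 MB), and a 1024x1024 texture at 4 bytes per pixel is exactly 4 MB, which is where the 0.7 to 4 MB range above comes from.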
The results also depend on how the UV coordinates were made. You can map a texture onto a small surface, so the surface receives relatively many pixels (but the texture pattern also repeats all the time). Or you can stretch the texture over a wide surface, making it appear blurry and less detailed. This typically happened on large outdoor terrains in older games. Even if you have a 1000x1000 pixel texture, a square meter of terrain will still receive only 2x2 pixels if the terrain is about 500 x 500 meters big. Ugly, blurry results. Games like the first Battlefield often fixed that by applying a second "detail texture" over the surface, which repeated a lot more often. Such a detail texture could contain the patterns of grass or sand, for example.
Next time we'll crank up the lighting in (modern) engines, and look at the kinds of textures used to achieve cool effects there. For now, just remember that textures are used to give color to those 3D models. Nothing new really, but an essential part of the process. As for the next X years in rendering evolution: textures are here to stay, so deal with them. "Keep your friends close, and your enemies closer" -- Sun Tzu