A long time ago I used to write much shorter posts, reporting whatever I did on Tower22 that week. Usually about some new shader, which is fairly quick to implement and generates nice screenshots to share. Especially back when "completely awesome" graphics with some fancy shader technique weren't too common yet. Nowadays pretty much every game looks Next-Gen (whatever that exactly means), making it harder and harder to impress on that front.
So, let's do that next time: write a shorter report about whatever I did on this game. Although it will be less visually appealing. For one simple reason: I didn't do any graphics coding lately. As explained before, I shifted the focus to making actual gameplay instead. Not that T22 will look worse from now on. Eventually the rooms, assets, and also the engine techniques will be upgraded. By artists. But since I don't really have artists helping at the moment, nor am I actively searching for them, I'll have to do the job with placeholders. Programmer art. Dummies.
Got racks full of dummies. On a dummy floor between dummy walls. Actually those boxes are garbage-bags. You're the Caretaker of T22, remember?
Physically Based Rendering
There used to be a time when, even with my limited skills, I was still able to produce pretty decent stuff, because the quality bar wasn't as high in other (commercial) games. Much simpler models, low-resolution textures, and "bumpMapping" was still a state-of-the-art thing. Nowadays you can't get away without ultra-dramatic scenery, using PBR: Physically Based Rendering. This almost turned "drawing" into a science. This new (well, not so new anymore) catchword "PBR" is not some specific technique to achieve photorealism, but more like a label on your rendering pipeline, claiming your shaders will use real or close-approximation physics formulas for lighting and such. That's nothing new really; stuff like Fresnel has been there since the start of shaders. And also, techniques like IBL, reflections or GI are still largely a bunch of hacks. It's just that hardware allows us to rely on the higher-end shader math these days.
But what did change are the textures. Fewer cheats and more real-life-based shaders also require realistic input parameters. In games, much of that input comes from textures. Whereas an old Quake2 3D asset was just a wireframe model & a "texture" (technically the Diffuse texture), modern assets require a lot of layers, describing (per pixel) properties like:
- Metalness
- Color
- Roughness (or Smoothness, if you wish)
- Normal
And there are more properties like translucency, emissive, height, or cavity/ambient occlusion, but the ones above are the most essential for PBR. Let's give a brief explanation. Yes, there are plenty of tutorials out there, but I found them a bit long or hard to understand at first glance. So let's explain it the dummy way first; after that you can dig into the more detailed tutorials:
Metalness
Most (game) materials are either a "Conductor" (metal) or an "Insulator"/"Dielectric" (non-metal). This can be considered a boolean parameter: either you are a metal, or you aren't. Be aware though that rusty or painted metal may not be 100% pure metal, so in those cases you may want to describe this property per pixel. Anyhow, the big difference between the two is mainly how (much) they reflect. Metals reflect almost everything, making them appear shiny, or "very specular", while Dielectrics only reflect a relatively small portion at glancing angles (Fresnel). Think about asphalt; you won't see it reflecting unless looking at a sharp angle, with lots of light (sunny day) coming in from the opposite side.
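To get a feel for that glancing-angle behaviour, here is a minimal C sketch of Schlick's Fresnel approximation (the usual stand-in for the full Fresnel equations in games); the 0.04 value is just the typical ballpark F0 for dielectrics, not something dictated here.

    #include <math.h>
    #include <stdio.h>

    /* Schlick's approximation of Fresnel reflectance.
       f0       = reflectance when looking straight at the surface (~0.04 for most dielectrics)
       cosTheta = cosine of the angle between view direction and surface normal */
    float fresnelSchlick(float f0, float cosTheta)
    {
        return f0 + (1.0f - f0) * powf(1.0f - cosTheta, 5.0f);
    }

    int main(void)
    {
        printf("head-on      : %.2f\n", fresnelSchlick(0.04f, 1.0f)); /* 0.04 */
        printf("45 degrees   : %.2f\n", fresnelSchlick(0.04f, 0.7f)); /* still small */
        printf("grazing angle: %.2f\n", fresnelSchlick(0.04f, 0.0f)); /* 1.00 */
        return 0;
    }

That asphalt example is exactly this curve: barely any reflection head-on, almost mirror-like when the light skims in from the opposite side.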
Older shaders would often let you manually slide the bars, telling how much "specular" there should be. So the formula became something like this:

result = diffuseLambert + (specularPhongOrBlinn x SpecIntensity)
This slaps the law of "energy conservation" in the face though, and would basically lead to overbright surfaces. You can't be very diffuse and very specular at the same time; think about it, and hence the term "Physically Based". Surfaces are specular if they are very polished, like a mirror... or a brushed metal sheet. Otherwise they are diffuse, meaning the microstructure of the material is rough, scattering light in all directions. So to put it simply, the formula should have been something like this:

result = (diffuseLambert x (1 - SpecIntensity)) + (specularPhongOrBlinn x SpecIntensity)
So the sum of the diffuse and specular components would be 100%. Nothing more, nothing less. Obviously a surface can't generate more light (unless it's actually emissive). The metal component we just mentioned can work like a switch: metals are up to 90% reflective, dielectrics usually only somewhere around 3 or 4%. But instead of making two different shaders with cheap tricks, all materials follow the same math - thus one uniform supershader - using a "metalness" parameter. Either a single parameter for the whole material, or on a per-pixel level, in case there is variation (rust, dirt, paint, coating, objects made of multiple materials, …).
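As a toy illustration of that switch (scalar, single channel, made-up names; a real shader works per colour channel with a proper BRDF), the combine could look roughly like this, reusing the ~90% and ~4% figures from above:

    /* Toy sketch: energy-conserving combine with metalness acting as the switch.
       Names and the 0.9 / 0.04 reflectivity figures are illustrative only. */
    float shadeSurface(float diffuseLambert, float specularBlinn, float metalness)
    {
        /* metal -> ~90% of the energy goes into the specular term,
           dielectric -> only ~4% does; the rest stays diffuse */
        float specShare = metalness * 0.9f + (1.0f - metalness) * 0.04f;
        return diffuseLambert * (1.0f - specShare) + specularBlinn * specShare;
    }

The point is simply that the two terms always sum to one surface's worth of light, never more.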
It must be noted though that there are still exceptions to this uniform shader. Complex materials like human skin, velvet, ruby, liquids or other translucent crap may still be better off with a special-case shader. Or, you can extend this "metalness" parameter to a "surfaceType" one, and switch rendering strategies based on that.
From left to right: 1: non-metal + rough. 2: metal + rough. 3: non-metal + smooth. 4: metal + smooth. Direct light coming from the top-left, btw.
Roughness (the opposite of Smoothness, if you wish)
If you got stuck in older shaders using specularity, like me, this is a confusing one. As mentioned, in the past you would define the amount of reflectivity/specularity, typically encoding it in the diffuseTexture alpha channel. Then another (sometimes hard-coded) parameter would define "specular Power", or "Shininess", or "gloss". Very high power factors would create a narrow, sharp specular highlight. More diffuse materials like a wood floor would typically use a lower power, smearing the specular lobe over a wider area, making it appear less shiny.
This was a somewhat close-but-no-cigar approximation. It could be used very well, producing realistic results, but could also potentially lead to an impossible combination of factors. Although PBR does not exactly dictate how to do things in detail, it's more common now to define only metalness and roughness. Indirectly, metalness stands for "specularStrength", and "roughness" for "specularPower". The roughness factor does not add more or less specular, but is used to mix between sharp and very blurry reflections.
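If you're migrating old materials like I am, it helps to see that the two parameter sets roughly map onto each other. One well-known approximate conversion between roughness and an old Blinn-Phong exponent looks like the sketch below; there are several such mappings floating around, so treat this as an illustration rather than gospel.

    #include <math.h>

    /* Approximate bridge between a PBR roughness value and an old-school
       Blinn-Phong "specular power" / shininess exponent. Low roughness ->
       huge exponent (tight highlight), roughness near 1 -> exponent near 0. */
    float roughnessToSpecPower(float roughness)
    {
        float a = fmaxf(roughness, 0.001f);   /* avoid division by zero */
        return 2.0f / (a * a) - 2.0f;
    }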
Another common aspect in PBR systems is IBL (Image Based Lighting). Again, fancy talk for something that has existed since Half-Life 2 already, really (but "low-end"). You sample light into cubemaps/probes at lots of spots in your world. Then everything near that probe can use the data for both reflections and diffuse/GI. By either blurring (downscaling/mipmapping) the probe, and/or taking multiple samples in scattered directions, you simulate roughness. A very diffuse surface samples the probe in very random directions, while smoother ones focus their rays in a narrower beam around the reflected vector.
But... how the hell do you control more or less reflectivity then?! You don't, at least not the traditional way. Non-metals would typically use a fixed ~3 or 4% F0 input for their Fresnel value, metals a variable one from a texture (see Color below). And the gloss finishes it off. Very blurry reflections tend to disappear; they're still there, but kind of appear as diffuse, making them harder to spot. Note that pretty much all materials actually do reflect to some extent in real life. But anyhow, if you prefer better control, you could still make that Fresnel factor adjustable (which is basically what the Metalness map does really).
Color
Another confusing term is the texture itself. I'm not even sure what to call it now... AlbedoMap, DiffuseMap... Probably BaseColorMap would be the closest thing, as it often is a multi-purpose texture now. Standard diffuse materials like concrete would translate this color to, well, a diffuseColor. As we always did.
Metals on the other hand have little need for a diffuseColor, and require a "Reflectance" (F0 / IOR (Index Of Refraction) / Fresnel) parameter instead. That is most often a grayscale value, indirectly telling the amount of reflectivity. But materials like gold or copper may actually want an RGB value, to give them, well, that gold or copper colour. So, in that case, why not use the same colorMap to encode F0 then? In fact you could store both diffuse and F0 values in the same BaseColorMap, in case it holds both metals and non-metals.
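A sketch of that dual-purpose idea (the 0.04 dielectric F0 is the usual ballpark; struct and function names are mine, not some standard):

    /* Derive a diffuse colour and an F0 reflectance from one BaseColorMap texel
       plus a metalness value - the usual "metalness workflow" interpretation. */
    typedef struct { float r, g, b; } Color3;

    static float mixf(float a, float b, float t) { return a + (b - a) * t; }

    void splitBaseColor(Color3 baseColor, float metalness,
                        Color3 *diffuse, Color3 *f0)
    {
        /* Metals get (almost) no diffuse colour... */
        diffuse->r = mixf(baseColor.r, 0.0f, metalness);
        diffuse->g = mixf(baseColor.g, 0.0f, metalness);
        diffuse->b = mixf(baseColor.b, 0.0f, metalness);

        /* ...and their F0 is tinted by the base colour (gold, copper, ...),
           while dielectrics fall back to the ~4% grey reflectance. */
        f0->r = mixf(0.04f, baseColor.r, metalness);
        f0->g = mixf(0.04f, baseColor.g, metalness);
        f0->b = mixf(0.04f, baseColor.b, metalness);
    }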
Of course that is all possible. But - and this adds some extra difficulty to making textures in general now - you can't just draw some yellow/brown/orange color to make it look like gold. Well, you can, and you'll be close, but it's cursing in the PBR church. *Physically Based*, remember? That means you should draw the exact values, the kind of numbers you would find in tables of physics books. Gold would be {R:1 G:0.765557 B:0.336057}. And now I'm in unfamiliar terrain so I shouldn't say too much and misinform you, but to make life easier, artists work with sRGB colours, have to calibrate their screens, and/or use pre-defined palettes in their drawing software. All part of this PBR Workflow.
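One concrete bit of that workflow: reference values like the gold colour above live in linear space, while whatever an artist paints on a monitor is sRGB. Somewhere the conversion has to happen (often the GPU does it for you when a texture is flagged as sRGB); the standard decode looks like this:

    #include <math.h>

    /* Standard sRGB -> linear conversion for one colour channel in [0,1]. */
    float srgbToLinear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }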
Non-metals should use the right colour intensities as well, by the way. To make a proper HDR (High Dynamic Range) pipeline work, all colors and intensities should be in balance. A white paper shouldn't be as bright as the sun. Or how about paper lying in snow? The snow should reflect more light than the paper, so be careful with your color values.
This is a whole struggle, and may distract the artist from just drawing on good old creative instincts. Then again, using real data and calibrated presets (that your engine should maybe provide) gives more consistent results. That's what PBR is all about really.
Good news about PBR is that this "ColorMap" is more about colors, and less about tiny details now. This ugly dummy texture doesn't turn out too bad with some roughness / metal properties and a cheap normalMap. Imagine what a proper artist could do with that...
NormalMap
Nothing changed here really, but it should be noted that, thanks to increased videocard memory & computing power, the NormalMap has become standard rather than an optional feature for more advanced surfaces. Also, materials often have a secondary "detailNormalMap" nowadays, which contains the smaller bumps, grain and wrinkles, noticeable when looking from a closer distance. And since a bumpier surface on a micro-level tends to go hand in hand with specular roughness, you could choose to encode roughness in the normalMap alpha channel, and metalness or some other parameter in the colorMap alpha channel. So in the end you (still) have only 2 textures for most "normal" assets.
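As a sketch of that 2-texture packing (this particular channel layout is just my example; engines pack things differently):

    /* Unpack one material sample from the packed layout described above:
       ColorMap  = baseColor.rgb + metalness in alpha
       NormalMap = tangent-space normal in rgb + roughness in alpha */
    typedef struct { float r, g, b, a; } Texel;

    typedef struct {
        float baseColor[3];
        float metalness;
        float normal[3];    /* remapped from [0,1] storage to [-1,1] */
        float roughness;
    } Surface;

    Surface unpackSurface(Texel color, Texel normal)
    {
        Surface s;
        s.baseColor[0] = color.r;
        s.baseColor[1] = color.g;
        s.baseColor[2] = color.b;
        s.metalness    = color.a;

        s.normal[0] = normal.r * 2.0f - 1.0f;
        s.normal[1] = normal.g * 2.0f - 1.0f;
        s.normal[2] = normal.b * 2.0f - 1.0f;
        s.roughness  = normal.a;
        return s;
    }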
PBR = Photorealism?
So, with PBR do we finally touch photorealism? Well, some games would definitely start to qualify for that, but not necessarily thanks to PBR. A non-PBR game can look fantastic (the first Crysis, anyone?), and a PBR pipeline can still look like shit... like Tower22 in its current state.
Hey... where did that go wrong?! Well, it didn't go wrong actually. The left is more "realistic", technically, as light scatters in a more natural way, and the surfaces don't reflect as if they were soaked in olive oil. But... hell, it's boring. Of course it must be noted that the right (old) side was more complete. The old engine had lens flares, blur, volumetric light, dust, and sharper shadows. But also, the scene itself contained more details, like the stains on the walls, carpet crap, decals, paintings, et cetera. In other words, PBR is not an auto-magic key to beauty.
PBR is just a way of working really. One that leaves less room for cheats, and for the inconsistency errors that follow from cheating. Maybe more important is the fact that the new engine relies a lot more on IBL (Image Based Lighting), thus sampling cubemaps everywhere. But... if the surroundings are still ugly because of lacking detail, badly used textures, or the lack of a good light setup, then the sampled & reflected lighting will suck as well, of course. Mirrors can't fix ugliness!
So is Tower22 "PBR"? Yes and no. The shaders are "PBR-ready", so to say. But my input materials (mostly programmer "art" dummies or recycled items from the older engine) haven't been made on calibrated screens, their metal colors are just approximated, and they were equipped with "SpecularStrength" parameters rather than roughness. Which is usually not that much of a difference, but still.
Do I want it to be PBR? Not necessarily either. It's up to the artists later on, but I can imagine it over-complicates the content. Don't forget, this is still a hobby project, and eventual future artists may not be the most experienced ones. Also, a horror game like Tower22 doesn't necessarily have to look photorealistic. It should look better than the pics above though, but that is more a matter of giving the scenery more love. Getting the UV-maps right to start with, adding details and decals, dimming the lights and putting them in more interesting spots, maybe using different textures, and then finishing off with improved shading.
A long way to go, as you can see. But as said, I'm focusing on gameplay now (read: physics, scripting, solving puzzles, inventories, ...). Which I planned to write about today actually... but PBR sidetracked me, damn it.