Sunday, October 2, 2016

Physically Based Headache

A long time ago I used to write much shorter posts, reporting whatever I did the week before on Tower22. Usually about some new shader, which is fairly quick to implement and generates nice screenshots to share. Especially back when "completely awesome" graphics with some fancy shader technique weren't too common yet. Nowadays, pretty much every game looks Next-Gen (whatever that exactly means), making it harder and harder to impress here.

So, let's do that again next time, writing a shorter report about whatever I did on this game. Although it will be less visually appealing. For one reason: I simply didn't do any graphics coding lately. As explained before, I shifted the focus to making actual gameplay instead. Not that T22 will look worse from now on. Eventually the rooms, assets, and also engine techniques will be upgraded. By artists. But since I don't really have artists helping at the moment, nor any active search for them, I'll have to do the job with placeholders. Programmer art. Dummies.

Got racks full of dummies. On a dummy floor between dummy walls. Actually those boxes are garbage-bags. You're the Caretaker of T22, remember?

Physically Based Rendering
There used to be a time when, even with my limited skills, I was still able to produce pretty decent stuff, because the quality bar wasn't as high in other (commercial) games. Much simpler models, low-resolution textures, and "bumpMapping" was still a state-of-the-art thing. Nowadays you can't get away without ultra-dramatic scenery, using PBR - Physically Based Rendering. This almost turned "drawing" into science. This new (well, not so new anymore) catchword "PBR" is not some specific technique to achieve photorealism, but more like a label on your rendering pipeline, claiming your shaders use real or close-approximation physics formulas for lighting and such. That's nothing new really; stuff like Fresnel has been there since the start of shaders. And also, techniques like IBL, reflections or GI are still half a bunch of hacks. It's just that hardware allows us to rely on the higher-end shader math these days.

But what did change are the textures. Fewer cheats and more real-life based shaders also require realistic input parameters. In games, much of that input comes from textures. Whereas an old Quake2 3D asset was just a wireframe model plus a "texture" (technically the Diffuse texture), modern assets require a lot of layers, describing (per pixel) properties like:
                - Metalness
                - Color
                - Roughness (or Smoothness, if you wish)
                - Normal

And there are more properties like translucency, emissive, height, or cavity/ambient occlusion, but the ones above are the most essential for PBR. Let's give a brief explanation. Yes, there are plenty of tutorials out there, but I found them a bit long or hard to understand at first glance. So let's explain it the dummy way first, and after that, you can click here for more:

Most (game) materials are either a "Conductor" (metal) or an "Insulator"/"Dielectric" (non-metal). This can be considered a boolean parameter; either you are a metal, or you aren't. Be aware though that rusty or painted metal may not be 100% pure metal. So in those cases, you may want to describe this property per pixel. Anyhow, the big difference between the two is mainly how (much) they reflect. Metals reflect almost everything, making them appear shiny, or "very specular", while Dielectrics only reflect a relatively small portion at glancing angles (Fresnel). Think about asphalt; you won't see it reflecting unless you look at a sharp angle, with lots of light (sunny day) coming in from the opposite side.
Older shaders would often let you manually slide the bars, telling how much "specular" there would be. So the formula became something like this:
                result = diffuseLambert + (specularPhongOrBlinn x SpecIntensity)

This slaps the law of "energy conservation" in the face though, and would basically lead to overbright surfaces. You can't be very diffuse and very specular at the same time. Think about it; hence the term "Physically Based". Surfaces are specular if they are very polished, like a mirror... or a brushed metal sheet. Otherwise they are diffuse, meaning the microstructure of the material is rough, scattering light in all directions. So to put it simply, the formula should have been something like this:
                result = (diffuseLambert x (1-SpecIntensity)) + (specularPhongOrBlinn x SpecIntensity)
So the sum of the diffuse and specular components would be 100%. Nothing more, nothing less. Obviously a surface can't generate more light (unless it's actually emissive). The metal component we just mentioned can work like a switch: metals are up to 90% reflective, dielectrics usually only somewhere around 3 or 4%. But instead of making two different shaders with cheap tricks, all materials follow the same math -thus one uniform supershader- using a "metalness" parameter. Either a single parameter for the whole material, or on a per-pixel level, in case there is variation (rust, dirt, paint, coat, objects made of multiple materials, ...).
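To make the energy-conservation idea concrete, here is a toy sketch of that mix in Python. The Lambert/Blinn terms and the 90% / 4% reflectivity numbers come from the text above; everything else (function names, the power factor) is made up for illustration, not taken from any real engine.

```python
# Toy sketch of the energy-conserving diffuse/specular mix described above.
# Names and the spec_power default are illustrative assumptions.

def shade(n_dot_l, n_dot_h, base_color, metal, spec_power=32.0):
    """Blend diffuse and specular so their weights sum to 1 (energy conservation)."""
    diffuse = max(n_dot_l, 0.0) * base_color        # Lambert term
    specular = max(n_dot_h, 0.0) ** spec_power      # Blinn-Phong lobe

    # The "metalness switch": metals ~90% reflective, dielectrics ~4%
    spec_intensity = 0.9 if metal else 0.04

    # result = diffuse * (1 - S) + specular * S  -> never brighter than the incoming light
    return diffuse * (1.0 - spec_intensity) + specular * spec_intensity
```

With both terms maxed out the result never exceeds 1.0, which is exactly the point of the conserving formula.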

It must be noted though, that there are still exceptions to this uniform shader. Complex materials like human skin, velvet, ruby, liquids or other translucent crap may still be better off using a special-case shader. Or, you can extend this "metalness" parameter to a "surfaceType" one, and switch rendering strategies based on that.

From left-to-right: 1: non-metal + rough. 2: metal + rough. 3: non-metal + smooth. 4: metal + smooth. Direct light coming from left-top btw.

Roughness (the opposite of Smoothness, if you wish)
If you got stuck in older shaders using specularity, like me, this is a confusing one. As mentioned, in the past you would define the amount of reflectivity/specularity, typically encoding it in the diffuseTexture alpha channel. And then another (sometimes hard-coded) parameter would define "specular Power", or "Shininess", or "gloss". Very high power factors would create a narrow, sharp specular highlight. More diffuse materials like a wood floor would typically use a lower power, smearing the specular lobe over a wider area, making it appear less shiny.

This was a somewhat close-but-no-cigar approximation. It could be used very well, giving realistic results, but it could also lead to impossible combinations of factors. Although PBR does not exactly dictate how to do things in detail, it's more common now to define only metalness and roughness. Indirectly, metalness stands for "specularStrength", and roughness for "specularPower". The roughness factor does not add more or less specular, but is used to mix between sharp and very blurry reflections.
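For those porting old specular-power materials, here is one commonly cited approximate mapping between roughness and a Blinn-Phong power. Be warned: every engine uses a slightly different formula, so treat this as a sketch of the idea, not a standard.

```python
# Approximate conversion: PBR roughness -> old-style Blinn-Phong specular power.
# The exact formula varies per engine; this one is only illustrative.

def roughness_to_spec_power(roughness):
    alpha = max(roughness, 1e-4) ** 2       # perceptual roughness -> alpha
    return 2.0 / (alpha * alpha) - 2.0      # rough surface -> low power (wide lobe)
```

A fully rough surface maps to a power near zero (a very wide, dull lobe), while low roughness shoots the power into the hundreds (a tight, sharp highlight).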

Another common aspect in PBR systems is IBL (Image Based Lighting). Again, fancy talk for something that has existed since Half-Life 2 really (but "low-end"). You sample light in cubemaps/probes at lots of spots in your world. Then everything nearby that probe can use that data for both reflections and diffuse/GI. By either blurring (downscaling/mipmapping) the probe, and/or taking multiple samples in scattered directions, you simulate roughness. A very diffuse surface would sample the probe in very random directions, while smoother ones focus their rays in a narrower beam, around the reflected vector.

But... how the hell do you control more or less reflectivity then?! You don't, at least not the traditional way. Non-metals typically use a fixed ~3 or 4% F0 input for their Fresnel value, metals a variable one from a texture (see Color below). And the gloss finishes it off. Very blurry reflections tend to disappear. They're still there, but they kind of appear as diffuse, making them harder to spot. Note that pretty much all materials actually do reflect to some extent in real life. But anyhow, if you prefer better control, you could still make that Fresnel factor adjustable (which is what the Metalness map basically does really).
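The Fresnel effect mentioned above is almost always computed with Schlick's approximation: reflectance sits at the base value F0 when you look straight at the surface, and climbs to 100% at glancing angles. A minimal sketch:

```python
# Schlick's Fresnel approximation, using the ~4% dielectric F0 from the text.

def fresnel_schlick(cos_theta, f0):
    """f0 at a head-on view (cos_theta = 1), rising toward 1.0 at glancing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Dielectrics: fixed F0 of ~0.04. Metals: F0 comes from the (colored) base texture.
```

This is why asphalt only visibly reflects at sharp angles: head-on it returns just that 4%, but near-grazing the term shoots up toward full reflection.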

Color
Another confusing term is the texture itself. I'm not even sure what to call it now... AlbedoMap, DiffuseMap... Probably BaseColorMap would be the closest thing, as it often is a multi-purpose texture now. Standard diffuse materials like concrete would translate this color to, well, a diffuseColor. As we always did.

Metals on the other hand have little need for a diffuseColor, and require a "Reflectance" (F0 / IOR (Index Of Refraction) / Fresnel) parameter instead. This is most often a grayscale value, indirectly telling the amount of reflectivity. But materials like gold or copper may actually want an RGB value, to give them, well, that gold or copper colour. So, in that case, why not use the same colorMap to encode F0 then? In fact, you could store both diffuse and F0 values in the same BaseColorMap, in case it holds both metals and non-metals.

Of course that is all possible. But -and this adds some extra difficulty to making textures in general now- you can't just draw some yellow/brown/orange color to make it look like gold. Well, you can, and you'll be close, but it's cursing in the PBR church. *Physically Based*, remember? That means you should draw the exact values, the kind of numbers you would find in tables in physics books. Gold would be {R:1  G:0.765557  B:0.336057}. And now I'm in unfamiliar terrain so I shouldn't say too much and misinform you, but to make life easier, artists work with sRGB colours, have to calibrate their screens, and/or use pre-defined palettes in their drawing software. All part of this PBR Workflow.
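One concrete pitfall here: values like that gold triplet are *linear* intensities, while drawing programs usually show sRGB values. The conversion between the two is fixed by the sRGB standard, so you can sketch it like this (the gold numbers are the ones quoted above; whether a given tool wants linear or sRGB input is something to check per tool):

```python
# Standard sRGB encoding of a linear [0,1] channel value (per the sRGB spec).

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

gold_linear = (1.0, 0.765557, 0.336057)   # the physically based gold value from the text
gold_srgb = tuple(round(linear_to_srgb(c), 3) for c in gold_linear)
```

Paint the linear numbers directly into an sRGB-displayed canvas and the material comes out too dark, which is exactly the kind of inconsistency the PBR workflow tries to stamp out.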

Non-metals should also use the right colour intensities, by the way. To make a proper HDR (High Dynamic Range) pipeline work, all colors and intensities should be in balance. A white paper shouldn't be as bright as the sun. Or how about putting paper in snow? Snow should reflect more light, so be careful with your color values.

This is a whole struggle, and may distract the artist from just drawing on good old creative instincts. Then again, using real data and calibrated presets (that your engine should provide, maybe) results in more consistent output. That's what PBR is all about really.
The good news about PBR is that this "ColorMap" is more about colors, and less about tiny details now. This ugly dummy texture doesn't turn out too bad with some roughness / metal properties and a cheap normalMap. Imagine what a proper artist could do with that...

Normal
Nothing changed here really, but thanks to increased videocard memory & computing power, the NormalMap has become standard, rather than an optional feature for more advanced surfaces. Materials often have a secondary "detailNormalMap" nowadays, which contains the smaller bumps, nerves and wrinkles, noticeable when looking from a closer distance. Also note that more bumpy surfaces on a micro level may go hand in hand with specular roughness. You could choose to encode roughness in the normalMap alpha channel, and metalness or some other parameter in the colorMap alpha channel. So in the end you (still) have only 2 textures for most "normal" assets.

PBR = Photorealism?
So, with PBR we finally touch photorealism? Well some games would definitely start to qualify for that, but not necessarily thanks to PBR. A non-PBR game can look fantastic (first Crysis anyone?), and a PBR pipeline can still look like shit... like Tower22 in its current state.

Hey... where did that go wrong?! Well, it didn't go wrong actually. The left is more "realistic", technically, as light scatters in a more natural way, and the surfaces don't reflect as if they were soaked in olive oil. But... hell, it's boring. Of course it must be noted that the right (old) side was more complete. The old engine had lens flares, blur, volumetric light, dust, and sharper shadows. But also, the scene itself contained more details, like the stains on the walls, carpet crap, decals, paintings, et cetera. In other words, PBR is not an auto-magic key to beauty.

PBR is just a way of working really. One that leaves less room for cheats, and for the inconsistency errors that may follow from cheating. Maybe more important is the fact that the new engine relies a lot more on IBL (Image Based Lighting), thus sampling cubemaps everywhere. But... if the surroundings are still ugly because of lacking detail, badly used textures, or the lack of a good light setup, then the sampled & reflected lighting will suck too, of course. Mirrors can't fix ugliness!

So is Tower22 "PBR"? Yes and no. The shaders are "PBR-ready", so to say. But my input materials (mostly programmer "art" dummies or recycled items from the older engine) weren't made on calibrated screens, their metal colors are just approximated, and they were equipped with "SpecularStrength" parameters rather than roughness. Which is usually not that much of a difference, but still.

Do I want it to be PBR? Not necessarily. It's up to the artists later on, but I can imagine it over-complicates the content. Don't forget, this is still a hobby project, and eventual future artists may not be the most experienced ones. Also, a horror game like Tower22 doesn't necessarily have to look photorealistic. It should look better than the pics above though, but that is more a matter of giving the scenery more love. Getting the UV maps right to start with, adding details and decals, dimming the lights and putting them in more interesting spots, using different textures maybe, and then finishing off with improved shading.

A long way to go as you can see. But as said, I'm focusing on gameplay now (read: physics, scripting, solving puzzles, inventories, ...). Which I planned to write about today actually... but PBR carried me away, damn it.

Sunday, September 18, 2016

Tales from the Countryside

Sorry for the long waits between posts. Not only have I been busy with work trips to the U.S. (gained some quarter-pounders of weight, but worked on some cool Pea combines), it just gets harder and harder to write about something I didn't write about before - almost 200 posts further! Graphics, physics, Programming / Delphi, games, horror, complaints about this and that... I like writing, but the topic needs to be fresh, funny, or informative. Not just for the sake of writing.

So, stirring my hand in a somewhat empty "idea grab-bag", what have we got today? Well, why not tell a bit more about that other stuff I program most days of the year: agricultural machinery? No, it has nothing to do with games, but for you programmers out there, it might be interesting to have a peek into the industrial/vehicle area of the wide programming spectrum. Very different disciplines and techniques on the one hand, but also the usual programming/design challenges on the other, like making a more abstract, reusable framework as you would do with game engines. And for you non-programmers, the story below should give some brief technological insights into our underestimated country brothers and sisters.

That feels right, doesn't it?

Alien Farms
As for me, other than eating steak and sometimes having the heavy odours of sugar beets or pigs rolling over our village, I don't have any agricultural background. I know milk comes from a cow, I drank beer in barns, but that's about it. As a kid, I associated tractors with mud, manure, and "that other world", somewhere outside the village.

It's a bit strange that -the Netherlands being a very small and densely packed country- with farms just around the corner, there is an invisible iron curtain between "those from the city" and "those others from the countryside". Farms with both cattle and crops are everywhere out there, yet somehow hidden between our cities and villages. You don't see heavy equipment or John Deere dealers on every corner as you would in America. There are very few agriculture-oriented schools, and back at school I knew very few farm kids. Usually they would blend in unnoticed, or form their own little group of bulky blonde-haired guys with old-fashioned clothes and that funny accent, making them sound a little bit dumber than the rest of us. But even dumber are the kids nowadays that actually believe milk and apples are produced in factories; they have never visited a farm.

That image changed when I had to find a final school assignment. Browsing through a list of IT companies, one name jumped out directly: "Ploeger", which means "somebody plowing". 99 out of 100 IT vacancies were about testing some unknown stuff, or web development. Hell, I like programming, except the web part of it, and neither did I envision myself in a suit, drinking coffee in some shitty office with other boring dudes. So that particular name caught my interest directly... I was just guessing it had something to do with tractors, those things I associated with poop, dirt and a certain degree of dullness. But which all of a sudden seemed like a better alternative for me. Real men, dirty hands, oil, chewing tobacco, handling heavy metal. Yeah...

So I googled that name; of course it could just as well be some little three-man company making a database for the local supermarket. But my instinct was right; not tractors but even bigger: self-propelled (meaning not towed but self-driving) harvesters, combines. Exactly what I was hoping for. Not much later I got my assignment: update the display + "computers" that run the whole circus in such a machine, a Pea combine in this case. Ploeger also produces Potato, Bean, Spinach, Parsley and Flower bulb harvesters, by the way.

Blackbox beat
I landed in quite a different world. Not talking about farms and manure, but the type of programming work. Basically there are two types of boys: those who tinker with their first moped, and those who don't. I definitely belong in the second category, meaning I can't hold a hammer (no, I'm not a plumber), nor have knowledge about anything motorized really. Well, let's say every person has their own talents. Yet it's a bit embarrassing to enter a factory where it's all about welding, hydraulic oil, diesel fumes, and AutoCAD drawings. You'll be programming a moving vehicle, but you don't even know the basic principles of combustion in an engine.

And almost 10 years later, I still don't, honestly. Not that people never explained it to me, or that I'm not interested. It's just that my head doesn't store things unless I use them in practice 100 times. The good news though: you don't really have to. Every piece of knowledge is valuable of course, but in general industrial applications are perfect examples of a "Black Box". There is input (sensors), some abracadabra happening inside that computer, and then there is output (actuators). On our machines that typically means the following. I'll explain.

Input: Electrical sensors
Dashboard, joystick, and other cabin switches or potentiometers (knobs you can turn), to control a machine. Sensors that measure speed (rotating shafts that generate pulses), (fluid) temperatures and levels, and mechanical positions determined via angle or proximity (contact yes/no) sensors. Sensors usually provide an electrical "Digital" or "Analog" signal. Digital simply means On or Off (0/1), like pressing a switch. Analog signals gradually increase or decrease a voltage, current or resistance value. For example, a laser distance sensor could deliver a signal between 500 and 4500 millivolt. Anything lower or higher could indicate a failure (no signal / short-circuit), and anything in between can be rescaled to centimetres.

A possible sensor scale function. A signal of 2.5 Volt would mean 75 cm in this case. Very low or high signals indicate (electrical / wiring / connection) issues, and we shouldn't rely on this sensor then. Never ever assume stuff keeps working forever on a machine.
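That scale function is simple enough to sketch in a few lines. The 500..4500 mV band comes from the text; the 0..150 cm output range is an assumption chosen so that "2.5 V = 75 cm" works out, since the real range isn't stated.

```python
# Sketch of the analog-sensor scaling from the caption above.
# Valid band 500..4500 mV (from the text); 0..150 cm is an assumed output range.

FAULT = None  # signal outside the valid band -> wiring/sensor problem

def scale_distance_mv(millivolt, lo_mv=500, hi_mv=4500, lo_cm=0.0, hi_cm=150.0):
    if millivolt < lo_mv or millivolt > hi_mv:
        return FAULT                               # open wire or short-circuit: don't trust it
    t = (millivolt - lo_mv) / (hi_mv - lo_mv)      # 0..1 within the valid band
    return lo_cm + t * (hi_cm - lo_cm)
```

The out-of-band check is the important part: a dead short reads 0 mV and a broken wire reads nothing at all, and neither should ever be mistaken for "0 cm".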

Sensors are used to inform users, and/or to define a state of a (sub)system, such as a position. You can put a sensor on anything really, although it would increase the price, as well the complexity of a machine. More electronics = a smarter machine, but also more chance on failure & more searching whenever things go wrong. And things WILL go wrong. I bet you heard your uncle complaining about how he could fix his old car by bonking it with a hammer, and nowadays some expensive nerd has to attach his laptop for every fart. Well nice to meet you, I'm that nerd.

Input: CAN-bus
Another type of input is digital messages, carrying the outcome of another system. For example, a GPS sensor that measures coordinates. Often such (advanced) data is provided via a communication interface, as an ordinary electrical signal is insufficient. You have probably heard of the good old "COM port" (remove the word "good" here), and otherwise you know about USB. A USB mouse is basically an advanced input, communicating data like a position, movement and clicks through that wire.

On vehicles, we usually have more than one computer device. It's not that we put a desktop PC somewhere in the cab, running the whole show. Smaller, less powerful, but very robust "Controllers" are placed throughout the machine - often physically near their target hardware, such as the Engine, cooling fan, or dashboard. Because of limited hardware capabilities, as well as for making reusable standard components, Controllers usually target just a few specific functions, rather than controlling the entire machine. So in other words, you end up with a network of multiple, complementary modules.

Now on our machines, we make most stuff ourselves (except Engine / Powertrain units). But on cars or trucks -made by the millions- the industry relies on off-the-shelf components; the Engine could be made by Mercedes, wipers by Bosch, the dashboard console by Arnold Schwarzenegger. The real challenge: how to ensure all these different nodes can talk with each other?

On vehicles, most communication between those Controllers is done via CANbus. CAN stands for "Controller Area Network", and was developed a few billion years ago. 1983 to be more precise, by Bosch, although it would take another 10 years or so before the automotive industry really started adopting CAN as a standard. What TCP/IP, WiFi and Ethernet are for modern computers, CANbus is for vehicles. A simple, 2-wire, robust network that allows multiple nodes (also called Controllers, ECUs, Modules, Devices --> programmable chips) to chat. A bit like a modern network, but much slower, yet also easier. It doesn't require hubs or routers, nor do nodes require an address, although practical situations usually require each node to have at least a unique number.

CAN works on the principle of broadcasting small data packets. Typically each node measures or calculates a bunch of numbers and states. Conveyor belt speed = 200 RPM, Running = True, lowSpeedAlarm = False. Those numbers are tightly packed into an "Envelope", which can carry up to 8 bytes (64 bits); that usually offers room for 8, 4 or 2 numbers, and/or a bunch of on/off states. Not much, and certainly not suitable for multimedia streams or text-based messaging (like XML), but at least you can send multiple envelopes, and give them unique messageIDs.
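To show how tight those 8 bytes are, here is a toy packing of the example signals above into one envelope. The byte layout (two 16-bit numbers plus a flag byte) is entirely made up for illustration; real machines follow the manufacturer's protocol or a standard like J1939.

```python
import struct

# Toy 8-byte CAN "envelope": belt RPM (16-bit), a reserved 16-bit slot,
# one flag byte of on/off states, and 3 padding bytes. Layout is invented.

def pack_envelope(belt_rpm, reserved, running, low_speed_alarm):
    flags = (running << 0) | (low_speed_alarm << 1)
    return struct.pack("<HHBxxx", belt_rpm, reserved, flags)   # exactly 8 bytes

def unpack_envelope(data):
    belt_rpm, reserved, flags = struct.unpack("<HHBxxx", data)
    return belt_rpm, reserved, bool(flags & 1), bool(flags & 2)
```

The sender and every receiver must agree on this layout byte-for-byte; the messageID is what tells a receiver which layout to apply.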

Nodes will broadcast data envelopes every X (milli)seconds, or on request/event. Other nodes, such as a Display, can read those packets and do whatever they need to do with that data. As said, envelope messageIDs, and sometimes node addresses, are used so a receiver knows what to expect in envelopeX from nodeY. It's truly simple, and that's exactly what we need in vehicles, where robustness (and thus simplicity + reliability) is mandatory for safety. Nodes going offline because of some IP address conflict is absolutely unacceptable.

The lay-out and formatting of messages is up to you... but to avoid a random mess, manufacturers create their own protocols, and/or follow Industry standards, such as J1939 or ISObus.

Output
Last but not least, there shall be Output. This puts things in motion. On the computer side, we have electrical output pins. They can deliver low-power signals, again in Digital or Analog format. Digital signals could be 0 or 24 Volt for example, which is just enough to drive a small light, or to switch a relay. Analog outputs could be currents, voltages, or quickly pulsating signals (PWM) that gradually increase (or decrease) as you desire "more".

"More" translates into physical systems that can make things go faster, harder, louder. Such systems typically use pressurized hydraulic oil, pneumatics (compressed air), or high-power electricity. Servo motors can turn a weak signal into a powerful and precise displacement of shafts. Proportional pumps or valves can regulate air or fluid flows, to drive shafts or push/pull cylinders. On agricultural machines, most of the work is done via hydraulics. A little 12 or 24V signal is enough to power a relay, which opens a valve, letting through oil that can push many thousands of kilograms.

Finally, just like input doesn't necessarily have to be a physical, electrical signal, neither does output. Systems can send out (CAN) messages with computed results, destined for another system. For example, a Combine could broadcast its (GPS) position, state, groundspeed and settings, so the fleet manager can monitor his machines on a computer. This is what they call "Telematics", by the way. Another example is the Engine ECU. Besides regulating the actual engine components, it measures all kinds of things, like coolant temperature or oil pressure. Other systems like the Display use this to warn, to drive cooling fans, or to destroke motors if the engine load tends to get too high. The Engine doesn't know who/what/how its signals will be used; it just drops them on the bus, in case somebody needs them.

Once upon a time, an ox or even people did all the work. Now people's job is not to fall asleep behind an (almost) self-driving machine where hydraulics are the muscle, and computers the brain.

The Blackbox part
Between the Input and Output sections, there is the "Blackbox": the microchip & program part. I don't know why they call it a Blackbox, but it probably has something to do with the fact that 99% of the people have absolutely no clue what happens inside that box. And even if you are a programmer, you still don't know unless you have seen the documentation or the actual program code.

Your typical industrial programmable Controller (see on the left) depicts the term "Blackbox" quite well. No idea what happens inside. Remember, they don't come with displays either! They emptied the Switch Cabinet (right) greatly though; lots of relays, transistors, timers and PIDs are now in the software.

Yep, that's where we shine, girls. Whether you'll be programming phone apps, games, tractors, or a webshop, programming will always be about logic, algorithms, some math, and trying to make systems flexible enough so they can be easily adjusted or ported to another application. Yet, my experience is that industrial programming is still quite different. For one thing, you won't be programming in a traditional style, as you would do with Java or C++. Or at least, you can, but it's just not common. And for a very good reason. Machinery often uses PLCs & Ladder diagrams. Vehicle and hydraulic applications often use CoDeSyS, Guide Plus1 from Sauer Danfoss, or (my favourite) IQAN from Parker Hannifin. The latter two are major suppliers of hydraulic equipment, and also developed their own range of controllers, displays, and design software.

In all those cases, the design software provides a far more visual way of programming. It often looks more like an electrical drawing, instead of text-based code lines. Being aimed at a very specific segment (vehicles, hydraulics, CANbus), these tools are also far more limited and restricted. That means you can do much more in C++, but there is also much more that can go WRONG in C++. All the "building bricks" that Siemens, Danfoss, Parker, Hitachi, Omron or whatever industrial supplier provide in their software are robust and proven. You really don't want your Boom Crane tumbling over because of some rare pointer error caused by a once-in-a-million multi-threading mistake. Traditional programming languages are far more powerful, but also require far more energy and skill to develop & test your own "building bricks". The average programmer is just not capable of doing that, and even so, it would cost companies a lot of time and money.

Here are three industrial programming tools. Quite different, huh? They have one thing in common though; take away advanced traditional programming techniques that are bound to go wrong, and offer ready-to-go blocks that do the hard stuff for you.

It took me some time and pride to step away from advanced (Delphi) to a more simple platform, but once you start dealing with angry clients, broken machines, confused service personnel, or 8-year-old machines that suddenly want a new feature, you're more than happy with these simplified systems. Really, my student "Look at me programming cryptic hard-core shit" attitude shifted towards "Easy doez it" over the years.

The Blackbox part: PID regulators
The focus is not so much on awesome programming techniques, but on smart design and making stable (idiot-proof) "Regulators", which demands knowledge of the machine and its users. A Regulator typically uses some input (sensors) and parameters (user settings) to calculate an output signal that goes to a relay, pump or valve (see the "Output" chapter).

An example. You may have heard of "Closed-Loop systems" or "PID Controllers". A very common task in industrial applications is to maintain a certain level of something. That could be a thermostat, trying to keep 21 degrees C in your living room. It could also be a proportional pump that tries to drive a shaft at 150 RPM. A very simple system would take a setpoint (user wants 150 RPM) and rescale it to a certain output: 1 Volt = 0 RPM, 10 Volt = 200 RPM, 7.5 V = 150 RPM. That's theoretical though. If Hulk Hogan strangles the shaft, it may not turn at all, 7.5 Volt or not. In practice, shafts may get dirty/rusty, valves or pumps may differ a little each time, cold or hot oil will behave differently, or an increase of product load on the shaft will slow it down.

The best way to compensate for this is by using a "Closed-Loop system", which can be a PID regulator. A sensor is used to measure feedback -the actual shaft speed / temperature / level / whatever-. If the measurement is below our desired setpoint, the output should increase or start. And vice versa: if the measurement is above our desired setpoint, the output should decrease or stop. You would stop a heater as soon as the temperature rises above the desired temperature. In fact, you should have stopped a whole lot earlier, because temperatures usually change very slowly, but once they do, they keep dropping or rising for a while. In other words, their feedback is very indirect. PID regulators often have their P(roportional), I(ntegral) and D(ifferential) factors adjustable, so you can make them more or less aggressive to differences or rate of change. The magic trick is to configure it in such a way that it responds quickly enough. Not too early, not too late, not too much, not too little.

If very accurate speeds aren't that important, or if a slowdown is highly unlikely, we stick with the theoretical output; an extra speed sensor plus PID controller is expensive. But if it does matter, there's work for us programmers. Not just making the PID controller itself: we also need a place where the setpoint can be changed (potmeter, display, ...), and some feedback is desirable. If the speed really gets too low (shaft stuck / actuator malfunction) we'd like to see an alarm. And depending on tolerances and accuracy requirements, you probably need some calibration procedures as well.

So, to summarize: we have a network of Controllers, and usually a Display. Each Controller receives input, either via directly connected sensors, indirectly connected sensors (on I/O expansion devices), or via CANbus. A Controller regulates certain functions, such as shafts, conveyors, lights, steering or driving systems. Finally we have output; Controllers form electrical signals going to relays, servos, pumps or valves. But output could also be in the form of outgoing messages, picked up by other systems.

As said, the good news for me as a programmer is that I don't have to care much about the input and output sections. And vice versa, the hands-on guys don't care about what happens inside that black box. Yet the interesting part of such machines is that many different disciplines meet. The input section is mainly designed by electrical engineering and executed by electricians. The output part by hydraulic engineering and (CAD) draughtsmen, as dimensions and physical rules apply here. And somewhere in between is the programming. At helicopter-view, project-management level, one has to understand bits from all departments. Especially where they tie together.

So to finish my little story about hillbillies stuck in mud & time: I quickly learned that, beneath the layers of dirt, there is a wonderful world of technology. The modern farmer isn't some smelly old grumpy dude with rubber boots and a sickle. The modern farmer sits behind a computer, collecting data from charts, robots, weather forecasts, satellites, and so on. The modern farmer is a manager on all fronts: crop quality, machinery, personnel, import/export, animals, service, ... A manager, but occasionally with some dirt on his hands, still knowing what hard work means.

Combines like the ones we make cost hundreds of thousands of dollars, have a wide network of service points spread all over the world, contain a good deal of computers and touchscreens, carry about 100 to 200 I/O (inputs/outputs) on average, and communicate with the cloud to upload position and data, as well as to download settings or chart data for partially automatic driving. Tractors, trucks and combines are driving forces behind communication technologies like SAE J1939 or ISObus: committees trying to standardize the messaging between controllers, engines, tractors and additional equipment. My little story above just scratched the surface. So next time you see a tractor or hear a funny-talking guy from the countryside, just remember there is much more than pigs and poop out there. And talking about poop… I just started on a fertilizer machine. Looking forward to seeing, feeling and smelling it :)

Thursday, August 11, 2016

Status update: Playable Demo

Overall there is little visual progress, mainly because I'm pretty much alone on the project once again. And I won't be actively searching for artists (or funding) at this point. I could, but we'd likely end up as always: artist-X thinks T22 is interesting and has six-hundred hours a day available. He or she "joins", not really knowing what to expect (where is the office? where is the rest of the team? where is the game? where are the tools? where are the documents? where are the goals? where is the compensation?). Conclusion: a bit of a bummer. And a few weeks later he or she claims to be "very busy".

Being very busy is the code-word for “screw it, I have better things to do”. School, work, moving into another house, dog died, computer crashed, et cetera. I've heard it all, many times. After some months and three half-finished assets, the artist either just disappears, or says he has to quit because of circumstances. With two kids and two jobs, I consider myself pretty busy as well. I don’t believe in “too busy”, but I understand you won’t be spending those few free hours on something… vague.

Yeah, managing a team is much more than just telling them what you want, and giving some compliments or feedback from time to time. Most people, including myself, want short-term results. A quick dose of fun. Modelling 3D seats that won't be used until two years later -maybe- isn’t fun. Drawing concepts that never come alive isn’t fun. Recording sound for non-existing monsters isn’t fun. So before asking again, I feel the project must be much better prepared, being in a further stage. But how to get there in the first place? Chicken-and-egg story. Artists will be attracted to projects with potential. But to make a project look promising, you need artists. Difficult situation.

Made half of the assets here. It doesn't look too good, but at least the viewer understands what's going on. Besides, making kettles or kitchens instead of programming is a nice change.

Make it, Work it, Fix it
But what I CAN do, in contrast to what happened before, is make an actual game as much as possible. It doesn't have to look good; that's where artists can shine later. Feed them ideas and let them work those out into something beautiful. So what I did the last months/year is just making the game. You may have read about physics and AI (behaviour trees) in previous posts. If you forget about graphics for a moment, these are equally (if not more) important building-blocks for a game. Walking a player, climbing stuff, picking up keys, solving a puzzle, showing an inventory, dealing with enemies, et cetera. Plus there needs to be an engine and editor part where we can script/program all that stuff in a robust and comfortable way.

Another thing I did is make most maps that will be used in the Playable Demo. Corridors, rooms, that kind of stuff mainly. Note that Tower22 is basically one huge map, but made of smaller “sectors”. Being limited in skill, using a half-finished renderer, and having only a small asset palette (most Tower22 objects and textures made before were industrial oriented, rather than old building / apartment / Soviet crap), these maps are empty and ugly. BUT, at least you can walk through them now, which hopefully gives future artists a much better understanding of the road ahead. And of course, artists can replace dummies and "play" their own stuff now.

I only have to finish a few more maps. Prototype maps, I must add. If an artist thinks corridor-X should look different, or proposes a slightly different routing through the playable demo, of course we can change things, although the "Demo Walkthrough" has already been written. This document describes all the events and sequences in the demo (do this, go there, do that, bla bla). So I also started implementing actual game scripts. For example, one of the very first “puzzles” is to do a couple of things in your apartment before the front door unlocks. Dress, prepare food, answer the phone. Sounds easy, but it triggered me to code quite a few additional things to make those puzzles work. An inventory system, a day/night cycle, combining or using items, door mechanics, semi-dynamic lightmaps that allow turning lights on or off, affecting their indirect light. And a lot of API functions that can be called by LUA scripts or Behaviour Trees to play sequences, set the clock, unlock doors, switch lights, and so on. In short: facilitate engine functionality to make an actual playable game.
Now that is NOT a good-looking inventory. Old-fashioned really, now that most games have a quick-access in-game HUD spin-dial -whatever you call them- or no inventory at all. But anyhow, the point is that we have *something* that works, giving the artist food for improvement.

The bigger picture
I've only coded the very first minutes of “GamePlay”, but eventually it will speed up, as the tools, API and library-of-whatever-is-needed grow. But other than describing these in-depth game details, that Walkthrough Document also states the more overall goals. What is it that we want to show? If the demo were finished tomorrow, and people downloaded it, what should they experience?

Of course, a horror game. But again, that is easier said than done, as I think the horror genre is an extremely difficult one. Why? Because it’s not about making a "fun" game. If you aim for a soccer game, the focus should be on soccer, super-smooth controls, and realism. If you're making a shooter, the focus is on cool weapons, challenging A.I. and maps that lend themselves to addictive battle. If you're making a horror game, the focus is on scaring people, which usually contradicts the fun part. Tower22 won't throw you into an arena with weapons. Smiles on your face won't cooperate with uncomfortable feelings. In general, games like Silent Hill or Resident Evil aren't much fun (and no, the later RE titles aren't true horror games in my opinion). But boy, do we really want to know what happens next. From one pair of wet pants to the next: sadomasochism.

To make things even more difficult, T22 is a slow-suspense game. No buckets of blood in your face, no hideous creatures leaping at you every twelve seconds. The fear element in this game should be much more subtle. Even though you don’t directly see it, you know things are messed up. Something bad is coming. Sure, there will be some jump scares and clichés, but all in all I try to make a unique setting, avoiding the tricks, scenery and plots you have seen too many times already. In other words, there is no real reference for this game, so I can’t really predict whether this game will be fun, scary, or intriguing either.

Dummies for Dummies
So, how to make sure a short demo will accomplish this vibe? We've got some issues here. First of all, resources and time are very limited, and even if I could make anything, you don’t pull out the big guns for a (teaser) demo yet. It starts gentle. Second, having these maps modelled and loaded into the game now, it's still a far cry from anything scary. What I see is mainly a buggy, clumsy, unfinished world. Plus, I've had those maps in mind for quite some years now, so they don't come as a surprise either. Yes, more than most puzzle or action games, horror games rely a lot on good sound and graphics.

And notice that "good graphics" doesn't necessarily mean next-gen, photo-realistic super graphics. But the content has to consistently follow a certain style. Having re-programmed the whole engine, with a lot of features still disabled, I feel my renewed rooms still miss that "personal touch" my previous results had. That engine wasn't perfect, but being a bit dark, blurry and noisy, rooms certainly had a certain vibe, in contrast to the cold, fabricated renders I have now. But hey, let's not drift too far into graphics again. My point was that I'm trying to set up an actual game now, following the locations and puzzles/scripts/events the documents describe.

Again, I find this pretty hard though. One of the very first things that will happen is you hearing an old phone ringing, and picking it up. But obviously, pretty much all resources for even such a simple event are still lacking. I actually do have a phone model, but no table or furniture to put it on. No ring-ring sound, no pickup animation, and definitely not a weird voice that speaks over the phone. Since I can't wait for artists to make all that stuff (because I practically have none), my answer lies in "dummies". Just make a simple ugly table. Pick up the mic and say "ring. ring.". Use Microsoft Sam or deform your own voice for the conversation that follows. Of course it sucks, but the artist now knows exactly what to do, and it functions as a temporary placeholder so we can proceed with the next event.
So there you have your dummy telephone on a dummy table in a dummy room, accompanied by "Hints" that can show additional info, pictures, or internet links.

The only little problem is that I hate making ugly stuff, and hearing my own voice. I'll quickly end up putting way too much energy into trying to do it well (probably with mediocre success only). So instead of making a boring desk object, I'd rather spend my energy on programming something else... like the day/night-cycle skybox + cascaded shadowmaps for the sun I did a few weeks ago. Well, that's something I have to learn, and making a list of "stuff to finish" with relatively simple jobs will probably help me do it, no matter how silly or boring.

Too bad this drives me even further away from a scary game though. Picking up a fake phone and hearing Microsoft Sam isn't scary, it's embarrassing. Of course an artist will polish it one day, hopefully, but with that thought in the back of your mind, you just can't test and judge your own horror game. Somebody else needs to do that ;)

Lost Fantasies
I also noticed that making horror scenarios requires constant training. As I'm getting older, I play fewer games, and my fantasies are becoming more mature (read: more boring). As a kid, your brain flies through space, candyland, naked girls, war-torn cities, silly jokes, hellish creations, castles and drunk nights. Or even better, a non-logical combination of all of that. As an adult, your brain flies through taxes, work, annoyances about the neighbours, making your kids eat broccoli, and how the garden should be restyled. I hate to say it, but creativity slowly drains away, and putting your mind 200% into a horrific Soviet-style skyscraper gets more difficult. I should read more books, watch more movies and play more games again (but Silent Hill with an 8-year-old daughter...). But I should also make time to fantasize again. Sounds childish, but really. I shouldn't have swapped my bicycle for a car a few years ago, because those 60 minutes each day were great for drifting away into a T22 scenario or whatever fantasy, while cycling to work.

So we have a kitchen here. Next step: make it look better. Final step -and the real challenge-: make it contribute to a terrifying horror atmosphere one way or another.

Well anyhow, that's my status update. So basically I'm making a simplistic, dummy variant of the playable demo, and when I feel it's "ready enough", I'll ask artists to replace those dummies with cool stuff. And being a bit more finished programming-wise by then, I'll hopefully have more time to direct them: doing on-request engine features, improving the tools, reviewing their work, compensating with a bit of money, ensuring the goals remain clear, et cetera. Not that artists wouldn’t be welcome today, but I just want to make sure I do a better job keeping them in before asking again.

The final stage -when the demo is "beautified" and functional- is to poke some famous YouTube dude(s) to walk through it. A good way to reach far more people. If people like it and want to see more, that would be an appropriate time to launch a Kickstarter campaign. I could do the same now as well, but with only a few thousand people knowing about this project... Timing and patience, my friends.

Saturday, July 2, 2016

Behavior Trees - Implementation

Oh dear, the BehaviorTree article asked for some additional (coding) explanation. Normally I avoid code snippets as much as possible, for various reasons. First of all, it usually doesn't make a fun article for non-programmers to begin with. We're here to fool around a bit, not to teach people :) And second, I'm not so sure I'm a good teacher anyway. When it comes to programming, I know a bit about everything, in a lot of languages. But I don't have a true expertise. And certainly not in the BehaviorTree or A.I.-in-general area. Coding articles are inevitably followed by wise guys asking why I'm not doing X, telling me I shouldn't be doing Y, claiming Z is better, and calling me a douchebag.

Another reason is the size of the article. Even though my BehaviorTree code is still minimal -and I tend to keep all my code as small as possible (don't forget I'm doing this in my spare time)- it already covers about 3200 lines. Way too much for an article. Sure, you don't have to see every bit to get a good understanding, but I find it difficult to make a compact, comprehensible, yet "complete-enough" tutorial. Write too much and nobody will read and understand it. Make it too short and people still know nothing. And of course there is the lazy type of programmer who just wants a download link with plug & play *working* code.

Last, writing coding articles is boring. For me at least. But ok, ok, ok. Here we go. Don't say I didn't warn you! And in case you missed it: it's Delphi/Pascal code, plus it's written for Engine22, so mind the names and quirks here and there.

Oh, and one more reason - syntax highlighting never works properly for me on Blogspot. Managed to get it working a few times, but each time they change something, so the chain snaps again. If you're not seeing highlighting below, I'm still trying to fix it.

  1. Short refresh about BT's
  2. Base Nodes - base code used for all nodes
  3. Composite Nodes
  4. Decorator Nodes
  5. Condition Nodes
  6. Action Nodes
  7. Ticks - "Looping"
  8. Blackboard - Custom memory storage container for trees
  9. Loading a Tree
  10. Registering Node classes

1. Brain Refresh

Before starting, really read the first article again (and the others I linked to) to get a global understanding of what a BT does. Grasp the "node" concept (composites, decorators, conditions, actions). But since those nodes are the core of everything, I'll repeat the four main types once more. A BT is made of nodes, which come in the following main flavours:

1. Composites

 "Flow regulator" nodes. They execute one or (usually) more child-nodes. This can be done in a certain order (a "Sequence") or randomly. It can also be done conditionally, meaning it stops as soon as a child-node FAILED or is still RUNNING. Or it just executes them all, regardless.
2. Conditions
A check on something, returning SUCCESS or FAIL. Is it 12 o'clock? Does the player have the Blue-Key in his inventory? Are we within 5 meters of our target? Did we get hit by a flamethrower?
3. Actions
Stuff that actually does something in your game/program/robot. Pick a target, move to X, rotate, animate, play sounds, give 10% health, perform a karate kick, et cetera.
4. Decorators
A couple of handy logic blocks that can be placed in front of Composites or Conditions to invert, delay or manipulate their results in some way. "NOT" Player-is-Cool.

And as told, nodes can return one of these three states:
  • SUCCESS - Condition met or Action completed. Onto the next thing.
  • FAIL - Condition not met, Action cannot complete.
  • RUNNING - Task not done yet / pending. Still moving towards X, pie needs to be 10 more minutes in the oven.

Plus, for debugging purposes, you could add an ERROR result, for nodes that couldn't be executed because you gave them wrong parameters, or because the code within raised an exception. Shouldn't happen, but it does happen while developing things. It can help tracing faults.

It's up to us to implement these four types of nodes, and then to override them with our own stuff, because obviously, Conditions and Actions are very different for each application. Welding robots have rotate, coordinate and, well, welding actions, while a boxing game is more in terms of dodging and beating the shit out of that single opponent.
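Before diving into the Delphi code, the whole division can be sketched in a few lines. A toy Python version (my own naming, not Engine22's API), showing one node of each flavour:

```python
from enum import Enum

class Result(Enum):
    SUCCESS = 1
    FAIL = 2
    RUNNING = 3

class Condition:                 # wraps a boolean check on the context
    def __init__(self, check): self.check = check
    def tick(self, ctx):
        return Result.SUCCESS if self.check(ctx) else Result.FAIL

class Action:                    # wraps a function that does something and reports a Result
    def __init__(self, fn): self.fn = fn
    def tick(self, ctx): return self.fn(ctx)

class Inverter:                  # decorator: flip SUCCESS/FAIL, pass RUNNING through
    def __init__(self, child): self.child = child
    def tick(self, ctx):
        r = self.child.tick(ctx)
        if r == Result.RUNNING:
            return r
        return Result.FAIL if r == Result.SUCCESS else Result.SUCCESS

class Sequence:                  # composite: stop at the first non-SUCCESS child
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for child in self.children:
            r = child.tick(ctx)
            if r != Result.SUCCESS:
                return r
        return Result.SUCCESS
```

A welding robot and a boxing game would plug entirely different checks and actions into these same four shells.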

Right, got the basics? If not, go back to start and do not collect $20,000. In Engine22, this is my basic file lay-out, for now:
                * E22_AI.BehaviorTrees.pas
                *** E22_AI.BehaviorTrees.Blackboard.pas
                *** E22_AI.BehaviorTrees.Nodes.pas
                ***** E22_AI.BehaviorTrees.Nodes.Composites.pas
                ***** E22_AI.BehaviorTrees.Nodes.Decorators.pas
                ***** E22_AI.BehaviorTrees.Nodes.Conditions.pas
                ***** E22_AI.BehaviorTrees.Nodes.Actions.pas

Since there will be a LOT of Conditions and Actions especially, those files will probably branch further. You could split into movement, combat, idle behaviour, et cetera. As for the "Blackboard": that's not a pirate name, but a memory storage container our tree(s) can read and write custom data to. But let's begin coding them pesky nodes then.

2. Them pesky nodes – BASE NODE

Delphi being OOP, we start with an abstract "base node" that all other nodes will derive from. Here, bang, Turbo Pascal time:

      eAI_BT_BaseNode     = class
          isOpen          : eBool;        // Node has been evaluated this tick, or in a previous tick & still running
          // Information
          nodeTitle       : aString32;    // Custom title
          nodeDescription : aString128;   // Custom description, by artist
          nodeCoords      : eVec2;        // X Y, for editor views
          nodeGUID        : aString48;    // Unique ID
          parentNode      : eAI_BT_BaseNode;  // Decorator or Composite node that links to us

          procedure   initialize( parentNode : eAI_BT_BaseNode );  virtual; // Set initial vars and such
          destructor  destroy(); virtual;

          // Execution
          function   _execute(  tick : peAI_BT_TickInfo ) : eAI_BT_Result;
          procedure   enter(    tick : peAI_BT_TickInfo ); virtual; // called every time a node is executed
          procedure   open(     tick : peAI_BT_TickInfo ); virtual; // called only when the node is opened (when a node returns RUNNING, it will keep opened until it returns other value or the tree forces the closing);
          function    tick(     tick : peAI_BT_TickInfo ) : eAI_BT_Result; virtual;
          procedure   close(    tick : peAI_BT_TickInfo ); virtual; // called when a node return SUCCESS, FAILURE or ERROR, or when the tree forces the node to close;
          procedure   exitNode( tick : peAI_BT_TickInfo ); virtual; // called every time at the end of the node execution.

          // Editing
          function    getTitle() : uString;
          procedure   setTitle( const title : uString );
          function    getGUID() : uString;
          procedure   setGUID( const GUID : uString );
          function    GUIDequals( GUID : uString ) : eBool;
          function    getDescription() : uString;
          procedure   setDescription( const desc : uString );
          function    getCoords() : eVec2;
          procedure   setCoords( const x,y : eFloat ); overload;
          procedure   setCoords( const v : eVec2 ); overload;

          // Child management
          procedure   addChild( node : eAI_BT_BaseNode ); virtual;
          procedure   removeChild( node : eAI_BT_BaseNode ); virtual;
          function    getChildrenCount() : eInt; virtual;
          function    getChild( const index : eInt ) : eAI_BT_BaseNode; virtual;

          // Property management
          function    getPropertyCount() : eInt; virtual;
          function    getProperty( const index : eInt ) : eAI_BT_NodeProperty; virtual;
          procedure   setProperty( const index : eInt; const value : uString ); overload; virtual;
          procedure   setProperty( propName : uString; const value : uString ); overload; virtual;
          procedure   copyFrom( otherNode : eAI_BT_BaseNode );  // Copy props from another node

          // Visualizer (editor)
          procedure   draw(); virtual;
      end; // eAI_BT_BaseNode

I hope this header-code is somewhat self-explanatory. Plus, you can forget about 75% of it, as most methods are for (future) editing purposes. If you use an external editor, you don't have to define coordinates, descriptions or drawing functions. More important are the Execute, Open, Tick and Close functions:
  • tick          Runs the actual node evaluation code.
  • open          Called when the node is executed for the first time since it was last closed.
  • close         Called when the node is "done" (SUCCESS or FAIL, not RUNNING).
  • _execute      Calls the enter/open/tick/close/exit functions in the right order.

      function eAI_BT_BaseNode._execute( tick : peAI_BT_TickInfo ) : eAI_BT_Result;
      var status    : eAI_BT_Result;
          listIndex : eInt;
      begin
            // Add to "Entered" list
            tick.evaluatedNodeCnt := tick.evaluatedNodeCnt + 1;
            listIndex             := tick.openedNodes.count;
            tick.openedNodes.Add( self );  // Keep track of the evaluated nodes.
                                           // Can be interesting for debugging, visualizing,
                                           // or optimizing later on.
            // Open
            self.enter( tick );
            if not ( self.isOpen ) then begin
                self.isOpen := true;
                self.open( tick ); // not opened before
            end;

            // Execute logic
            status := self.tick( tick );

            // Close
            if ( status <> eAI_BT_RUNNING ) then begin
                self.close( tick );
                // Remove ourselves & children nodes
                while ( tick.openedNodes.count > listIndex ) do begin
                    eAI_BT_BaseNode( tick.openedNodes[tick.openedNodes.count-1] ).isOpen := false;
                    tick.openedNodes.Delete( tick.openedNodes.count-1 );
                end;
            end;
            self.exitNode( tick );

            result := status; // Report our result to our parent node
      end; // _execute

Note that this “_execute” function is potentially called every cycle, for every node (in the worst case). Games or realtime robotic applications tend to cycle through their program many times per second, evaluating their logic. BehaviorTrees refer to these cycles as “ticks”. More about that later.
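The tick concept itself is nothing more than calling the root node over and over from the main loop. A tiny self-contained Python sketch (toy names of my own, and plain strings instead of a result enum, for brevity):

```python
class CountdownAction:
    """Toy leaf node: reports RUNNING for a few ticks, then SUCCESS --
    simulating a long action such as 'walk to X' or 'bake the pie'."""
    def __init__(self, ticks_needed):
        self.ticks_left = ticks_needed
    def tick(self, blackboard):
        self.ticks_left -= 1
        return "RUNNING" if self.ticks_left > 0 else "SUCCESS"

def run_ticks(root, blackboard, n_ticks):
    """One iteration = one tick: the tree is re-evaluated from the root.
    In a game this happens every frame (or every few frames)."""
    return [root.tick(blackboard) for _ in range(n_ticks)]

print(run_ticks(CountdownAction(3), {}, 3))   # ['RUNNING', 'RUNNING', 'SUCCESS']
```

The RUNNING result is exactly what lets a node stay "open" between two ticks, as in the _execute code above.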

2.2 Custom Node Properties

The nodes that we’ll make later will mainly override the Tick and Open functions; that's where your magic goes. Also not unimportant are the “Property” functions. In many cases you want to feed your Actions or Conditions some background info. A “setTarget” action also wants to know “WHAT TARGET?!”. The player? The closest foe? The toilet bowl? And a “check clock” condition should know what time to check, in terms of hours and minutes. Each node has a fixed number of properties, each with a name, type (int, bool, float, string, vector(coordinate)), unit and default value. When loading trees from files later on, those names are important for matching. The other info is mainly interesting if you plan to make your own editor.

      eAI_BT_NodePropertyType = ( eAI_BT_PropBOOL   = 0,
                                  eAI_BT_PropINT    = 1,
                                  eAI_BT_PropFLOAT  = 2,
                                  eAI_BT_PropVEC3   = 3,
                                  eAI_BT_PropVEC4   = 4,
                                  eAI_BT_PropCOL3   = 5,
                                  eAI_BT_PropCOL4   = 6,
                                  eAI_BT_PropSTRING = 7,
                                  eAI_BT_PropENTITY = 8,
                                  eAI_BT_PropSOUND  = 9 );
      eAI_BT_NodeProperty = record
          idName          : aString16;
          value           : aString128;
          defaultValue    : aString128;
          unitName        : aString8;
          propType        : eAI_BT_NodePropertyType;

          procedure make( name : string; value, defaultValue : eInt; const unitName : string ); overload;
          procedure make( name : string; value, defaultValue : eFloat; const unitName : string ); overload;
          procedure make( name : string; value, defaultValue : eBool );   overload;
          procedure make( name : string; value, defaultValue : uString; const typ : eAI_BT_NodePropertyType ); overload;
          procedure make( name : string; value, defaultValue : eVec3   ); overload;
          procedure make( name : string; value : eES_EntityID ); overload;
      end; // eAI_BT_NodeProperty

3. Composite Nodes

So far, abstract meaningless code. Let’s override that abstract node and turn it into real nodes we can use, starting with composites. There aren’t many types of composites, and although you are completely free in giving them names and logic, you should try to follow the standard types. Some very common ones are:

·         (Memorized) Sequence
Loop through the children, aborting the sequence as soon as one does not return SUCCESS (it FAILED or is still RUNNING). If it’s a memorized sequence, resume at the child node that was RUNNING the previous tick.

·         Priority or Selector
Basically the IF THEN ELSE. Stop looping through the children as soon as one returns SUCCESS or RUNNING.

·         Parallel
Executes all children, regardless of their outcome. Eventually returns SUCCESS if more than X children succeeded.

And then there is the START or ENTRY node. Which doesn’t do shit, but is connected to a single child. This is where the tree starts. Keep in mind a tree could execute sub-trees, starting at their entry points (and eventually returning an overall result as well).
Probably you will be using the Sequence to begin with. We can code them as follows:

      eAI_BT_NodeComposite    = class( eAI_BT_BaseNode )
          children            : TList;

          procedure   initialize( parentNode : eAI_BT_BaseNode ); override;
          destructor  destroy(); override;
          procedure   addChild( node : eAI_BT_BaseNode ); override;
          procedure   removeChild( node : eAI_BT_BaseNode ); override;
          function    getChildrenCount() : eInt; override;
          function    getChild( const index : eInt ) : eAI_BT_BaseNode; override;
      end; // eAI_BT_NodeComposite

      // Execute all children until one does NOT return SUCCESS
      // Return SUCCESS if all children succeeded, FAIL if any of the children FAILED
      eAI_BT_NodeSequence     = class( eAI_BT_NodeComposite )
          function  tick(  tick : peAI_BT_TickInfo ) : eAI_BT_Result; override;
      end; // eAI_BT_NodeSequence

      // Same as Sequence, but keeps track of the position so earlier succeeded children
      // won't be re-executed until the parent node is closed.
      // Return SUCCESS if the last child succeeded, FAIL if any of the children FAILED
      eAI_BT_NodeMemSeq       = class( eAI_BT_NodeComposite )
          runningChildIndex   : eInt;
          procedure open(  tick : peAI_BT_TickInfo ); override;
          function  tick(  tick : peAI_BT_TickInfo ) : eAI_BT_Result; override;
      end; // eAI_BT_NodeMemSeq

{ eAI_BT_NodeSequence }

      function eAI_BT_NodeSequence.tick(tick: peAI_BT_TickInfo): eAI_BT_Result;
      var i : eInt;
      begin
            // Loop through children until one FAILED or RUNS
            for i:=0 to self.children.count-1 do begin
                result := eAI_BT_BaseNode( self.children[i] )._execute( tick );
                if ( result <> eAI_BT_SUCCESS ) then
                    exit;  // FAIL or RUN
            end;
            result := eAI_BT_SUCCESS; // All children executed with SUCCESS
      end; // tick

{ eAI_BT_NodeMemSeq }

      procedure eAI_BT_NodeMemSeq.open( tick : peAI_BT_TickInfo );
      begin
            self.runningChildIndex := 0;
      end; // open

      function eAI_BT_NodeMemSeq.tick(tick: peAI_BT_TickInfo): eAI_BT_Result;
      var i       : eInt;
          child   : eInt;
      begin
            // Start where we ended last time (if running previously)
            child := self.runningChildIndex;

            // Loop through children until one FAILED or RUNS
            for i:=child to self.children.count-1 do begin
                result := eAI_BT_BaseNode( self.children[i] )._execute( tick );

                // Wait until the current child finished
                if ( result <> eAI_BT_SUCCESS ) then begin
                    if ( result = eAI_BT_RUNNING ) then
                        self.runningChildIndex := i;  // For next Tick
                    exit; // FAIL or RUN
                end;
            end;

            result := eAI_BT_SUCCESS; // All children executed with SUCCESS
      end; // tick

So here we showed how a (memorized) Sequence can be implemented. As you can see, it still doesn't do much other than executing its children. Those children could be other Composites, Decorators, or eventually Conditions and Actions. Quite often a Sequence will first check one or more Conditions:
      Sequence
          IF health < 25        (condition)
          Find medkit           (action)
          Move to medkit        (action)
          Pick up medkit        (action)
          Buddha time           (action)

If those first conditions aren't met, there is often no need to execute any further actions. Be aware with Memorized Sequences though, that those conditions aren't re-checked every tick. If the dog eats the medkit in the meantime, our NPC still continues his procedure, unless there is some exit strategy implemented.
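To make that difference concrete, here is a minimal sketch of both Sequence flavors. This is deliberately plain Python, not Engine22 code, and the CountingAction helper is an invented stand-in for a real action node:

```python
# Minimal sketch: a plain Sequence re-ticks every child each cycle, while a
# memorized Sequence resumes at the child that was RUNNING last time.
SUCCESS, FAIL, RUNNING = "SUCCESS", "FAIL", "RUNNING"

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            result = child.tick()
            if result != SUCCESS:
                return result          # FAIL or RUNNING
        return SUCCESS

class MemSequence(Sequence):
    def __init__(self, children):
        super().__init__(children)
        self.running_index = 0         # reset this in open(), when re-entered
    def tick(self):
        for i in range(self.running_index, len(self.children)):
            result = self.children[i].tick()
            if result != SUCCESS:
                if result == RUNNING:
                    self.running_index = i  # resume here next tick
                return result
        return SUCCESS

class CountingAction:
    """Succeeds after n ticks; counts how often it was ticked."""
    def __init__(self, n):
        self.n, self.ticks = n, 0
    def tick(self):
        self.ticks += 1
        return SUCCESS if self.ticks >= self.n else RUNNING

# A memorized sequence does not re-tick the already-succeeded first child:
first, slow = CountingAction(1), CountingAction(3)
seq = MemSequence([first, slow])
while seq.tick() == RUNNING:
    pass
print(first.ticks)  # 1 -- ticked once, then skipped on later cycles
```

Swap `MemSequence` for `Sequence` in the example and `first.ticks` ends up at 3, because a plain Sequence re-runs the succeeded children every cycle; that re-run is exactly the condition re-checking discussed above.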

4. Decorator Nodes

A very basic but useful decorator is the Inverter or "NOT" node. Decorators always have a single child, and manipulate its result. The Inverter turns SUCCESS into FAIL, and vice versa.

      eAI_BT_NodeDecorator    = class( eAI_BT_BaseNode )
          child               : eAI_BT_BaseNode;

          procedure   initialize( parentNode : eAI_BT_BaseNode );  override;
          destructor  destroy(); override;
          procedure   addChild( node : eAI_BT_BaseNode ); override;
          procedure   removeChild( node : eAI_BT_BaseNode ); override;
          function    getChildrenCount() : eInt; override;
          function    getChild( const index : eInt ) : eAI_BT_BaseNode; override;
      end; // eAI_BT_NodeDecorator

      eAI_BT_NodeInverter     = class( eAI_BT_NodeDecorator )
          function  tick(  tick : peAI_BT_TickInfo ) : eAI_BT_Result; override;
      end; // eAI_BT_NodeInverter

{ eAI_BT_NodeDecorator }

      procedure eAI_BT_NodeDecorator.initialize( parentNode : eAI_BT_BaseNode );
      begin
            inherited initialize( parentNode );
            self.child    := nil;
      end; // initialize

      destructor eAI_BT_NodeDecorator.destroy();
      begin
            // Do not destroy children, must be done by owner tree
            inherited destroy();
      end; // destroy

      function eAI_BT_NodeDecorator.getChild(const index: eInt): eAI_BT_BaseNode;
      begin
            result := self.child;
      end; // getChild

      function eAI_BT_NodeDecorator.getChildrenCount() : eInt;
      begin
            if ( self.child = nil ) then
                result := 0 else
                result := 1;
      end; // getChildrenCount

      procedure eAI_BT_NodeDecorator.addChild( node : eAI_BT_BaseNode );
      begin
            self.child := node;
      end; // addChild

      procedure eAI_BT_NodeDecorator.removeChild( node : eAI_BT_BaseNode );
      begin
            self.child := nil;
      end; // removeChild

{ eAI_BT_NodeInverter }

      function eAI_BT_NodeInverter.tick(tick: peAI_BT_TickInfo): eAI_BT_Result;
      begin
            if ( self.child = nil ) then
                result := eAI_BT_ERROR
            else begin
                result := self.child._execute( tick );
                if ( result = eAI_BT_SUCCESS ) then
                    result := eAI_BT_FAIL else
                if ( result = eAI_BT_FAIL ) then
                    result := eAI_BT_SUCCESS;
            end;
      end; // tick

Got that? Fine, on to the really interesting nodes: Conditions and Actions.

5. Condition Nodes

There are no default Condition nodes, as they really depend on your needs. But let's come up with something practical: a node that checks whether a certain entity (could be the player, but also a hamburger) is within range. We will also give this node some custom properties: the desired distance in meters, and an entity idName – the target to check. Note by the way that Condition (or Action) nodes do not have children, so their "getChildrenCount()" should always return 0.

      eAI_BT_NodeCondition        = class( eAI_BT_BaseNode )
      end; // eAI_BT_NodeCondition

      eAI_BT_Node_cEntInRange     = class( eAI_BT_NodeCondition )
            entity                : eES_EntityAbstract;
            entityIdName          : uString;
            distance              : eFloat;
          procedure   initialize( parentNode : eAI_BT_BaseNode ); override;
          procedure   open(  tick : peAI_BT_TickInfo ); override;
          function    tick(  tick : peAI_BT_TickInfo ) : eAI_BT_Result; override;

          function    getPropertyCount() : eInt; override;
          function    getProperty( const index : eInt ) : eAI_BT_NodeProperty; override;
          procedure   setProperty( const index : eInt; const value : uString ); override;
      end; // eAI_BT_Node_cEntInRange

{ eAI_BT_Node_cEntInRange }

      procedure eAI_BT_Node_cEntInRange.initialize( parentNode : eAI_BT_BaseNode );
      begin
            inherited initialize( parentNode );
            self.distance     := 5;
            self.entityIdName := '';
            self.entity       := nil;
      end; // initialize

      function  eAI_BT_Node_cEntInRange.getPropertyCount() : eInt;
      begin
            result := 2;
      end; // getPropertyCount

      function  eAI_BT_Node_cEntInRange.getProperty( const index : eInt ) : eAI_BT_NodeProperty;
      begin
            case (index) of
                0: result.make( 'entity'  , self.entityIdName, 'Player', eAI_BT_PropENTITY );
                1: result.make( 'distance', self.distance    , 5, 'meters' );
            end;
      end; // getProperty

      procedure eAI_BT_Node_cEntInRange.setProperty( const index : eInt; const value : uString );
      begin
            case (index) of
                0: self.entityIdName := value;
                1: self.distance     := strToFloat( value );
            end;
      end; // setProperty

      procedure eAI_BT_Node_cEntInRange.open( tick : peAI_BT_TickInfo );
      begin
            if ( self.entity = nil ) then begin
                // Find our target
                self.entity := _ES.getManager().getEntityByName( self.entityIdName );
            end;
      end; // open

      function  eAI_BT_Node_cEntInRange.tick(tick: peAI_BT_TickInfo): eAI_BT_Result;
      var dist : eFloat;
      begin
            if ( self.entity = nil ) then begin
                // Maybe entity got unloaded in the meanwhile
                result := eAI_BT_FAIL;
            end else begin
                // Get distance between our parent entity, and our target
                dist := self.entity.getPos().distanceTo( tick.entity.getPos() );
                if ( dist < self.distance ) then
                    result := eAI_BT_SUCCESS else
                    result := eAI_BT_FAIL;
            end;
      end; // tick

Be aware that this node is sensitive to a few faults. Maybe the entityName was spelled wrong. Maybe we found the entity, but it gets destroyed later on. Also, when setting properties, you may want some exception checking on top of that, in case invalid numbers are given. The whole purpose of BehaviorTrees is to provide (the artist / mapper / designer) a robust tool to create A.I. And people make mistakes, so be prepared.

6. Action Nodes

The last type of node we show: Action-Man. Actions actuate something. We could use them to write some custom data into our memory "Blackboard", to pick a target, to throw grenades, and so on. Typically we want to split behavior up into simple actions that can be reused for a lot of different procedures. Moving is an excellent example, though a difficult one, because movement contains a ton of deeper (engine) logic. Picking a target, moving to it, physics, gravity, collision detection, animation, inverse kinematics while climbing a stair, and so on. You could deal with each of those sub-actions via your BehaviorTree, but it might be easier for the A.I. designer to let the engine take care of that automatically.

For demo purposes, I picked a simpler action: doing nothing. A delay. After an adjustable amount of seconds, it will return SUCCESS. But as with many actions, this takes a while. In the meantime the node returns "RUNNING". This affects the way parent composites deal with it. Memorized Sequences will remember the current action, so it can be called again next tick and proceed.

      eAI_BT_NodeAction       = class( eAI_BT_BaseNode )
      end; // eAI_BT_NodeAction

      eAI_BT_Node_aWait       = class( eAI_BT_NodeAction )
            elapsed          : eFloat;
            waitTime         : eFloat;
          procedure initialize( parentNode : eAI_BT_BaseNode ); override;
          function  getPropertyCount(): eInt; override;
          function  getProperty(const index: eInt): eAI_BT_NodeProperty; override;
          procedure setProperty(const index: eInt; const value: uString); override;

          procedure open(  tick : peAI_BT_TickInfo ); override;
          function  tick(  tick : peAI_BT_TickInfo ) : eAI_BT_Result; override;
      end; // eAI_BT_Node_aWait

{ eAI_BT_Node_aWait }

      procedure eAI_BT_Node_aWait.initialize(parentNode: eAI_BT_BaseNode);
      begin
            inherited initialize( parentNode );
            self.elapsed := 0;
            self.waitTime:= 1;
      end; // initialize

      function  eAI_BT_Node_aWait.getPropertyCount(): eInt;
      begin
            result := 1;
      end; // getPropertyCount

      function  eAI_BT_Node_aWait.getProperty(const index: eInt): eAI_BT_NodeProperty;
      begin
            result.make( 'time' , self.waitTime  , 1, 'sec' );
      end; // getProperty

      procedure eAI_BT_Node_aWait.setProperty(const index: eInt; const value: uString);
      begin
            case ( index ) of
                0: self.waitTime  := strToFloat( value );
            end;
      end; // setProperty

      procedure eAI_BT_Node_aWait.open( tick : peAI_BT_TickInfo );
      begin
            self.elapsed := 0; // Reset timer when we got re-opened
      end; // open

      function eAI_BT_Node_aWait.tick(tick: peAI_BT_TickInfo): eAI_BT_Result;
      begin
            self.elapsed := self.elapsed + tick.deltaSecs;
            if ( self.elapsed >= self.waitTime ) then
                result := eAI_BT_SUCCESS else
                result := eAI_BT_RUNNING;
      end; // tick

7. Watch out for Ticks

All right, so far the nodes. The only way to really get comfortable with them is just by doing. My advice: start with a simple scenario, like the "Seat-2D2" video showed, and model it in a free tool, just to get the hang of it. As you go, you'll figure out what kind of nodes you'll be needing, and what kind of parameters they should use. And very likely, you will rethink your whole node toolset at some point, generating a more logical, easier-to-use set. Don't be afraid to take some missteps. The beauty is that you can relatively easily remove and introduce nodes in your package; the code above it – that runs the tree – won't be affected.

The next thing to do, is making a “Tree” class. The BT itself is a collection of nodes, and provides the logic to run them.

      eAI_BT_TickInfo       = record
          deltaSecs         : eFloat;             // Elapsed time between 2 cycles
          entity            : eES_Entity;         // Parent entity (NPC)
          blackboard        : eAI_BT_Blackboard;  // Custom Read/Write Memory
          navigator         : eAI_Navigator;      // For movement, pathfinding

          // Evaluation
          evaluatedNodeCnt  : eInt;
          openedNodes       : TList;              // Tracker of evaluated nodes
      end; // eAI_BT_TickInfo
      peAI_BT_TickInfo  = ^eAI_BT_TickInfo;

      eAI_BT_BehaviorTree  = class
          tick          : eAI_BT_TickInfo;  // Arguments to pass to the nodes when executing a tick
          blackboard    : eAI_BT_Blackboard;// Memory container
          navigator     : eAI_Navigator;    // For movement

          inUse         : eBool;
          instanceGroup : eAI_BT_BehaviorTreeInstanceGroup;
          root          : eAI_BT_BaseNode;  // Start evaluation here
          allNodes      : TList;            // All (sub)node instances used in this tree

          constructor create();
          destructor  destroy();
          procedure   clear();

          procedure   execute( entity : eES_Entity; const deltaSecs : eFloat );
          function    addNode( const nodeClassIdName : uString;
                                     parentNode      : eAI_BT_BaseNode ) : eAI_BT_BaseNode;
          function    getNode( const GUID : uString ) : eAI_BT_BaseNode;
          procedure   removeNode( node : eAI_BT_BaseNode );

          // Loader
          procedure   copyFrom( otherTree : eAI_BT_BehaviorTree );
          procedure   loadFromFile_B3JS( const filename : uString );  // Online editor format
          procedure   loadFromStream_E22( str : TStream );  // Engine22 built-in format
      end; // eAI_BT_BehaviorTree

{ eAI_BT_BehaviorTree }

      constructor eAI_BT_BehaviorTree.create();
      begin
            inherited create();

            // Blackboard
            self.blackboard     := eAI_BT_Blackboard.create();

            // Navigator
            self.navigator      := eAI_Navigator.create();

            self.allNodes       := TList.Create();
            self.inUse          := false;
            self.instanceGroup  := nil;

            // Root
            self.root           := eAI_BT_NodeRoot.create( );
            self.root.setTitle( 'root' );
            self.root.initialize( nil );
      end; // create

      destructor eAI_BT_BehaviorTree.destroy();
      begin
            self.clear();              // Destroy our node instances
            self.blackboard.destroy();
            self.navigator.destroy();
            self.allNodes.destroy();
            inherited destroy();
      end; // destroy

      procedure   eAI_BT_BehaviorTree.clear();
      var i : eInt;
      begin
            for i:=0 to self.allNodes.count-1 do begin
                eAI_BT_BaseNode( self.allNodes[i] ).destroy();
            end;
            self.allNodes.clear();
      end; // clear

      procedure eAI_BT_BehaviorTree.execute( entity : eES_Entity; const deltaSecs : eFloat );
      begin
            self.navigator.update( entity, deltaSecs );

            // Init tick arguments
            self.tick.entity     := entity;
            self.tick.deltaSecs  := deltaSecs;
            self.tick.blackboard := self.blackboard;
            self.tick.navigator  := self.navigator;

            // Tick root-node, and everything beyond...
            self.root._execute( @self.tick );
      end; // execute

      procedure eAI_BT_BehaviorTree.copyFrom(otherTree: eAI_BT_BehaviorTree);
      begin
            self.root.copyFrom( otherTree.root );
      end; // copyFrom

      function  eAI_BT_BehaviorTree.addNode( const nodeClassIdName : uString;
                                                   parentNode      : eAI_BT_BaseNode  ) : eAI_BT_BaseNode;
      begin
            result := _ES_MakeBehaviorTreeNodeInstance( nodeClassIdName, parentNode );
            if ( result <> nil ) then
                self.allNodes.add( result );
      end; // addNode

      procedure eAI_BT_BehaviorTree.removeNode( node : eAI_BT_BaseNode );
      begin
            // Detach
            if ( node.parentNode <> nil ) then begin
                node.parentNode.removeChild( node );
            end;
            self.allNodes.remove( node );
      end; // removeNode

      function  eAI_BT_BehaviorTree.getNode( const GUID : uString ) : eAI_BT_BaseNode;
      var i : eInt;
      begin
            for i:=0 to self.allNodes.count-1 do
                if ( eAI_BT_BaseNode( self.allNodes[i] ).GUIDequals( GUID ) ) then begin
                    result := eAI_BT_BaseNode( self.allNodes[i] );
                    exit;
                end;
            result := nil;
      end; // getNode

Typically each Entity / Agent / NPC has its own Tree. This brings a slight difficulty… What if we have 200 soldiers, all using the same tree? You can’t directly share the same tree-instance, because internal variables like delay-timers, target coordinates or the actual node states (running, failed, …) can be different for each soldier.

The tutorial I linked to in my previous post solves this by NOT storing any instance-dependent variable in the node objects. Instead, everything is written to a "Blackboard". This blackboard contains the run-state of each and every NPC that uses the tree, as well as a global section so variables can be shared amongst multiple NPC's. This can be useful in particular when your army or squads share tactical info. A commander NPC could set global goals for a whole group of soldiers.

However, I chose not to do it like that. Because adding, overwriting, removing and getting all those variables via lists is slow and painful, I’d say. Instead, I’ll make a copy of the entire tree for each instance. Now, Tower22 won’t have 200 enemies. Yet it doesn’t sound like a very performance-friendly method either. To partially fix that, Engine22 does a lot of recycling. Yes, we are very green. If a tree is released (entity went to Hell), it will be available for another instance.

So, when I need a tree of type X (say file "monkey_BT.txt"), a manager will first check if there is an unused tree available. If so, give that one – and reset it before usage. If there is no tree available, a new one will be created. But instead of loading the whole file again, it copies its content from another tree. Engine22 does this for a lot of memory-eating resources, by the way.
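The recycling idea can be sketched roughly like this. All class and method names below are hypothetical stand-ins, not Engine22's actual API:

```python
# Sketch of a recycling tree pool: hand out an unused tree for a given file,
# or clone an already-loaded one instead of re-parsing the file.
class BehaviorTree:
    def __init__(self, source_file):
        self.source_file = source_file
        self.in_use = False
    def reset(self):
        pass  # clear node states, timers, blackboard...
    def copy_from(self, other):
        # clone the node graph from a sibling; cheaper than parsing the file
        self.source_file = other.source_file

class TreePool:
    def __init__(self):
        self.trees = {}  # source_file -> list of tree instances
    def acquire(self, source_file):
        instances = self.trees.setdefault(source_file, [])
        for tree in instances:
            if not tree.in_use:              # recycle a released tree
                tree.reset()
                tree.in_use = True
                return tree
        tree = BehaviorTree(source_file)     # none free: make a new one
        if instances:                        # copy from a sibling if possible
            tree.copy_from(instances[0])
        tree.in_use = True
        instances.append(tree)
        return tree
    def release(self, tree):
        tree.in_use = False                  # entity died; tree becomes available

pool = TreePool()
a = pool.acquire("monkey_BT.txt")
b = pool.acquire("monkey_BT.txt")   # 'a' is busy, so a second instance is made
pool.release(a)
c = pool.acquire("monkey_BT.txt")   # recycles 'a'
print(c is a)  # True
```

The important bit is the `reset()` on recycle: a reused tree must not leak timers, running-indices, or blackboard values from its previous owner.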

7.2 Event-Driven?
One more thing I should mention about Ticks & Tricks: events. Right now, an NPC has a single tree and just checks everything, always. That introduces a few problems. How about high-priority stuff like getting killed? It would suck pretty much if your opponent doesn't die because his faulty BT skipped the "Die-Hard" section, as the "eating cookies" branch got a higher priority. And in general, polling every cycle to check whether something happened just isn't very performance-friendly.

You could decide to run different trees instead, based on events. "OnHitByBullet", "OnCollision", "OnPlayerInSight", "OnTargetReached" or "OnClicked" are beautiful examples of that. It will result in multiple, but smaller, "to-the-point" behavior trees. It may run more efficiently, and it reduces modelling faults. Then again, it will also reduce flexibility, as your model relies on the available engine events. Yet I'm seriously considering this for Engine22.
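A minimal sketch of that event-driven dispatch, with invented event names, and plain callables standing in for real (sub)trees:

```python
# Instead of one big tree polling everything, keep a small tree per engine
# event and tick only the tree whose event actually fired.
class NPC:
    def __init__(self):
        self.memory = {"log": []}
        # map engine events to small, to-the-point trees; here plain callables
        # stand in for real behavior trees (hypothetical event names)
        self.event_trees = {
            "OnHitByBullet":   lambda npc: npc.memory["log"].append("flinch"),
            "OnPlayerInSight": lambda npc: npc.memory["log"].append("alert"),
        }
    def raise_event(self, name):
        tree = self.event_trees.get(name)
        if tree is not None:
            tree(self)   # in a real engine: tree.execute(self, delta_secs)

npc = NPC()
npc.raise_event("OnHitByBullet")
npc.raise_event("OnTargetReached")   # no tree registered: nothing happens
print(npc.memory["log"])             # ['flinch']
```

Note the flexibility trade-off mentioned above: the designer can only hook behavior onto events this table (i.e. the engine) actually provides.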

8. Captain Blackboard

Blackboards are memory containers. Although BehaviorTrees do not store each and every state value in a Blackboard, we still use them for custom variables. Some of those are per-NPC variables, like "hunger", which can differ per instance. But there is also a global Blackboard, containing shared variables that can be accessed by all NPC's. We all want to know who the player is. And for a soccer team, we could write the score as a single number, rather than maintaining the score for each NPC individually.

From an engine design perspective, custom data is always tricky. We're trying to make Tower22 in Engine22, but we could just as well make Pac-Man with it. Both games have very different BehaviorTrees, and thus also very different data behind them. In other words, the engine should not make a whole list of "expected" variables. It doesn't know which variables there will be, nor should it care. Game-specific code, which includes BehaviorTrees, should manage that.

Yet for performance reasons, the E22 Blackboard does have some pre-defined variables, like a “PrimaryTarget”. Whether it’s a Pac-Man ghost, Tower22 monster, Black-Ops trooper or racing car, they (almost) always have a goal; something to engage, pick-up or move over to. So, there are some Set/Get primaryTarget functions. But other than that, custom variables are simple tuples with a key(id name) and a value.

      eAI_BT_BlackboardVar= class
          key             : uString;
          value           : uString;
          defaultValue    : uString;  // Reset
      end; // eAI_BT_BlackboardVar

      // Use a blackboard to Read/Write data via a BehaviorTree
      // One board assigned per Tree - thus per NPC
      eAI_BT_Blackboard   = class
          // Primary target
          targetEntity    : eES_EntityAbstract;   // Primary target
          targetLocation  : eVec3;                // Fixed target location - if there is no entity
          targetAssigned  : eBool;                // True whenever set. Set false once reached or lost.

          // Custom values
          variables       : TStringList;          // Sorted list of eAI_BT_BlackboardVar
          constructor create();
          destructor  destroy();
          procedure   reset();

          // Primary target
          procedure   setPrimaryTarget( targetEntity    : eES_EntityAbstract ); overload;
          procedure   setPrimaryTarget( targetLocation  : eVec3 ); overload;
          procedure   setPrimaryTargetNone(  );
          function    primaryTargetIsEntity() : eBool;
          function    hasPrimaryTarget() : eBool;
          function    getPrimaryTargetEntity( ) : eES_EntityAbstract;
          function    getPrimaryTargetPos( var targetLost : eBool ) : eVec3;

          // Custom values
          procedure   writeVar( const key : uString; const value : eInt    ); overload;
          procedure   writeVar( const key : uString; const value : eFloat  ); overload;
          procedure   writeVar( const key : uString; const value : eBool   ); overload;
          procedure   writeVar( const key : uString; const value : eVec3   ); overload;
          procedure   writeVar( const key : uString; const value : uString ); overload;
          function    readVar(  const key : uString ) : uString;
      end; // eAI_BT_Blackboard

In order to Write those variables, you could make an Action node for that: “writeVar”. Either pick a global or NPC blackboard as a target, and give it a name + value.

Reading and using them is a bit more tricky. Of course you can read, write and do math internally in your overridden Node code, using the functions above. But it would also be interesting if we could replace fixed values with variable references when defining properties in our BT modeller. I didn't code anything for this (yet), so I won't dive into it further, but it can be something to keep in the back of your mind when coding your BT engine.
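As a rough illustration of the Blackboard idea (hypothetical names, not the eAI_BT_Blackboard API): a simple key/value store where the first written value doubles as the reset default, mirroring the defaultValue field above:

```python
# Sketch of a blackboard: custom variables as key/value pairs, with a
# remembered default so reset() can recycle the board for a fresh NPC.
class Blackboard:
    def __init__(self):
        self.values   = {}   # key -> current value
        self.defaults = {}   # key -> value to restore on reset()
    def write(self, key, value):
        self.defaults.setdefault(key, value)  # first write defines the default
        self.values[key] = value
    def read(self, key, fallback=None):
        return self.values.get(key, fallback)
    def reset(self):
        # recycle the board for a fresh NPC instance
        self.values = dict(self.defaults)

board = Blackboard()          # one per tree/NPC; a second, global board is shared
board.write("hunger", 0.2)    # 0.2 becomes the default
board.write("hunger", 0.9)    # NPC got hungry during play
board.reset()                 # tree (and board) recycled for a new NPC
print(board.read("hunger"))   # 0.2
```

A real implementation would add the typed writeVar overloads (int, float, vec3, ...) on top of this, converting to and from the stored string values.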

9. Plant and Load a Tree

As promised, we close this article with some code that reads the JSON file produced by the online BT editor mentioned earlier.

And, let me just warn you, the code below doesn't do anything truly smart. No JSON libraries whatsoever are used, just good old dumb string parsing. You see, I want to have an editor built into the engine (so you can check & change stuff on the fly, and automatically access all the available nodes plus their parameter information). But since that will be quite beefy, I used an external editor to begin with, and made a quick & dirty reader. If you need something fancier for Delphi, commenter Dennis gave us a link to his work.

All right:

      procedure eAI_BT_BehaviorTree.loadFromFile_B3JS(const filename: uString);
      var sFile       : strTextFileReader;
          line        : uString;
          values      : TStringList;
          key,val     : uString;

          cGUID       : uString;
          cNode       : eAI_BT_BaseNode;
          child       : eAI_BT_BaseNode;

          (* A node entry in the B3JS file looks like this:
             "90282381-684c-461e-b61c-11684778b0e5": {
                 "id": "90282381-684c-461e-b61c-11684778b0e5",
                 "name": "Priority",
                 "title": "Priority",
                 "description": "",
                 "display": {
                     "x": -320,
                     "y": -160
                 },
                 "parameters": {},
                 "properties": {},
                 "children": [ ... ]
             }                                              *)
            procedure trimValueString();
            begin // Remove tail comma, if there is one
                  if ( length(val) > 1 ) then
                  if ( val[ length(val) ] = ',' ) then begin
                      val := val.subString( 0, length(val)-1 );
                  end;
            end; // trimValueString

            procedure trimKeyString();
            begin // Remove tail colon or comma, if there is one
                  if ( length(key) > 1 ) then
                  if ( key[ length(key) ] = ':' ) or ( key[ length(key) ] = ',' ) then begin
                      key := key.subString( 0, length(key)-1 );
                  end;
            end; // trimKeyString

            procedure readChildren();
            begin // Read node "children" references (GUIDs) sub-block
                  while ( sFile.readRawLine(line)) do begin
                      strReadLine( line, key, values );
                      if ( key = ']' ) then exit;

                      child := self.getNode( key ); // Get node from list via GUID
                      if ( child <> nil ) then begin
                          cNode.addChild( child );  // Fill children list
                          child.parentNode := cNode;
                      end;
                  end; // while
            end; // readChildren

            procedure readProperties();
            var p     : eInt;
                found : eBool;
            begin // Read node "properties" sub-block
                  while ( sFile.readRawLine(line)) do begin
                      strReadLine( line, key, values );
                      key := upperCase(key);
                      if ( key = '},' ) then exit;  // End of sub-block
                      if ( values.count < 1 ) then continue;
                      val := values[0];
                      trimValueString();
                      trimKeyString();

                      // Match the key against the node property names
                      // (assuming eAI_BT_NodeProperty exposes its name)
                      found := false;
                      for p:=0 to cNode.getPropertyCount()-1 do
                          if ( upperCase( cNode.getProperty(p).name ) = key ) then begin
                              cNode.setProperty( p, val );
                              found := true;
                          end;
                      if ( not found ) then
                          // No such property
                          showMessage( 'Warning: cannot set property '+key+' = '+val );
                  end; // while
            end; // readProperties

      begin
            self.clear(); // Clean up old crap first
            values := TStringList.create();

            sFile := strTextFileReader.create( filename, 'eAI_BT_BehaviorTree.loadFromFile_B3JS' );
            if ( not sFile.isOpen() ) then begin  // NOTE: the "did it open?" check depends on your reader class
                sFile.destroy();  // Can't open file
                values.destroy();
                exit;
            end;

            // Loop through file, create all nodes
            // BUT DO NOT LINK THEM WITH EACH OTHER YET (nodes can refer to uncreated subnodes)
            cNode := nil;
            while ( sFile.readRawLine(line)) do begin
                strReadLine( line, key, values );
                key := upperCase(key);
                if ( key = 'CUSTOM_NODES:' ) then break;
                if ( values.count < 1 ) then continue;
                val := values[0];
                trimValueString();    // Remove tail character from string

                if ( key = 'ID:') then cGUID := val else
                if ( key = 'NAME:') then begin
                    cNode := _ES_MakeBehaviorTreeNodeInstance( val, nil );
                    if ( cNode = nil ) then continue;
                    cNode.setGUID( cGUID );
                    self.allNodes.add( cNode );
                    // don't know the parent yet - do that later
                end else
                if ( cNode <> nil ) then begin
                    if ( key = 'TITLE:') then cNode.setTitle( val ) else
                    if ( key = 'DESCRIPTION:') then cNode.setDescription( val ) else
                    if ( key = 'X:') then cNode.setCoords( strToInt(val), cNode.getCoords().y ) else
                    if ( key = 'Y:') then cNode.setCoords( cNode.getCoords().x, strToInt(val) ) else
                    if ( key = 'PROPERTIES:') then readProperties() else
                    if ( key = 'PARAMETERS:') then begin
                        // Not used here
                    end else
                    if ( key = 'CHILDREN:') then begin
                        // Skip for now; children are linked in the second pass
                    end;
                end;
            end; // while

            // Repeat, now read children. The first pass consumed the file,
            // so re-open the reader first.
            sFile.destroy();
            sFile := strTextFileReader.create( filename, 'eAI_BT_BehaviorTree.loadFromFile_B3JS' );
            cNode := nil;

            while ( sFile.readRawLine(line)) do begin
                strReadLine( line, key, values );
                key := upperCase(key);
                if ( key = 'CUSTOM_NODES:' ) then break;
                if ( values.count < 1 ) then continue;
                val := values[0];
                trimValueString();    // Remove tail character from string

                if ( key = 'ROOT:') then self.root.addChild( self.getNode(val) ) else
                if ( key = 'ID:') then cNode := self.getNode( val ) else
                if ( key = 'CHILD:'   ) and ( cNode <> nil ) then begin
                    // Single child
                    child := self.getNode(val); // Get via GUID
                    if ( child <> nil ) then begin
                        cNode.addChild( child );
                        child.parentNode := cNode;
                    end;
                end else
                if ( key = 'CHILDREN:') and ( cNode <> nil ) then begin
                    // Multiple children
                    readChildren();
                end;
            end; // while

            // Clean up crew
            sFile.destroy();
            values.destroy();
      end; // loadFromFile_B3JS

This code won't run straight away, because it uses quite a lot of Engine22 string functions, but hopefully you get the point. One important aspect here though, is the "_ES_MakeBehaviorTreeNodeInstance" function. Given a Node classname (such as "Sequence" or "aMoveToTarget"), it will create a node instance of that class.

10. Register node classes for usage

Each node class is registered during startup, like this:

      _ES_RegisterBehaviorTreeNodeClass( 'cEntInRange'    , eAI_BT_Node_cEntInRange );
      _ES_RegisterBehaviorTreeNodeClass( 'cHasTarget'     , eAI_BT_Node_cHasTarget );
      _ES_RegisterBehaviorTreeNodeClass( 'cTargetRaycast' , eAI_BT_Node_cTargetRaycast );
      _ES_RegisterBehaviorTreeNodeClass( 'cRaycast'       , eAI_BT_Node_cRaycast );

      _ES_RegisterBehaviorTreeNodeClass( 'cClockLaterThan', eAI_BT_Node_cClockLaterThan );
      _ES_RegisterBehaviorTreeNodeClass( 'cClockBetween'  , eAI_BT_Node_cClockBetween );
      _ES_RegisterBehaviorTreeNodeClass( 'cCalenderCheckDay', eAI_BT_Node_cCalenderCheckDay );

Note that this code is placed in the unit's initialization section (at the bottom), so it executes right away when the program starts. The _ES_RegisterBehaviorTreeNodeClass function maintains a list of node names and classes, so we can create an instance of such a class later on, given the name of the class.


      eES_BehaviorTreeNodeSpecs = record
          nodeClass             : TClass;
          idName                : uString;
      end; // eES_BehaviorTreeNodeSpecs

      _ES_RegisteredBehaviorTreeNode      : array[0..255] of eES_BehaviorTreeNodeSpecs;
      _ES_RegisteredBehaviorTreeNodesCnt  : eInt  = 0;

      procedure _ES_RegisterBehaviorTreeNodeClass(  nodeIdName   : uString;
                                                    nodeClass    : TClass );
      begin
            if ( _ES_RegisteredBehaviorTreeNodesCnt > 255 ) then begin
                showMessage( 'ERROR: Cannot register more than 256 different BehaviorTree Node Classes!' );
                exit;
            end;
            _ES_RegisteredBehaviorTreeNode[ _ES_RegisteredBehaviorTreeNodesCnt ].nodeClass := nodeClass;
            _ES_RegisteredBehaviorTreeNode[ _ES_RegisteredBehaviorTreeNodesCnt ].idName    := nodeIdName;
            inc( _ES_RegisteredBehaviorTreeNodesCnt );
      end; // _ES_RegisterBehaviorTreeNodeClass

      function  _ES_MakeBehaviorTreeNodeInstance(   nodeIdName  : uString;
                                                    parentNode  : eAI_BT_BaseNode ) : eAI_BT_BaseNode;
      var i : eInt;
      begin
            for i:=0 to _ES_RegisteredBehaviorTreeNodesCnt-1 do begin
                if ( nodeIdName = _ES_RegisteredBehaviorTreeNode[i].idName ) then begin
                    result := eAI_BT_BaseNode( _ES_RegisteredBehaviorTreeNode[ i ].nodeClass.Create() );
                    result.initialize( parentNode );
                    exit;
                end;
            end;
            showMessage( 'WARNING: BehaviorTree Node Class "'+ nodeIdName +'" does not exist!' );
            result := nil;
      end; // _ES_MakeBehaviorTreeNodeInstance

Well, I hope this LOOONG-ASS article suited your BT needs, boys and girls. Hopefully the code snippets were somewhat readable and understandable. Next time we'll talk about boobs, beer and games again; easier for me.