Tuesday, November 22, 2016

Walk the line; Smooth this time



Grandpa, tell us another story about pathfinding! Well kids, grandma is sleeping in the kitchen while the gas stove is on, so why not?! Hop on children, but don't sit there, grandpa had a little accident.

I already told you a bit about navigation-meshes and A.I. decision making; Where are we? Where to go? What's my purpose in this universe? Finally, there is also the art of actually walking the walk. Like an Egyptian, 4-legged monster, robot-tank on tracks, Scud Missile, or whatever. Basically I have four more problems to tackle. Yes, quite a bit for some stupid movement, but don't forget (smart) motion makes up about 50% of the A.I. already, so it was expected to be tough. Those four challenges are:

  •  Walking a curvy, smooth, "natural" line
    •  Not in a robotic zig-zag pattern... unless you are a robot of course...
  •  Physics
    • keeping our boots on the ground, dealing with gravity, making turns, et cetera.
  •  Dodging dynamic obstacles
    •  Doors, a crowd, electrified walls, garbage trucks, ...
  • Animating our body
    •  Swinging booty swagger, putting feet on stairsteps using Inverse Kinematics, ...

As for the latter, I didn't do that yet. My friend still hovers like a frozen Rio de Janeiro Jesus statue over the floor. Or like Michael Jackson, frozen in carbonite. The other issues got their attention though. I just "finished" (stuff is never finished, "better" is always around the corner) the "Smooth path" feature for instance.


The Zig-Zag man
Pathfinding algorithms often use a grid to perform their search on. Or a navigation-mesh, which again is often grid-shaped, more or less. The centers of those cells/polygons are then used as node points; the result of a pathfinding query is primarily a list of coordinates - if a path was found. If we simply fly point-to-point, we will reach our target without crossing walls or cheating. But it will be an utterly clumsy ride, due to the nature of the point layout:


The not-so-good method: Cheating 
 I admit, when leaving the bar, I go home in a ZigZag pattern, but those 90 degree turns are deadly. Even though a pathfinding algorithm is supposed to find the shortest path, this doesn't seem to be the shortest. One way to get a more logical result is cheating. We pedestrians are supposed to cross the road in a straight line, preferably taking the Zebra Crossing. But we all know a diagonal line is shorter...
You can achieve the same by optimizing your resulting node list. Loop through all points, and see to which points you can shortcut WITHOUT ever leaving the navigation-mesh. Starting at Node 1, let's check if we can reach the last node, Node 8, without leaving the mesh... nope. Node 7? No. Node 6? Yes we can. So we can delete Nodes 2, 3, 4 and 5 from our list right away. Notice we cross the very corner at Node 6. That's why it's important to "shrink" the navigation-mesh inwards, away from the walls, to avoid intersections. In other words, make sure your guy can stand anywhere, in any navigation-mesh polygon, without bumping into walls or ceilings.
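For the curious, the node-skipping trick above could be sketched like this in Python. Mind you, this is my illustrative take, not the actual engine code: the walkable area is faked as a set of grid cells, and the line-of-sight test just samples points along the segment.

```python
def has_line_of_sight(a, b, walkable, step=0.1):
    """Sample points along the segment a->b and verify each one falls
    inside a walkable cell. Crude, but good enough to show the idea."""
    ax, ay = a
    bx, by = b
    dist = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    samples = max(1, int(dist / step))
    for i in range(samples + 1):
        t = i / samples
        x = ax + (bx - ax) * t
        y = ay + (by - ay) * t
        if (int(x), int(y)) not in walkable:
            return False  # we'd leave the navigation-mesh here
    return True

def simplify_path(path, walkable):
    """Greedily skip ahead to the farthest node still in line of sight."""
    result = [path[0]]
    i = 0
    while i < len(path) - 1:
        # Try the farthest node first, then fall back toward the next one
        j = len(path) - 1
        while j > i + 1 and not has_line_of_sight(path[i], path[j], walkable):
            j -= 1
        result.append(path[j])
        i = j
    return result
```

Run it on an L-shaped corridor of cells and the eight intermediate zig-zag nodes collapse to just the corner point.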

This will get rid of some Zig-Zags, but it also introduces a bunch of problems. First of all, the last 2 nodes still look a bit awkward; we have to go via (red) #3, because we couldn't shortcut from 2 to 4. And another problem is that we're leaving our comfort zone (crossing reddish quads). Diagonally crossing the road might save some meters, but it's also more dangerous. Pathfinding isn't always about finding the *shortest* route! Maybe those green quads in this picture provide the most covered/stealthy/safest/easiest/bonus-item-filled/awesome/scenic route... So we spend a lot of effort finding that, then throw it all away by skipping nodes? Disgrace! You don't visit London for sight-seeing, then take the Metro only.


Another way: no idea what to call it, but it works
So I thought of something else, a bit different from what you may have read about smoothing your path lines with beziers/splines. We move our nodes (within their original polygons!) to generate an optimized path. A shorter, more natural path, but without ever leaving the originally found (green) polygons. Yeah.

I'm not a math-miracle, so it's all based on simple common sense, which is a good thing, I think. We start at Node1. But instead of moving to Node2, we already try to shortcut, aiming towards Node #3:
To avoid skipping the Quad Node2 was placed on, we have to respect the blue Edge and cross it, thus getting through that quad. So Node2 gets placed on that blue edge, but as close as possible to Node3 (and note I shift it a bit back from the very corner). Now repeat the step; we displace Node3, as we try to find the shortest route from Node2 to #4:
Note how the turns get a bit more curvy at the quad corners as well. At some points, the arrow is going to intersect forbidden territory (open space or reddish quads we didn't enlist in our original search). Whenever that happens, shift the intersection point back on the edge. That's all really. I didn't try it yet, but possibly you can smooth a bit further by repeating the whole procedure 1 or more times. So in pseudo code:

for i=0 to nodeCount-3
    // Get the quad our original node was placed on
    quad = nodes[i].quad 

    // Find out via which of the 4 edges we are leaving this quad, to the next node
    edge = nodes[i].crossingEdgeToNextNode
    e1 = edge.cornerA
    e2 = edge.cornerB

    // Make a virtual line between the current point, and the node AFTER the next one
    p1 = nodes[i].pos
    p2 = nodes[i+2].pos

    // Check out where we (virtually) intersect - be aware this point may be off the edge-ends!!
    // Big Thank you Darel Rex Finley for some math & code: http://alienryderflex.com/intersect/

    if linesIntersectExtrapolate( e1,e2, p1,p2, @intersectionPoint ) then
        // In case the intersectionPoint was "outside" the edge, move it back inwards,
        // and stay a bit away from the outer corners

        shiftedPoint = lineShiftPointBetweenEnds( intersectionPoint, e1,e2, 0.01 )

        // Store in the NEXT node pos - so the next loop iteration will calculate from that point onward
        nodes[i+1].pos = shiftedPoint
    end
next
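The two helper calls in the pseudocode aren't spelled out, so here is one possible Python reading of them. The function names mirror the pseudocode, but the exact math and margin handling are my guesses at what's intended (the intersection formula follows the general approach from the alienryderflex.com link above).

```python
def lines_intersect_extrapolate(e1, e2, p1, p2):
    """Intersect the infinite lines through (e1,e2) and (p1,p2).
    Returns the intersection point, or None when the lines are parallel.
    Note the point may lie outside the e1-e2 segment (extrapolation!)."""
    (x1, y1), (x2, y2) = e1, e2
    (x3, y3), (x4, y4) = p1, p2
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0.0:
        return None  # parallel lines, no single intersection
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def line_shift_point_between_ends(point, e1, e2, margin):
    """Project 'point' onto the edge e1-e2 and clamp it so it stays at
    least 'margin' (as a 0..1 fraction) away from either corner - this is
    the 'shift it a bit back from the very corner' step."""
    ex, ey = e2[0] - e1[0], e2[1] - e1[1]
    length_sq = ex * ex + ey * ey
    t = ((point[0] - e1[0]) * ex + (point[1] - e1[1]) * ey) / length_sq
    t = max(margin, min(1.0 - margin, t))
    return (e1[0] + t * ex, e1[1] + t * ey)
```

So an intersection landing past the end of the blue edge gets pulled back onto the edge, just shy of the corner.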

The final result is a lot better than the original, though it may still take some sharp turns here and there. I “fixed” that dynamically, by letting the character "physics" turn with some delay and a speed decrease, instead of stopping and taking a perfect turn. Like a car on mud, getting off-road a bit. Only on very sharp turns does the character actually stop, take a different "rotation animation", and turn. As you would do yourself when taking a sudden turn. In other words, the line is not a 100% strict guide.




Sunday, November 6, 2016

The Unpredictables



Last time we talked about a Navigation Mesh; basically telling your A.I. where the dancefloor is. Pathfinding algorithms and navigation meshes are all tools to help you find your way in a big bad game world. All in all, very common Game-Programming stuff.

Less common in the tutorial newspapers is the decision-making part. We have a car, we have asphalt, we have a navigator, but yet – destination unknown. Whether it's David Hasselhoff, KITT, or captain Picard, something has to drive that car. This is where we enter the fuzzy parts of A.I. And also personal unknown territory. I programmed a lot of machine regulators, graphics, and “auxiliary” game-stuff (read engine architecture, loaders, physics, scripting systems, navigation algorithms, and so on). But not so much actual game-logic itself so far. Like building a new house, but never finishing it to the point where we can do the interior and start living in it.

Game-logic typically covers the rules of a game. What are the goals? When do you win, when do you lose? And on a more detailed level, it covers puzzles, interaction, scoreboards, inventories, pre-programmed scripts, player-controls/physics, and also foe-controls; A.I. Varying from Pac-Man ghosts chasing you, to Carmageddon race-cars trying to trash you. From Goombas walking from right to left, to bionic supersoldiers trying to pin you down. Obviously, deciding where to go, when to go, and how to get there, is a crucial part of A.I. And like I said, a rather fuzzy one.


Predictable patterns
When making a piece of machine-logic, the word “random” is forbidden. Given the same parameters, the outcome of a subsystem should be 100% predictable. It would suck pretty hard if your car randomly decided whether or not to start when turning the ignition key. If it doesn't start, there should be a damn good reason for it, not just for the fun of it.

Well, common sense, but game-A.I. breaks the rule here. If your opponent always reacted exactly the same, the game would become, well, very predictable. Not only would it make the game too easy, it also makes your opponents appear very “synthetic”. But hey, thank God we have a random() function! But - be careful! Too much randomness will spoil your dinner as well. You don’t wake up, put on a single green sock, shit out the window, step naked in the car, drive backwards through the garage-door, call your neighbor a lunatic, boil applejuice with fish, then get back in bed again. Our human behavior is, more than you may like, pretty predictable. I for instance always put on 2 socks, shit out the window, drive naked to work in reverse, and drink my first coffee at nine A.M. However, there are subtle “random” differences in the execution. Sometimes I mix up socks accidentally, and sometimes I drink my coffee at 9:05 instead.
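In code, "mostly predictable, slightly varied" could be as simple as a fixed routine with bounded jitter on the timing. A minimal Python sketch (the action names and numbers are made up for the example, not Tower22 internals):

```python
import random

# The routine itself is fixed - only the execution timing jitters a little
DAILY_ROUTINE = ["wake_up", "put_on_socks", "drive_to_work", "drink_coffee"]

def execute_routine(rng, base_delay=1.0, jitter=0.1):
    """Schedule the fixed routine, varying each step's timing by a small,
    bounded amount. The order never changes, so behavior stays readable."""
    schedule = []
    t = 0.0
    for action in DAILY_ROUTINE:
        t += base_delay + rng.uniform(-jitter, jitter)
        schedule.append((round(t, 3), action))
    return schedule
```

The player can still learn the pattern, but it never plays back frame-identical twice.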

You’ll need some predictable (pre-programmed) patterns to make your NPCs act human. And some predictability also helps making you feel good as a player, learning the responses, flaws and weaknesses of your foes. Personally I’m not a fan of MultiPlayer gaming, because of the lack of that “Damn I’m good” feeling. I keep dying, as some asshole caught me from behind, or randomly crossed paths and reacted slightly faster. Just like the real deal… One moment you’re storming a beachhead, next moment Heinz Panzerfaust floors you with a headshot. One moment Nancy Sinatra sings your boots are made for walking, next moment you tumble into a hole with Punji Sticks, pissed on by Ho Chi Minh. One moment you’re humming in a Humvee, next moment an IED shreds it apart. Fun. Not.


When I play a game, I generally don’t want a super-realistic experience, dying ingloriously. I want to kick ass. Doesn’t mean the game should be easy, certainly not. But dying should feel fair, not like random bad luck. And yeah, that sounds a bit ridiculous, as real-life war is everything but fair. But hey, we’re playing a game. To have fun. Getting killed means I made a mistake; lick my wounds, learn, and try again. If your NPC logic is very unpredictable (read random), you can’t really learn and act on it.

So, A.I. needs to be fuzzy and unpredictable, but only to a certain extent (unless you’re making bots for Battlefield1… anyone?! Please!). In programming terms, that feels a bit unnatural. Always doing the same won’t work, completely random picking doesn’t work either. What also feels unnatural is that your code has to pretend “as if”. Game NPCs don’t have real eyes or ears, and they can cheat all the time. They always know exactly where the player is, they can aim their guns 100% accurately, dodge lightning bolts, and respond within a millisecond. We cover that up with some random inaccuracies and delay timers, but that has little to do with how a brain really works.
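Those cover-up tricks, a reaction delay and some aim error, might look something like the sketch below. The skill parameter, error range, and delay numbers are arbitrary examples, not values from the actual game.

```python
import math
import random

def fake_human_aim(rng, npc_pos, player_pos, skill=0.7, max_error_deg=10.0):
    """Return an aim angle (degrees) toward the player, blurred by a
    skill-based error. skill=1.0 means perfect aim; skill=0.0 means the
    shot can be up to max_error_deg off in either direction."""
    dx = player_pos[0] - npc_pos[0]
    dy = player_pos[1] - npc_pos[1]
    true_angle = math.degrees(math.atan2(dy, dx))
    error = (1.0 - skill) * rng.uniform(-max_error_deg, max_error_deg)
    return true_angle + error

def reaction_delay(rng, base=0.25, spread=0.15):
    """Humans don't respond within a millisecond, so don't let the NPC
    either: delay its reaction by a base time plus some random spread."""
    return base + rng.uniform(0.0, spread)
```

Cheap, but it hides the omniscient brain behind the curtain reasonably well.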

No, the picture has nothing to do with this topic. Just some empty rooms, and I'm afraid they'll stay empty for a little while, now that no one is working on graphics or texture/3D content at the moment.

 
Are we Human? Or are we Monsters?
Well, my job last week(s) was to program some of those “Behavior Patterns”. Thanks to A* and a navigation mesh, getting from A to B was relatively easy. Deciding what “B” would be on the other hand… I mean, where does a monster go? In a Soviet-like skyscraper? To the bathroom? To the conference room? To his friends? And then doing what?

Back in the day, monsters just had to growl a bit and wait until the player showed up. Then jump at him, and probably get shot while doing that. Not exactly a brilliant role in the school’s play. But nowadays we want realism, meaning foes have to make themselves useful in this virtual world. Whether sleeping, cooking soup, repairing the generator or just being on the move – at least do something. But obviously that’s pretty hard for a monster. They have to be scary, not drive their cars to work, drink beer and watch football, or repair the generator. Hell, some of my monsters don’t even have arms for that!

The good news though: you will only see a little bit of that “Idle” behavior. More likely you'll be avoiding monsters, or end up either running from or fighting with them. Unless your monster carries an AK47 or plasma projectiles, the second best choice is probably to just charge at you, meaning their destination location will be you. So far, so good; Monster angry, monster runs at you, Benny-Hill music, player flees, and then… Player out of sight. Now what?


Room service
Let’s say the player ran a few rooms further, and hides under the bed, as I would do. This is a perfect scenario to show the predictability, artificial smartness, or straight dumbness of your foes. For one thing, the monster could keep walking straight to the player position. But that would be cheating of course. Slightly less cheating would be to at least enter the bedroom, then act as if we’re sniffing and searching the room. But if our monster did that every time, it would be –exactly- predictable.

Then again, picking a random target-point as soon as the player runs out of sight would be plain stupid. The chances that he’ll find you are nil then, plus it doesn’t feel like a logical response at all. In fact, I’ve been training my pet-Monster in Tower22 a lot of times the last two weeks, doing exactly this. I would run away about 15 meters, hide under a table, then *hope* to see my monster-friend searching nearby rooms. But of course he would take a wrong pick, and I had to come out from under the table, and start making noise and yell “Come here dumbass! I’m here! Look!”.

I won’t reveal the exact methods, as this pre-knowledge would ruin the fun of future players, but let’s say our guy finally learned some better “Room-Sweeping” methods. Picking the right directions (at the start of the search at least), searching room by room, not using cheap cheats but “real” vision and acoustic information. Only problem is… I trained my pet so many times now, that his behavior is still predictable. To me at least. But that’s why they invented “Beta Testers”…

And if hiding doesn't work, I just finished some code to throw this waterkettle at somebody's head.

Sunday, October 23, 2016

The Boyscouts



Back in the day, I wasn’t a very good pathfinder at the “Jagers” (our local cheap variant of the Scouts). Map reading? Fooling around (and drinking beer or rolling cigarettes at a later age) you mean. Usually we took the third turn left instead of right, got lost, and made it a six hour walk instead of four. But much fun we had, sneaking through ditches and backyards, getting chased by cows or talking about boobs in the woods. And always bringing back a surprise for mom… mud-caked clothes.

I hope our kids will join the Scouts or the like one day, instead of wasting all their time on Youtube. But it seems the oldest girl is more into girl things, like flying on unicorns. We’ll see. Until then, my pathfinding skills are used on Tower22 A.I.

Pretty much every game with foes + a world more complex than a simple side-scroller, requires pathfinding. The word is self-explanatory, but still I’d like to point out there is a strict difference between deciding where to go, and figuring out how to get there. Pathfinding techniques usually focus on the latter, although the decision-part is maybe at least as important in order to make your foes smart, tactical or humanlike. Anyhow, pathfinding usually consists of path-data and a (fast) algorithm that calculates the shortest or “best-quality” path between point A and B. Like configuring your car navigator.
Now the real question is; man or woman?



I won’t dive into algorithms such as A* (yes, A-star), a very common pathfinding algorithm. Let’s just chat about what I did last summer, last week I mean; making a “Navigation Mesh” for the playable demo world.

Any pathfinding algorithm will need data to work with. Like the Boyscouts had their map (hm, map?). In a 3D game, we have world-geometry, but it’s too complex to use directly. World geometry is basically a bunch of polygons, which could be anything really. Road, pavement, but also walls, roofs, pipes, windows, wires, chimneys, lava-pits, horse manure, and so on. Polygons too small, turned upside down, or made of nuclear acids aren’t suitable for walking. They just litter our algorithm with extra load. Decorative polygons, somewhere in a far background, don't help either. What you would need is an extra “yesYouCanWalkOnIt!” property or something… letting the artist mark “floor” polygons you can walk on, in the editor.

That still stinks though. Games tend to tessellate their geometry into smaller chunks. A soccer field might be made of 100 x 50 quads, while just a single rectangle would be enough for pathfinding purposes. Pathfinding algorithms are pretty heavy shit, in calculation terms. Fewer connections / cells / polygons / grid-points = faster results. So you really want to filter out noise. When the Boyscouts are reading their map (IF they are reading their map…), they don’t want to know the location of each manhole, lamppost, bench, trashbin, dogturd, or liquor store… I think.

What you want, is a simplified version of the world, the (walkable) floor shaped out of as little polygons as possible. Engines like Unreal offer tools to create those meshes for you… How does that work?
The T22 demo floor has about 21,000 triangles. The simplified mesh about 1,500. Notice the NavMesh doesn't have to follow the exact stairs. And also notice not every floor polygon is necessarily walkable. Ceiling too low, obstacles placed, or the NPC just not having any business there.

NavigationMesh generator: Auto versus Handmade
First of all, they define a boundary-box in which to create that navigation mesh. Anything outside that box is skipped right away. Perfect to get rid of backgrounds or stuff that might be walkable, but isn’t always useful (like roofs on houses). Next, if we forget about Spiderman for a second, you can lend a hand by filtering out any polygon that is NOT facing upwards (thus potential ceilings / walls). Then maybe have a look at its material. If it says “boiling tar”, you probably want to skip it as well.
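That pre-filter pass could be sketched like this; the Polygon layout and the material names are invented for the example, but the idea is just "keep polygons whose normal points roughly up, with a material you can stand on":

```python
from collections import namedtuple

# A toy polygon record: just a normal vector and a material name
Polygon = namedtuple("Polygon", "normal material")

UNWALKABLE_MATERIALS = {"boiling_tar", "lava", "water"}

def filter_walkable(polygons, min_up_dot=0.7):
    """Keep polygons facing upwards (normal.y above a threshold) and with
    a walkable material. Everything else can't be floor anyway."""
    walkable = []
    for poly in polygons:
        nx, ny, nz = poly.normal
        if ny < min_up_dot:
            continue  # wall, ceiling, or too steep a slope to stand on
        if poly.material in UNWALKABLE_MATERIALS:
            continue  # boiling tar and friends
        walkable.append(poly)
    return walkable
```

The `min_up_dot` threshold decides how steep a slope still counts as floor; 0.7 roughly allows up to ~45 degrees.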

Still doesn’t mean those left-over polygons are actually walkable though. If your monster weighs 400 kg, chances are he won't fit through a normal door portal, unless he’s made out of jelly. Basically the floor polygons need to shrink a bit, away from the walls, and the height needs testing too. If your guy is 2 meters tall and can’t duck, any polygon with a low ceiling will definitely suck.

I’m not sure if those auto-generate algorithms even use the existing (floor) geometry like that though. I’m more thinking in terms of floodfilling; make a (3D) grid inside an artist-defined bounding box. Then click a piece of floor (“inject”) that is walkable, and check all its neighbour cells:
  •  Put a cylinder there. Dimensions are based on NPC radius / height parameters.
  •  Check which surrounding polygons it intersects:
    •  Does it intersect “floor” polygons (upwards facing)? Good.
    •  Does it intersect non-“floor” polygons (ceiling, wall, obstacles)? Not good.
  •  If it passed, repeat the check for its neighbours
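The flood-fill loop itself is simple; here's a 2D Python sketch where the whole "does the cylinder fit here?" test is abstracted into a `fits(cell)` callback, since the real version would query the level geometry:

```python
from collections import deque

def flood_fill_walkable(start, fits):
    """From an injected start cell, expand to 4-connected neighbours as
    long as the NPC-sized cylinder fits there. Returns the walkable set."""
    walkable = set()
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell in walkable or not fits(cell):
            continue  # already visited, or the cylinder doesn't fit
        walkable.add(cell)
        x, y = cell
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return walkable
```

A real version would expand in 3D and also step up/down small height differences (stairs), but the breadth-first expansion is the same.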

That’s the easy part. Trickier is to generate an actual, optimized mesh out of those cells, which basically represent a voxelized version of your map-floor. And no, I didn’t figure out how to do that (yet).


Not being blessed with grandiose math skills, and seeing some other possible issues, I decided to go for a good old hand-made NavigationMesh instead. Issues? Well, what if we do have Spiderman? Tower22 dudes may not necessarily stick to the floor. Hell, some may even cheat and fly through walls, or “warp” to certain spots. A second issue is that it’s a bit hard to define a bounding box and generate a mesh within that area. Tower22 has a roaming world; the navigation mesh is much bigger than the actual loaded parts of the world. Unless you generate multiple smaller meshes, and weld them together. But again, that introduces some tricky bits.

And there is something else. In previous NavigationMeshes (made manually for the older engine), I had some more information stored in the mesh. Like the type of floor. Pedestrians can walk the highway for sure, but they should avoid it as much as possible. To put it more abstractly, polygons need some sort of QualityRating, which is relative to their target audience. Pigs prefer mud, cars asphalt, and Kanye West Gayfish people water.



Battlefield
I can still hear you thinking: an auto-generated mesh could do that as well, right? Right, but here is yet another excuse: storing Battle-Data in the NavMesh. Now the irony is that Tower22 dudes don’t fire machine guns, toss grenades over walls, or run into cover. So I could do myself a favour and NOT bother with all of this. But it’s just too cool not to implement, and actually it does give some interesting A.I. capabilities to even dumb guys like my T22 scum.

As pointed out at the top, there is pathfinding and there is decision making. You can give the Boy-Scouts a map, but without a clear goal, they still don’t know what to do. Get back home, go to the liquor store, find a bench and smoke weed, whatever. Practical, humanlike tasks. Sims with bladder problems will look for toilets. Soldiers under fire will look for cover. Or “assault-spots” with sight on the target, preferably still as covered as possible. Tower22 monsters will… I don’t know what monsters do. Play billiards, read the newspaper, find Boy-Scouts and eat them, I don’t know.

Anyhow, again, just a map doesn’t help, other than that you could have objects with a special tag, like “isFood, isMedkit, isToilet, isBed, …”. But to make your NPCs feel natural, they need some more advanced tasks, requiring knowledge of their surroundings. One common way is to provide (artist-placed) Info-Nodes. In Tower22, I can place nodes and tell what we can do there. Restroom, kitchen, workplace, et cetera. Then NPCs can search for such nodes based on their needs, a certain range, randomness, or a pre-programmed schedule.
Rather than just randomly ghouling around, I made a bunch of "interesting locations" for my monstrous friends. Depending on the context, they can eventually search more specifically, filtering out nodes based on groups, required skills or type.



Nice & Simple, but also predictable. If you pre-define cover/assault/sniper nodes, you will know where the Bots will hide or go after a few rounds. Unless you place many, many nodes.

Now here is something the makers of Killzone did in their world, creating a somewhat more dynamic solution. A lot of nodes are placed, and each node measures global “sight” distances in radial segments. So you can quickly tell Node X has proper sight to the North, and is covered from the East. By measuring at both gun and ballsac height, you can also tell whether your lower body is covered or not. So if we know the enemy (or enemies) are somewhere North, and you’re a fanatic, you’ll search for nodes with sight to the north, but coverage from other (potential threat) directions as much as possible.

 Based on the node data, the yellow-badguy distance is larger than we can see from there, meaning we can't hit him, nor can he hit us. Be aware the data is somewhat global/inaccurate though (just 8 distances stored here). More slices provide better data, but also increase memory and pathfinding complexity.
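Stored as, say, 8 radial sight distances per node, a visibility query against that data could look like this sketch. The slice layout (slice 0 centered on East, going counter-clockwise) is my own choice for the example:

```python
import math

def direction_to_slice(from_pos, to_pos, slices=8):
    """Map the direction from a node to a target onto a radial slice
    index; slice 0 is centered on East, slice 2 on North, and so on."""
    angle = math.atan2(to_pos[1] - from_pos[1], to_pos[0] - from_pos[0])
    step = 2.0 * math.pi / slices
    return int(((angle % (2.0 * math.pi)) + step / 2.0) // step) % slices

def node_can_see(node_pos, sight_distances, target_pos):
    """Compare the target distance against the stored sight distance for
    its slice; if the target is farther than we can see from this node,
    we can't hit him, nor can he hit us."""
    s = direction_to_slice(node_pos, target_pos, len(sight_distances))
    dx = target_pos[0] - node_pos[0]
    dy = target_pos[1] - node_pos[1]
    return math.hypot(dx, dy) <= sight_distances[s]
```

A "find me cover" query is then just: sight in the threat's slice short (covered), sight in the target's slice long enough (can fire).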



So I figured, why not automatically generate such a node at each center and/or corner of the NavigationMesh? That should offer a rich set of BattleData. And not just that, it also provides tactical pathfinding information –and now it gets interesting for T22! If you are busy saving private Ryan, you should know to get off the beaches ASAP, as well as to stay out of lines of fire as much as possible. While running an (A*) pathfinding algorithm, you can check your visibility at each node, and strike a balance between the shortest / most covered route. If you plan to flank your enemy, you’ll need a path that is invisible to your primary target, and finally offers sight on its flank, at a certain preferred distance. If you're a Ninja, you may prefer the shady route. If you store “lightFlux” in your nodes, and also avoid open spaces as much as possible, you’ll be picking a path that is not necessarily the shortest, but gives you stealth advantages.
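The "shortest versus most covered" trade-off can be sketched as a standard Dijkstra/A*-style search where each step's cost mixes distance with an exposure penalty per node. The graph layout and weights below are invented for the example:

```python
import heapq

def find_path(graph, exposure, start, goal, exposure_weight=0.0):
    """graph: node -> list of (neighbour, distance).
    exposure: node -> how visible/exposed that node is (lightFlux, sight
    lines, whatever you stored). exposure_weight trades metres against
    safety: 0 gives the pure shortest path, higher values prefer cover."""
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            break
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, dist in graph[node]:
            new_cost = cost + dist + exposure_weight * exposure[nxt]
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                prev[nxt] = node
                heapq.heappush(heap, (new_cost, nxt))
    # Walk the prev-links back from the goal (assumes the goal was reached)
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Same map, same start and goal: with weight 0 you sprint straight across the open ground, with a higher weight you take the long way round behind cover.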
 
So… what’s the problem with auto-generated meshes then? The lack of tessellation. Yeah, first I said NavMeshes should have as few polygons as possible –which is still true. But at “tactical” spots, where light/cover/sight varies, you want additional polygons so you can quickly hop between a wall (cover) or window (fire). T-junctions, windows, doors, spots with lamps – all interesting places that may need a few more polygons to get more accurate BattleData about your surroundings. Or if your NPC likes to hug walls, he can follow polygons at the corridor sides.
We placed two "Cover" polygons just around the corner. An (auto) optimized NaviMesh wouldn't have those.


So, summing that up, I decided to just manually make the NavMesh. Which sucks, but doesn’t take many hours either. It’s doable, unless you have “Just Cause” sized levels. To help myself, I made an exporter in the Map Editor that filters out (potential) floor polygons only, giving me a background to work on. The NavMesh itself is just a 3D model, and I’ll be using the polygon names to tell what they are: Floor, Stairs, LeapOverWall, Water, GhostCheatingThroughWall, and so on. Then when importing the mesh, additional BattleData can be generated. Its nodes would be placed on NavMesh polygon centres and corners, giving a pretty detailed datagrid –but only dense at interesting points. The Boyscouts would be proud of me.