AI Kinematic Movement

Mike Kienenberger

Active Member
Contributor
Architecture
GUI
Immortius,

The current inputs to the Kinematic Movement system don't work well for AIs.

Players go in a direction. AIs go to a specific location.

Trying to determine the CharacterMoveInputEvent parameters to reach a specific location requires a lot of information that's already (and only) available in the PhysicsEngine and CharacterStateEvent. Thus this seems best computed using the engine's KinematicCharacterMover rather than some other mechanism, as it is directly affected by the exact algorithm that the KinematicCharacterMover is using.

The first movement problem I was trying to solve is moving a short distance -- one block. This is currently the only movement distance for the behavior system, although I'm sure we will adjust this as we add more behavior blocks. It will always be a possible movement situation.

It is a problem because moving one unit is generally too small a distance. A default Terasology character normally moves about 5 units per second at walking speed, so we're going to overshoot the target. Because we are already moving at a certain velocity and are affected by all kinds of other inputs, trying to compute the correct movement vector and yaw requires as much work as moving the object in the first place.

In real life, we have feedback to tell us what sort of yaw and movement vector we need, plus we can vary our deceleration as we get closer to our target. It's an all or nothing approach here -- you have to get it right the first time -- because the distance is so small compared to normal movement.

At least, that's how I'm seeing this first movement issue. I could be wrong and frequently am on new topics :)

The solution would seem to be some new movement modes. In addition to walk/swim/climb, we also need walk_to, swim_to, climb_to where instead of giving a vector and a yaw, we instead give a target. The kinematic system calculates what yaw and movement (within our current movement parameters) will get us closest (or at least close) to our goal -- we still may overshoot because we're going too fast or turned the wrong way, but at least we're not doing so blindly.
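The mechanics of such a hypothetical walk_to input can be sketched in isolation. This is purely illustrative geometry under assumed names -- nothing here is the engine's API:

```java
// Hypothetical sketch only: what a walk_to mode might compute from a
// target position. None of these names exist in the engine.
public final class WalkToInput {

    /** Yaw (radians) needed to face the target at (tx, tz) from (x, z). */
    public static double yawToward(double x, double z, double tx, double tz) {
        return Math.atan2(tx - x, tz - z);
    }

    /**
     * Input magnitude in [0, 1]: full input unless the remaining distance
     * is less than one frame of travel at max speed, in which case the
     * input is scaled down so we land on (rather than past) the target.
     */
    public static double inputToward(double distance, double maxSpeed, double dt) {
        double frameTravel = maxSpeed * dt;   // distance covered in one frame
        return Math.min(1.0, distance / frameTravel);
    }
}
```

For example, with a 0.2 s frame at 5 units/s, a target 0.5 units away would get half input instead of a full-speed overshoot.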

And the prediction system to determine the best vector and yaw can easily be kept in sync with the actual movement system, since they both reside in the same place. The optional parameters for this type of movement input could include what kinds of movement feedback to ignore for the calculation, or how fuzzy the feedback should be. As an example, consider or don't consider friction, or consider friction as a random value from a set range around the real friction value. Same with current velocity, yaw, pitch, gravity, rotation, slope factor, and whatever else we might be considering.

Right now, only current velocity, yaw, and pitch are available from what I can determine, but I still wouldn't want to try to calculate the rest of it even if the others were available.
 

Immortius

Lead Software Architect
Ok, firstly, the Kinematic Character Movement system is already a reasonably complex system with a clear scope. I would be happy exposing more information out of it (such as maintaining the current movement mode in the CharacterMovementComponent), but would be loath to shove a move-to system in there as well -- particularly because I'm pretty sure such a thing can live outside of that system.

Addressing your specific concerns:

It is a problem because moving one unit is generally too small a distance. A default Terasology character normally moves about 5 units per second at walking speed, so we're going to overshoot the target. Because we are already moving at a certain velocity and are affected by all kinds of other inputs, trying to compute the correct movement vector and yaw requires as much work as moving the object in the first place.

In real life, we have feedback to tell us what sort of yaw and movement vector we need, plus we can vary our deceleration as we get closer to our target. It's an all or nothing approach here -- you have to get it right the first time -- because the distance is so small compared to normal movement.
The AI should be capable of varying its deceleration as it gets closer to its target too. It can always tell how close it is, and can scale the input vector accordingly. It should have some idea of what movement is coming up from the path it has determined -- the need to jump, what terrain it is passing over. This is one of the reasons it was event driven and not in a component: I was expecting the AI's movement controller to determine what it needed each frame and send it, rather than just putting some direction into a component and leaving it until some future state.

Basically the AI should be proactive in following its path, not reactive; the movement system is only ever reactive. This likely should be handled by a low-level behavior or system, such that you tell it where to go and it handles the translation, but I believe that this should still sit above the kinematic movement system itself.
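The "scale the input vector by distance" idea can be sketched as follows; the class and method names are illustrative, not engine API:

```java
// Sketch of the proactive deceleration described above: scale the input
// vector down once the character is inside a slowdown radius around the
// target. Names are invented for this example.
public final class ArrivalSteering {

    /** Input magnitude in [0, 1] for a given distance to the target. */
    public static double inputScale(double distance, double slowRadius) {
        if (distance >= slowRadius) {
            return 1.0;                   // far away: full input
        }
        return distance / slowRadius;     // ramp down linearly on approach
    }
}
```

The AI's movement controller would recompute this every frame and send the scaled vector with its movement event, which is exactly the proactive loop described above.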

The solution would seem to be some new movement modes. In addition to walk/swim/climb, we also need walk_to, swim_to, climb_to where instead of giving a vector and a yaw, we instead give a target. The kinematic system calculates what yaw and movement (within our current movement parameters) will get us closest (or at least close) to our goal -- we still may overshoot because we're going too fast or turned the wrong way, but at least we're not doing so blindly.
This is really mixing two distinct concepts. A movement mode basically determines the physics of how the character moves, and nothing else.

Hmm, maybe the problem here is that the KinematicCharacterMovement needs to be broken into two distinct parts. There would be a low level system that just does movement, and then a higher level system that applies input to do that movement. For AI you may choose to forgo the input and work directly with the movement. You would need to do your own client side prediction for the movement though, so maybe not wonderful.
 

Mike Kienenberger

Active Member
I am sorry that it was not clear enough from my initial problem description, but this already is a system where the AI sends events each movement frame. The AI system already does the long-term calculations to get us to our spot. In fact, it has calculated where to move with such a fine granularity that it is causing this problem every movement frame since every move input is to reach a block which is within a single unit of where we are :) But even when we fix this AI problem, we are still going to have single-frame problems. There is no deceleration over time in the single-frame problem.

The problem is that for a single frame, even if we know that we should be able to reach a spot based on our location and velocity, it's not possible to calculate how to set inputs to get to that spot, at least not without exposing more Character movement state.

Yes, we could try to cheat and just skip the kinematic movement system entirely and directly set our location, but even that is not possible since we're already in motion in the CharacterStateEvent. And even if it were possible, it's going to look flawed if we move kinetically until we get close to a turning point or ending point, and instantly stop or change direction. Or maybe it won't since we're talking about a single frame. Was I wrong about this?

Furthermore, as I already said, the ability to calculate where we will end up at the end of this frame depends on intimate knowledge of the exact workings of the Kinematic Movement system -- how it is going to calculate friction, gravity, rotation, and slope factor and apply them to this movement input to produce a final location. This means we have to duplicate the entire Kinematic system in our prediction system. So that certainly seems to indicate that these systems are tightly integrated.
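To illustrate why a predictor must mirror the mover's exact math, here is a toy one-frame predictor under an assumed, deliberately simplified model. Any difference between this model and the real KinematicCharacterMover's factors or order of operations makes the prediction drift -- which is the duplication problem being described:

```java
// Toy one-frame predictor under an ASSUMED model (not the engine's):
//   v' = (v + input * accel * dt) * (1 - friction * dt)
//   x' = x + v' * dt
// The real mover applies its own factors in its own order; an outside
// predictor has to mirror that order exactly or its answers are wrong.
public final class FramePredictor {

    public static double predictNextX(double x, double vx, double inputX,
                                      double accel, double friction, double dt) {
        double v = vx + inputX * accel * dt;  // apply input acceleration
        v *= (1.0 - friction * dt);           // apply the assumed friction model
        return x + v * dt;                    // integrate position
    }
}
```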

While I agree that adding this greatly complicates things, this seems to me to be the general spot in Terasology where the problem should be handled for the above reasons. Whether that means creating KinematicMoverPredictor.java or sticking it directly in KinematicMover.java is just semantics -- it still seems like it should be part of the same engine system.

I am very open to alternate solutions -- I just don't see one, so I could be missing something obvious here. You know that's not the first time I've done that with movement systems :)
 

synopia

Member
I see two alternatives here:

1. A system mostly equal to what we have. Here, the AI sends movement events (basically the same that a normal player could send) and the physics engine (aka the world) reacts and moves the minion if possible. To make things easier for AI developers, there should of course be a small piece of code (preferably a behavior tree node) that takes a block adjacent to the minion's current position and calculates what movement events should be sent.

2. A non-kinematic system. Such a system would receive the target block from the AI (like above) and move the minion by itself along a curve/line to the destination. This would simplify movement a lot in general, since the minion-moving code is fully aware of the current position, velocity, etc.

Both ways have pros and cons. One big pro for 1) is that it's already there, with some bugs. Also, I personally like the idea that the AI is totally separated from the physics. So an AI can never cheat, and changes in the physics/movement system would directly influence the AI.

Mike Kienenberger: I still don't understand what exactly is happening with your minions. I assume it's basically a timing problem. Currently, the AI runs every 100ms, so the AI can only fire one movement change per 100ms. For my Terasology this works and is enough to make minions walk from one block to the next in about 2 seconds. However, it seems you have much slower ticks, which will lower the movement resolution to something totally unacceptable. This needs to be fixed -- independently of the minions' move code.

If we assume this works properly, the main open issue with moving is turning. Right now, a minion can rotate around in zero time (using the yaw value). This should be fixed to match a more natural movement behavior.

Automatic jumping is also done. Whenever a minion tries to get to a position above a certain maximum height, it jumps while moving in that direction. I built a single jump node, too.

What values do you think we need in addition to the current location, the target location and, to some extent, the current/target velocity?

MoveAlongPath: This node is a decorator node. It takes a path and sets MinionMoveComponent.target to the first position of the path. Then it starts its decorated child node. When the child finishes, the next path step is set as the target and the child is called again. You are right, this is not perfect, and there are many tweaks possible here (for example, scan the path from the beginning and select the position that is nearest to the current position).

Most commonly you will use a MoveTo node as a child. This will form the basic movement to a target block far away. Of course, this could and should be fine-tuned in several ways. For example, I use a timeout node to stop the MoveTo child node after some seconds and make the minion jump. This hopefully solves some stuck problems.
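The decorator described above can be sketched roughly like this; Status, MoveToChild, and the double[] positions are stand-ins for the real behavior-tree types, not the actual engine classes:

```java
import java.util.List;

// Sketch of the MoveAlongPath decorator: hand path steps to a child
// "move to target" node one at a time, advancing when the child finishes.
public final class MoveAlongPath {
    public enum Status { RUNNING, SUCCESS }

    /** Stand-in for the decorated MoveTo child node. */
    public interface MoveToChild {
        Status update(double[] target);
    }

    private final List<double[]> path;
    private int step = 0;

    public MoveAlongPath(List<double[]> path) {
        this.path = path;
    }

    /** Runs the child toward the current step; advances on SUCCESS. */
    public Status update(MoveToChild child) {
        if (step >= path.size()) {
            return Status.SUCCESS;
        }
        if (child.update(path.get(step)) == Status.SUCCESS) {
            step++;                       // child reached this step: next one
        }
        return step >= path.size() ? Status.SUCCESS : Status.RUNNING;
    }
}
```

A timeout-and-jump wrapper like the one described would decorate the child before handing it to this node.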

You see, what I'm trying to do is move weird state-based code into behavior nodes. But I am really open to better solutions to all of this. The best thing is to try different ways -- so it would be cool to see if alternative 2) works, but I don't think I actually have the time to do that in depth.

It's even possible to have several movement systems, with different MoveTo nodes to access them. So we could compare solutions much more easily than by just thinking about it :-D

Ok, I've run out of time, need to stop now :p
 

Immortius

Lead Software Architect
1. I think you overestimate the movement system if you think it has enough information to calculate what input will take a character to a location. I think this isn't even possible -- until the movement is actually calculated through the physics engine, only estimations can be done. It doesn't otherwise have an advantage over anything else doing the estimate.

Perhaps something that would help is the ability to try an input and get back where it would take you, without actually moving the character.
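That dry-run idea could look roughly like this: run the step function on a copy of the state and return the result, leaving the character untouched. State and step() here are toy stand-ins, not CharacterStateEvent or the real mover:

```java
// Sketch of the dry-run idea: step a COPY of the state so the caller
// learns where an input would take the character without moving it.
public final class DryRun {

    public static final class State {
        public double x;
        public double vx;

        public State(double x, double vx) {
            this.x = x;
            this.vx = vx;
        }

        State copy() {
            return new State(x, vx);
        }
    }

    /** Toy movement step: accelerate by the input, then integrate. */
    static void step(State s, double input, double dt) {
        s.vx += input * dt;
        s.x += s.vx * dt;
    }

    /** Where the input would take us; the passed-in state is untouched. */
    public static double predictX(State s, double input, double dt) {
        State scratch = s.copy();
        step(scratch, input, dt);
        return scratch.x;
    }
}
```

Because the same step function drives both the dry run and the real movement, the prediction can never drift out of sync with the mover -- the property being asked for in this thread.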

2. If you are trying to move the AI to an exact location, I think that is not the correct way to go about pathing the AI. The goal should be to get sufficiently close. In particular, when following a path you should probably be looking to move to the next block before you're even over the preceding block.
 

Mike Kienenberger

Active Member
I think this isn't even possible -- until the movement is actually calculated through the physics engine, only estimations can be done.
I am aware of that, and that is why I said "close" and "closer" in the original message.

1. I think you overestimate the movement system if you think it has enough information to calculate what input will take a character to a location.
It doesn't otherwise have an advantage over anything else doing the estimate.
Why do you say that? The kinematic system knows exactly what formulas it is going to apply for the areas I gave above. Any module system will not know those formulas unless the module author actively tracks changes to the kinematics system and copies them into his own code.

Perhaps something that would help is the ability to try an input and get back where it would take you, without actually moving the character.
I considered this approach. An estimation method seems like a reasonable thing to provide in any case. But I still think that without the encapsulated knowledge available in the kinematics system, the caller will just be making guesses, and it may take a lot of guesses.

2. If you are trying to move the AI to an exact location, I think that is not the correct way to go about pathing the AI. The goal should be to get sufficiently close. In particular, when following a path you should probably be looking to move to the next block before you're even over the preceding block.
I am in agreement with both of these statements, and already had posted those ideas earlier today, although my posts were hidden in the behavior thread, so you may not have seen them.

http://forum.movingblocks.net/threads/behavior-trees.882/page-2

While we do not need to hit an exact location, we should at least be able to get within half-a-block or a fifth-of-a-block of a location.
 

Mike Kienenberger

Active Member
I will go ahead and just use the existing CharacterMovementComponent information and do my own estimates. Maybe I'm overestimating the effect of the other attributes on the result, and they are insignificant.
 

Mike Kienenberger

Active Member
I see two alternatives here:
1. A system mostly equal to what we have.
2. A non kinematic system.
I don't think we want a non-kinematic system.

Currently, the AI runs every 100ms, so the AI can only fire one movement change per 100ms. For my Terasology this works and is enough to make minions walk from one block to the next in about 2 seconds. However, it seems you have much slower ticks, which will lower the movement resolution to something totally unacceptable. This needs to be fixed -- independently of the minions' move code.
Mine appears to run about ten times slower than that. But even if it didn't, what you are saying makes no sense! 2 seconds to walk the distance of one block? That's far too slow. The default movement for a minion is 5 blocks per movement frame, so you're saying that you're seeing a minion move one fifth of its normal movement over the course of 20 movement frames. That's like 100 times slower than you would expect. Perhaps what you are seeing is the minion overshooting the block, then overshooting back, each time getting closer, and it's happening fast enough that it looks like it's working right? Obviously I am speculating, as I see it in slower motion :)

MoveAlongPath: This node is a decorator node. It takes a path and sets MinionMoveComponent.target to the first position of the path. Then it starts its decorated child node. When the child finishes, the next path step is set as the target and the child is called again. You are right, this is not perfect, and there are many tweaks possible here (for example, scan the path from the beginning and select the position that is nearest to the current position).
Yes, I have been so focused on debugging the MoveTo behavior that I haven't looked to see what other blocks are available. I knew there was something that was queuing MoveTo tasks, one for each node in the path. Since MoveAlongPath already has the right name and placement in the behavior tree, I'll look at reworking that to be smarter.
 

Mike Kienenberger

Active Member
I have submitted a patch for pathfinding that adjusts the movementDirection input event by several non-vertical factors so that the kinematic system's final moveDelta is the original desired position change. It doesn't yet consider gravity, non-zero y input, or running.
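The "adjust the input so the final moveDelta matches the desired change" approach can be illustrated on a deliberately simplified forward model. This matches the patch's stated limitations (no gravity, no vertical input, no running) but not its actual math, which lives against KinematicCharacterMover:

```java
// Illustration of running a movement model in reverse. ASSUMED forward
// model (not the engine's):
//   v'    = (v + input * accel * dt) * (1 - friction * dt)
//   delta = v' * dt
// Solving for input gives the adjustment below.
public final class InputAdjuster {

    public static double inputFor(double desiredDelta, double v,
                                  double accel, double friction, double dt) {
        double targetV = desiredDelta / dt;                   // velocity we need this frame
        double preFriction = targetV / (1.0 - friction * dt); // undo friction
        return (preFriction - v) / (accel * dt);              // undo acceleration
    }
}
```

Feeding the result back through the forward model reproduces the desired delta exactly, which is the round-trip property the real adjustment code has to preserve.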
 

Immortius

Lead Software Architect
Hmm. At the very least moving some of the calculations into the movement mode enum would allow you to use them too.
 

Mike Kienenberger

Active Member
Mine appears to run about ten times slower than that. But even if it didn't, what you are saying makes no sense! 2 seconds to walk the distance of one block? That's far too slow. The default movement for a minion is 5 blocks per movement frame, so you're saying that you're seeing a minion move one fifth of its normal movement over the course of 20 movement frames. That's like 100 times slower than you would expect.
Correction: the movement for a minion is 5 blocks per second, not per movement frame, so it moves one block per frame at 200ms. That makes it only 20 times slower than you would expect.

Immortius,

Now that I've done the work of adjusting for kinematics (although only partially so far), I would really like to eventually move this code (as well as a prediction method) into the engine. The adjustment code effectively runs the movement code in reverse, starting from a moveDelta and arriving at an input. This code has to be kept in sync with kinematics at all times. I propose to do so by creating a new system which will provide a method to adjust the input (I don't think events can pass data back to the caller, can they?), and I will factor the code that calculates moveDelta from the input into a separate method, so that a second prediction method can call the shared kinematics method and return a result to a module.
 

Immortius

Lead Software Architect
Events can be used to pass data back to the caller (although network events cannot do this across the network).
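The pattern being confirmed here -- an event carrying a mutable result field that a handler fills in and the sender reads back after dispatch -- can be sketched like this (class and field names are invented for the example, not actual engine types):

```java
// Sketch of an event that passes data back to its sender: the sender
// dispatches the event, a handling system fills in the result, and the
// sender reads it afterwards.
public final class PredictMoveEvent {
    private final double inputX;
    private Double predictedX;   // null until a handler fills it in

    public PredictMoveEvent(double inputX) {
        this.inputX = inputX;
    }

    public double getInputX() {
        return inputX;
    }

    public void setPredictedX(double x) {  // called by the handling system
        predictedX = x;
    }

    public Double getPredictedX() {        // read back by the sender
        return predictedX;
    }
}
```

As noted above, this only works locally: a network event's result field would not survive the trip across the wire.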

Sounds good. I'll look out for the PR. I guess I'll put some thought into how best to support characters with non-standard locomotion in the meantime -- like a frog that walks slowly but jumps far, or a crab that strafes fast but only moves slowly forwards.
 

synopia

Member
I fixed several problems. First, the timing was totally wrong. The behavior system was ticking at a fixed speed of 100ms (every 100ms the update methods of all relevant nodes were called). However, this did not actually happen every 100ms. In addition, the move input event took its delta time value from the current tick delta, which obviously confused the kinematic mover a lot ;)

With Mike Kienenberger's move adjustment code the movement looks really strange right now, so I "skipped" the code. Please review this with the new timings. Even without it, minion movement looks much better now.
 