Immortius,
The current inputs to the Kinematic Movement system don't work well for AIs.
Players go in a direction. AIs go to a specific location.
Trying to determine the CharacterMoveInputEvent parameters that will reach a specific location requires a lot of information that's already (and only) available in the PhysicsEngine and CharacterStateEvent. This seems best computed by the engine's KinematicCharacterMover itself rather than by some other mechanism, since the result depends directly on the exact algorithm the KinematicCharacterMover uses.
The first movement problem I was trying to solve is moving a short distance -- one block. This is currently the only movement distance for the behavior system, although I'm sure we will adjust this as we add more behavior blocks. It will always be a possible movement situation.
It is a problem because one unit is generally too small a distance. A default Terasology character normally covers about 5 units per second at walking speed, so we're going to overshoot the target. Because we are already moving at some velocity and are affected by all kinds of other inputs, computing the correct movement vector and yaw requires as much work as moving the object in the first place.
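Just to make the scale concrete, here is a back-of-the-envelope sketch. The 5 units/second walk speed comes from the paragraph above; the 200 ms interval between AI input decisions is purely my assumption for illustration, not an engine value.

```java
// Rough illustration of why a one-block move overshoots.
// walkSpeed is from the discussion above; decisionInterval is assumed.
public class OvershootSketch {
    public static void main(String[] args) {
        double walkSpeed = 5.0;        // units per second at walking speed
        double decisionInterval = 0.2; // seconds between AI decisions (assumed)
        double targetDistance = 1.0;   // one block

        // Distance traveled between two consecutive AI input decisions:
        double perDecision = walkSpeed * decisionInterval; // 1.0 unit
        System.out.printf("Units moved per decision: %.2f%n", perDecision);
        // The entire one-block move fits inside a single decision window,
        // so there is no opportunity to observe feedback and correct course:
        // the very first vector and yaw have to be exactly right.
    }
}
```

If those numbers are even roughly right, the whole move is over before the AI gets a second chance at the input, which is the "get it right the first time" problem described below.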
In real life, we have feedback telling us what yaw and movement vector we need, and we can vary our deceleration as we get closer to the target. Here it's an all-or-nothing approach -- you have to get it right the first time -- because the distance is so small compared to normal movement.
At least, that's how I'm seeing this first movement issue. I could be wrong, and frequently am on new topics.
The solution would seem to be some new movement modes. In addition to walk/swim/climb, we also need walk_to, swim_to, and climb_to, where instead of giving a vector and a yaw, we give a target. The kinematic system then calculates what yaw and movement (within our current movement parameters) will get us closest (or at least close) to the goal -- we may still overshoot because we're going too fast or turned the wrong way, but at least we're not doing so blindly.
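To sketch what I mean, here is roughly the calculation a walk_to mode might do each tick. None of these names are the engine's real classes -- it's a toy, and the speed and timestep values are assumptions:

```java
// Hypothetical sketch of a walk_to step: derive yaw and a per-tick movement
// from a target position, clamped so we never step past the goal.
public class WalkToSketch {
    public static void main(String[] args) {
        double posX = 0, posZ = 0;       // current character position
        double targetX = 1, targetZ = 0; // one block ahead
        double maxSpeed = 5.0;           // units/second (assumed default)
        double dt = 1.0 / 60.0;          // physics timestep (assumed)

        double dx = targetX - posX;
        double dz = targetZ - posZ;
        double dist = Math.sqrt(dx * dx + dz * dz);

        // Yaw the mover would turn the character toward:
        double yaw = Math.atan2(dx, dz);

        // Step length this tick: full speed, but never past the target.
        double step = Math.min(maxSpeed * dt, dist);
        double moveX = dist > 0 ? dx / dist * step : 0;
        double moveZ = dist > 0 ? dz / dist * step : 0;
        System.out.printf("yaw=%.3f step=(%.3f, %.3f)%n", yaw, moveX, moveZ);
    }
}
```

The point of putting this inside the mover is the clamp and the yaw derivation: the caller only supplies the target, and the mover applies its own speed limits, friction, and so on before committing the step.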
And the prediction system that determines the best vector and yaw can easily be kept in sync with the actual movement system, since they both reside in the same place. Optional parameters for this type of movement input could specify which kinds of movement feedback to ignore in the calculation, or how fuzzy the feedback should be. For example: consider friction, ignore it, or treat it as a random value drawn from a set range around the real friction value. The same goes for current velocity, yaw, pitch, gravity, rotation, slope factor, and whatever else we might be considering.
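The fuzziness idea is simple enough to show in a few lines. Everything here is hypothetical -- the friction value and the band width are made up for illustration, and this is not engine API:

```java
// Sketch of "fuzzy feedback": the prediction uses a random sample from a
// band around the true friction value instead of the exact value.
import java.util.Random;

public class FuzzyFeedback {
    public static void main(String[] args) {
        double realFriction = 0.6; // assumed value, for illustration only
        double fuzz = 0.1;         // +/- band the AI is allowed to "know"
        Random rng = new Random(42);

        // Sample uniformly from [realFriction - fuzz, realFriction + fuzz]:
        double perceived = realFriction - fuzz + rng.nextDouble() * 2 * fuzz;
        System.out.printf("perceived friction: %.3f%n", perceived);
        // fuzz = 0 gives the AI perfect information; a wider band makes its
        // prediction less reliable, which is the tunable "dumbness" knob.
    }
}
```

A fuzz of zero reduces to the exact-prediction case, so one parameter covers the whole range from omniscient to effectively ignoring that input.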
Right now, only current velocity, yaw, and pitch are available from what I can determine, but I still wouldn't want to try to calculate the rest of it even if the others were available.