ReleeSquirrel
Cervator edit: Moved to Suggestions and marked inactive. Other AI threads have come since, but this one still has a lot of neat stuff!
Skaldarnar edit: Merged with Esereja's AI thread, moved to incubator, added header
Name: Three Tier AI System
Summary: The Three Tier AI system is a concept for hierarchical AI design. Thanks to its hierarchical, three-tiered structure it can be used for animals, monsters, and humanlike AI alike. AI behavior can be customized via JSON configuration.
Scope: Core
Current Goal: Working and customizable low-level (tier 1) AI system (?), add social behavior
Phase: Design/Implementation
Curator: ReleeSquirrel, Esereja
Related: Miniions
One of the key systems I want to experiment with in Terasology is my idea of a three tiered AI system for person simulation.
The idea here is to simulate an AI complex enough to have humanlike interactions between AIs, and potentially with players as well. Ideally without breaking the bank on processor resources!
From what I've read about the mind, it's largely split between three levels. Animals that haven't changed much since ancient times still only have the first level or two, while humans rely especially on the third. These levels are instinct, emotion, and reason. I think that these can be simplified in order to produce some really engaging artificial people.
Instincts are your basic actions. The simplest animals operate purely on instinct, like reflex. Pure input-output, event and response. Very simple. A fly doesn't dodge your hand because it thinks, "OH SNAP! A HAND!" It feels the air pressure and the shadow and moves automatically. That's why they're so hard to catch. Most AI operates like this too. Event, reaction. It makes them predictable and ultimately very controllable.
Instinct is easily modeled with a standard state-based AI system.
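To make that concrete, here is a minimal sketch of the instinct tier as a plain state machine. This is illustrative only, not actual Terasology code; the states, stimuli, and class names are made up for the example.

```java
// A minimal sketch of the instinct tier as a plain state machine.
// The states, stimuli, and class names are hypothetical, not Terasology API.
import java.util.EnumMap;
import java.util.Map;

enum InstinctState { IDLE, FLEE, FEED }
enum Stimulus { SHADOW_OVERHEAD, FOOD_NEARBY, NOTHING }

class InstinctMachine {
    private InstinctState state = InstinctState.IDLE;
    // Pure input-output: every stimulus maps directly to a next state.
    private final Map<Stimulus, InstinctState> reflexes = new EnumMap<>(Stimulus.class);

    InstinctMachine() {
        reflexes.put(Stimulus.SHADOW_OVERHEAD, InstinctState.FLEE);
        reflexes.put(Stimulus.FOOD_NEARBY, InstinctState.FEED);
        reflexes.put(Stimulus.NOTHING, InstinctState.IDLE);
    }

    void onStimulus(Stimulus stimulus) {
        // Event in, reaction out -- no memory, no deliberation.
        state = reflexes.getOrDefault(stimulus, state);
    }

    InstinctState currentState() {
        return state;
    }
}
```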
On top of instinct is emotion. Emotion is what you feel. They say there are six basic emotions: Anger, Disgust, Fear, Happiness, Sadness and Surprise. http://en.wikipedia.org/wiki/Emotion
Different basic emotions combine with one another, and with instincts, to form sub-emotions. When I first read about emotional theory I was surprised that Love isn't a basic emotion; it's actually a form of Happiness or Joy, combined with reproductive and familial instincts.
Emotions inform and influence our choices. We're less likely to trust someone who has hurt us before, and more likely to trust someone we like. If we're afraid of something, we'll be more on guard. We avoid what disgusts us, and try to get rid of it.
Emotions are often attached to things, so I feel the best way to model emotions is to use a target-emotion pair list. Now, keeping a list of how you feel about EVERYTHING would be kind of nuts, so I would trim this into two lists. One is your short-term immediate feelings, what you felt about the last ten things you interacted with perhaps, to guide your immediate actions. You might be mad at a person, but you don't have an everlasting hate for them. You might be afraid of a rock that nearly fell on you, but you wouldn't fear that rock or all rocks forever. Then there'd be a long-term emotion list, for the things that are most important to you: how you feel about, say, the twenty or forty most important people to you, like friends and family, as well as long-term grudges and fears.
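Here is a rough sketch of those two lists, assuming a small capped short-term list and a larger capped long-term one. The class names, record fields, and capacities are placeholders, not a committed design.

```java
// A rough sketch of the two emotion lists described above: a small ring of
// recent feelings plus a capped long-term map. All names are illustrative.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

enum Emotion { ANGER, DISGUST, FEAR, HAPPINESS, SADNESS, SURPRISE }

record Feeling(String target, Emotion emotion, float intensity) {}

class EmotionMemory {
    private static final int SHORT_TERM_CAPACITY = 10;  // "the last ten things"
    private static final int LONG_TERM_CAPACITY = 40;   // the ~forty most important targets

    private final Deque<Feeling> shortTerm = new ArrayDeque<>();
    private final Map<String, Feeling> longTerm = new LinkedHashMap<>();

    void feel(Feeling feeling) {
        if (shortTerm.size() == SHORT_TERM_CAPACITY) {
            shortTerm.removeLast();  // old short-term feelings simply fade
        }
        shortTerm.addFirst(feeling);
    }

    void commitToLongTerm(Feeling feeling) {
        // Only the most important targets (friends, grudges, fears) are kept.
        if (longTerm.size() >= LONG_TERM_CAPACITY && !longTerm.containsKey(feeling.target())) {
            return;  // a real version would evict the least important entry instead
        }
        longTerm.put(feeling.target(), feeling);
    }
}
```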
Emotions would primarily inform the state machine. You act this or that way depending on your emotions. It becomes less reflex and more reaction, but it's still pretty simple. The third tier is where things really take off.
Reason is a special ability humans and some animals have, which allows us to predict the future and plan for it. Humans go way above and beyond any other animal though. We have really complex ideas like beliefs and plans and math and ethics. Effectively, Reason allows us to consider our situation and change our plans accordingly. Now, modeling that accurately would be kind of nuts. That's practically a Turing-level AI right there, isn't it? I don't think they've even got that working at MIT yet, and anyways it would take obscene amounts of processing power to handle even one AI at that level.
My idea for modeling reason in the three tier AI system is to use a collection of pre-scripted 'beliefs' along with a measurement for how firmly held that belief is, and to have the AI determine its course of action based on those beliefs. These beliefs will include things like ethics and personal goals. An ethic might be something like "Killing is Bad", and a personal goal would be something like "Own a House".
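A belief could be as simple as a statement plus a conviction value. Something like this sketch, where the belief kinds and numbers are only assumptions for illustration:

```java
// A sketch of the reason tier's data: pre-scripted beliefs with a strength
// value. The categories, field names, and ranges are assumptions.
enum BeliefKind { ETHIC, PERSONAL_GOAL }

record Belief(String statement, BeliefKind kind, float conviction) {
    // conviction in [0, 1]: how firmly the belief is held.
    Belief {
        conviction = Math.max(0f, Math.min(1f, conviction));
    }
}

// Example beliefs for one NPC:
// new Belief("Killing is Bad", BeliefKind.ETHIC, 0.9f)
// new Belief("Own a House",    BeliefKind.PERSONAL_GOAL, 0.6f)
```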
AIs would communicate with each other to try and convince each other of their beliefs. An example I gave someone while explaining this system before was an environmentalist and a litterbug. An environmentalist is hanging out by the river, and they see another NPC throw something away on the ground. The environmentalist gets angry because they saw someone violate one of their beliefs, and they go to talk to the litterbug. They share their belief with the litterbug, coloured by their emotion. The litterbug then compares the environmentalist's belief to their own beliefs. In this case, they will either accept the environmentalist's belief and feel ashamed, or they will find the environmentalist's belief to be against their current beliefs (or they just don't like his attitude) and end up with a negative opinion of the environmentalism belief. These choices are informed by a person's beliefs, and by their built-in 'persona', which would be something like a short stat tree of emotional variables like patience and empathy.
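The exchange above might boil down to a comparison roughly like this sketch, building on the Belief record sketched earlier. The persona stats and the formula are purely illustrative guesses, not a worked-out design.

```java
// A rough sketch of the belief-sharing exchange. Whether the listener adopts
// or rejects the speaker's belief depends on their own conviction and a simple
// "persona" of emotional stats. Everything here is hypothetical.
record Persona(float patience, float empathy) {}

class BeliefExchange {

    /** Returns true if the listener accepts the shared belief (and feels ashamed),
     *  false if they reject it and form a negative opinion of it instead. */
    static boolean share(Belief shared, float speakerAnger,
                         Belief listenersOwnView, Persona listener) {
        // An angry delivery is less persuasive unless the listener is patient.
        float attitudePenalty = speakerAnger * (1f - listener.patience());
        // Empathy makes the listener weigh the speaker's belief more heavily.
        float pressure = shared.conviction() * (0.5f + 0.5f * listener.empathy());
        float resistance = listenersOwnView.conviction() + attitudePenalty;
        return pressure > resistance;
    }
}
```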
The top tier AI is pretty complex, which can be difficult. If the AI is constantly checking a list of values and reorganizing their own state machine based on them, I expect that will be pretty processor intensive, especially if you have dozens or hundreds of these guys running around at once. So, I figure they'll only check their reason level once in a while, to make sure they're still on track, as well as when they have an ethical dilemma.
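In code, that might look something like a cooldown timer plus a dilemma flag. Again, just a sketch with made-up names and numbers:

```java
// A sketch of running the reason tier only occasionally: on a cooldown timer,
// or immediately when an ethical dilemma comes up. Names and values are illustrative.
class ReasonScheduler {
    private static final float CHECK_INTERVAL_SECONDS = 10f;  // assumed, tunable

    private float timeSinceLastCheck = 0f;
    private boolean dilemmaPending = false;

    void raiseDilemma() {
        dilemmaPending = true;  // e.g. an order that conflicts with "Killing is Bad"
    }

    /** Called every tick; returns true only when a full reason check should run. */
    boolean shouldRunReasonCheck(float deltaSeconds) {
        timeSinceLastCheck += deltaSeconds;
        if (dilemmaPending || timeSinceLastCheck >= CHECK_INTERVAL_SECONDS) {
            timeSinceLastCheck = 0f;
            dilemmaPending = false;
            return true;
        }
        return false;
    }
}
```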
One thing I was thinking about, in order to minimize the impact of sudden reason checks, is to do them in spare processing time. Stardock did that with their Galactic Civilizations games. While the game is running, even if it's your turn, the AI machine is running in the background, thinking about what to do next in other threads. As a result, the faster your computer is (and the slower you play), the smarter the AI is. By drawing out reason checks over several frames, it could become a constant process without having a noticeable impact on the game's standard processing.
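Drawing a reason check out over several frames could be done by giving it a small time budget per frame, roughly like this sketch. The budget value and the queue of per-belief evaluations are assumptions standing in for the real work.

```java
// A sketch of spreading a reason check over many frames: each frame gets a
// small time budget, and the check resumes where it left off next frame.
import java.util.ArrayDeque;
import java.util.Queue;

class AmortizedReasonCheck {
    private static final long FRAME_BUDGET_NANOS = 200_000;  // ~0.2 ms per frame, assumed

    private final Queue<Runnable> pendingEvaluations = new ArrayDeque<>();

    void enqueue(Runnable evaluateOneBelief) {
        pendingEvaluations.add(evaluateOneBelief);
    }

    /** Called once per frame; does as much reasoning as the budget allows. */
    void tick() {
        long start = System.nanoTime();
        while (!pendingEvaluations.isEmpty()
                && System.nanoTime() - start < FRAME_BUDGET_NANOS) {
            pendingEvaluations.poll().run();
        }
    }
}
```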
Anyways that's the theory. I had been intending to test this AI theory in an application specifically designed for it; something like Animal Crossing where the player moves into a village and interacts with complex AIs. But I think it could work in Terasology too, giving us interesting NPCs to interact with, and letting them have desires beyond just work.
What do you guys think?
I have another system that I think would tie in well with this. I'll write that in another post when I have enough time.