Implementation gestalt-entity-system

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I am not doing anything that should prevent either approach. I... am not sure about the approach itself (not sure why you would be classifying different components in that way) though.
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
In this particular case it is for preventing items from merging together in a stack when they differ by something other than the item component's stack ID. A good example is the durability system, where you can stack items with no durability damage, but not if they have been damaged.
Is there a better way to do this that I am unaware of? One could generate stack ID values that just append some value to the base string value, but that would get messy when compositing several item-differentiating components together.
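One way this could be sketched - entirely hypothetical names (`StackDifferentiating`, `stackKey` and so on are invented for illustration, not part of any actual API) - is to compose the effective stack ID from the base ID plus a key contributed by each component that wants to differentiate stacks, so a durability component only splits stacks once an item takes damage:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class StackKeySketch {

    /** Hypothetical marker for components that affect stackability. */
    interface StackDifferentiating {
        String stackKey();
    }

    static class Durability implements StackDifferentiating {
        final int damage;
        Durability(int damage) { this.damage = damage; }
        // Undamaged items all share one key; damaged ones differ by damage taken.
        public String stackKey() { return damage == 0 ? "" : "damage=" + damage; }
    }

    /** Effective stack id: base id composed with each differentiating component. */
    static String effectiveStackId(String baseId, List<StackDifferentiating> components) {
        StringJoiner joiner = new StringJoiner("|");
        joiner.add(baseId);
        for (StackDifferentiating c : components) {
            joiner.add(c.stackKey());
        }
        return joiner.toString();
    }

    /** Items merge only when their effective stack ids match. */
    static boolean canMerge(String baseId, List<StackDifferentiating> a, List<StackDifferentiating> b) {
        return effectiveStackId(baseId, a).equals(effectiveStackId(baseId, b));
    }

    public static void main(String[] args) {
        List<StackDifferentiating> pristine = Arrays.asList(new Durability(0));
        List<StackDifferentiating> worn = Arrays.asList(new Durability(3));
        System.out.println(canMerge("core:sword", pristine, pristine)); // true
        System.out.println(canMerge("core:sword", pristine, worn));     // false
    }
}
```

This keeps the base stack ID untouched and pushes the "messy" composition into one well-defined place, rather than into ad hoc string appending.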
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
@Immortius, something I stumbled across while tweaking the performance of the physics system is that there is a concept of uncommitted local changes that happen to LocationComponent in response to physics (like gravity). These changes are not explicitly saved back to the entity, so that each client can simulate smooth physics and to reduce the amount of bandwidth sent from the server to each connected client.

I am not seeing anything in your new ES or @Marcin Sciesinski's implementation that would suggest any ability to do this. Do you guys have any thoughts on how we could accommodate this?

[Edit: some key combination submitted this before I was done typing]
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
Regarding saving location every so often: we already have a LocationResynchEvent, sent every 500ms, which broadcasts the server location for the entities to each of the clients.

Possibly with a little refactoring we could better use @Replicate(initialOnly = true) to our advantage? That way we could continue saving LocationComponent without increasing bandwidth requirements. I tried this, and I believe there was still a lot of data being sent over the wire because the entity system was sending updates even if some of the values had not changed. If comparison of changed values improves in the next version, it could be a viable solution that does not involve shenanigans like not saving back to the entity.
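For reference, the shape being discussed looks roughly like this. The @Replicate annotation below is a simplified stand-in I wrote for this sketch, modelled on Terasology's annotation of the same name; the component fields and the selection logic are invented for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ReplicateSketch {

    /** Stand-in for Terasology's @Replicate, reduced to the initialOnly flag. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Replicate {
        boolean initialOnly() default false;
    }

    static class LocationComponent {
        @Replicate(initialOnly = true)
        float x, y, z;            // sent once; later motion is client-simulated

        @Replicate
        String attachedTo = "";   // always kept in sync
    }

    /** Names of the fields that would be serialized for a given sync pass. */
    static List<String> fieldsToSend(Class<?> type, boolean initialSync) {
        List<String> names = new ArrayList<>();
        for (Field f : type.getDeclaredFields()) {
            Replicate r = f.getAnnotation(Replicate.class);
            if (r != null && (initialSync || !r.initialOnly())) {
                names.add(f.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(fieldsToSend(LocationComponent.class, true));  // all fields
        System.out.println(fieldsToSend(LocationComponent.class, false)); // only attachedTo
    }
}
```

The point of initialOnly here is exactly the bandwidth argument above: position would be replicated once on connect, and the regular resynch path would carry only the fields that genuinely need ongoing sync.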

(also... you guys and your IRC conversations. ;) Collaboration is hard when not all parties are present)
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
Instant feedback from sending lots of data across the wire? For most location stuff, all animation-style work is done on the client side; the server resynch is just to ensure consistency with the server. <confused, but I suspect there is a good point you are driving at>
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Slack can be handy too, easier to ping people to see if they're around :coffee:

Probably we should be more on top of transcribing what happens on IRC into a forum thread if there are some new notes of interest? We have the IRC log in Slack.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I have been making some progress on the event system, which is visible in my fork of gestalt.

First thing of note is that core event processing makes full use of transactions. The transaction an event is running in acts as a cache of the state of the target entity, providing all the components for the event handlers. I tweaked how transactions work a touch in the current reference implementation to go with this - whenever an entity is touched, all of its components are pulled into the transaction. This implies that later, when the entity system is being optimised, obtaining all the components of an entity at once is an important operation to focus on. It fits well with the revision number tracking being at the entity level too.

Beyond that, the core event processing has been cleaned up a lot. In Terasology the event system tries to filter out event handlers at the beginning of processing an event, based on available components. This has been thrown away, and the checking is now done for each event handler as the processor works through the list. Pulling event handler methods out of an object has been moved into a helper class separate from the core processing class too. Networking stuff is all removed - this would be added at a higher level as desired.

Event handler priority has also been removed, in favor of explicit @Before and @After annotations that allow ordering before or after the class providing an event.

Moving out of core processing, events can now be either Synchronous or Asynchronous (with Asynchronous being the default). Synchronous events are run immediately and within the transaction sending them (if any). Asynchronous events are only run after the transaction sending them has been successfully committed, and can run at any time depending on how the EventSystem running them is implemented - the only guarantee is that when EventSystem::processEvents() is called, it will block until all outstanding events have been processed (including any events generated while processing them). Two implementations exist so far - an EventSystem that runs all events immediately, and an EventSystem that queues up Asynchronous events until processEvents() is called.

For Asynchronous events, if the transaction fails to commit at the end due to a ConcurrentModificationException (conflict with another, completed transaction) then the event will automatically be re-run.
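A minimal sketch of the queued variant, with stand-in types (the real gestalt classes will differ - this just illustrates the synchronous/asynchronous split and the processEvents() drain):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class QueuedEventSystemSketch {

    /** Stand-in event type; the real API marks sync/async differently. */
    interface Event {
        boolean isSynchronous();
    }

    static class QueuedEventSystem {
        private final Queue<Event> pending = new ArrayDeque<>();
        final List<Event> processed = new ArrayList<>();

        void send(Event event) {
            if (event.isSynchronous()) {
                run(event);          // immediately, inside the sender's transaction
            } else {
                pending.add(event);  // deferred until processEvents() is called
            }
        }

        /** Drains the queue, including any follow-on events queued while draining. */
        void processEvents() {
            while (!pending.isEmpty()) {
                run(pending.poll());
            }
        }

        private void run(Event event) {
            processed.add(event);    // real processing would walk the handler chain
        }
    }

    public static void main(String[] args) {
        QueuedEventSystem system = new QueuedEventSystem();
        system.send(() -> false);                    // asynchronous: queued
        System.out.println(system.processed.size()); // 0
        system.processEvents();
        System.out.println(system.processed.size()); // 1
        system.send(() -> true);                     // synchronous: runs at once
        System.out.println(system.processed.size()); // 2
    }
}
```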

The event receiver signature is also a little different:

Code:
EventResult handler(Event event, long entityId, Transaction transaction [, ComponentA a, ComponentB b...])
EventResult is an enum which is either CONTINUE, COMPLETE or CANCEL. Continue tells the event processor to continue with the next handler. Complete tells the event processor to stop calling handlers and treat the event as successful. Cancel tells the event processor to stop calling handlers and treat the event as failed - the transaction will be rolled back.
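A toy version of that chain processing - DamageEvent and the Handler interface are invented here purely to show the EventResult semantics:

```java
import java.util.Arrays;
import java.util.List;

public class EventResultSketch {

    enum EventResult { CONTINUE, COMPLETE, CANCEL }

    static class DamageEvent {
        final int amount;
        DamageEvent(int amount) { this.amount = amount; }
    }

    /** Simplified handler signature (no Transaction or components here). */
    interface Handler {
        EventResult handle(DamageEvent event, long entityId);
    }

    /** Walks the handler chain until one returns COMPLETE or CANCEL. */
    static EventResult process(DamageEvent event, long entityId, List<Handler> handlers) {
        for (Handler handler : handlers) {
            EventResult result = handler.handle(event, entityId);
            if (result != EventResult.CONTINUE) {
                return result;  // CANCEL would roll the transaction back
            }
        }
        return EventResult.COMPLETE;
    }

    public static void main(String[] args) {
        // First handler vetoes non-positive damage; second applies it.
        List<Handler> chain = Arrays.asList(
            (event, id) -> event.amount <= 0 ? EventResult.CANCEL : EventResult.CONTINUE,
            (event, id) -> EventResult.COMPLETE);
        System.out.println(process(new DamageEvent(5), 1L, chain)); // COMPLETE
        System.out.println(process(new DamageEvent(0), 1L, chain)); // CANCEL
    }
}
```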
 

Florian

Active Member
Contributor
Architecture
I have been making some progress on the event system,which is visible in my fork of gestalt
Congratulations on reaching this milestone, then.

First thing of note is that core event processing makes full use of transactions. The transaction an event is running in acts as a cache of the state of the target entity, providing all the components for the event handlers. I tweaked how transactions work a touch in the current reference implementation to go with this - whenever an entity is touched, all of its components are pulled into the transaction. This implies that later, when the entity system is being optimised, obtaining all the components of an entity at once is an important operation to focus on. It fits well with the revision number tracking being at the entity level too.
Do you plan on adding something that prevents big entities from being copied unnecessarily each time you look at them? Also, what about entities with a lot of data that have some of their small components modified very frequently? E.g. a character might have a ton of components from different modules that add their own character-specific static metadata (e.g. descriptions of the armor, quests the player has in progress, etc.), but on the other side has a lot of frequently updated data (location, mana bar, etc.).


Beyond that, the core event processing has been cleaned up a lot. In Terasology the event system tries to filter out event handlers at the beginning of processing an event, based on available components. This has been thrown away, and the checking is now done for each event handler as the processor works through the list.
That sounds like you removed an optimization, so I am curious what your reasoning was.

Pulling events handler methods out of an object has been moved into a helper class separate from the core processing class too. Networking stuff is all removed - this would be added at a higher level as desired.
Sounds reasonable.

Event handler priority has also been removed, in favor of explicit @Before and @After annotations that allow ordering before or after the class providing an event.
With "class providing an event" I guess you mean the system that defines a handler for it. Sounds good - this was long overdue.

Moving out of core processing, events can now be either Synchronous or Asynchronous (with Asynchronous being the default). Synchronous events are run immediately and within the transaction sending them (if any). Asynchronous events are only run after the transaction sending them has been successfully committed, and can run at any time depending on how the EventSystem running them is implemented - the only guarantee is that when EventSystem::processEvents() is called, it will block until all outstanding events have been processed (including any events generated while processing them). Two implementations exist so far - an EventSystem that runs all events immediately, and an EventSystem that queues up Asynchronous events until processEvents() is called.

For Asynchronous events, if the transaction fails to commit at the end due to a ConcurrentModificationException (conflict with another, completed transaction) then the event will automatically be re-run.
Having the option to still use synchronous events is good, as I guess most of the existing code relies on that.

The event receiver signature is also a little different:

Code:
EventResult handler(Event event, long entityId, Transaction transaction [, ComponentA a, ComponentB b...])
entityId? What happened to EntityRef? Maybe it should still be allowed to formulate event handlers in the old format, and have the old format default to synchronous event handling.

EventResult is an enum which is either CONTINUE, COMPLETE or CANCEL. Continue tells the event processor to continue with the next handler. Complete tells the event processor to stop calling handlers and treat the event as successful. Cancel tells the event processor to stop calling handlers and treat the event as failed - the transaction will be rolled back.
I like the idea of having a return value instead of a consumable event.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Do you plan on adding something that prevents big entities from being copied unnecessarily each time you look at them? Also, what about entities with a lot of data that have some of their small components modified very frequently? E.g. a character might have a ton of components from different modules that add their own character-specific static metadata (e.g. descriptions of the armor, quests the player has in progress, etc.), but on the other side has a lot of frequently updated data (location, mana bar, etc.).
That is a good question. I'm leaving optimisation to the side for the moment to focus on the desired behavior, but this probably should be set to copy the component when the component is first requested, not when the entity is pulled into the transaction.

That sounds like you removed an optimization, so I am curious what your reasoning was.
Partially correctness of behavior, partially an actual optimisation. The incorrectness of behavior was around what would happen if an earlier handler added a component that would make a later handler valid - in terasology because the filtering occurs before processing the chain the later handler would already be removed. The optimisation is because the correct components were being checked again before each call anyway. Just taking the entire chain and then checking each handler as it is reached is simpler and more correct.
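The per-handler check could be sketched like this (invented names throughout; the point is that the component check happens as each handler is reached, so a component added mid-chain enables later handlers):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ChainCheckSketch {

    static class Handler {
        final String name;
        final Set<String> required;
        final String adds;  // component this handler adds to the entity, or null

        Handler(String name, String adds, String... required) {
            this.name = name;
            this.adds = adds;
            this.required = new HashSet<>(Arrays.asList(required));
        }
    }

    /** Returns the names of the handlers that actually ran. */
    static List<String> process(Set<String> entityComponents, List<Handler> chain) {
        List<String> ran = new ArrayList<>();
        for (Handler handler : chain) {
            // The component check happens here, as each handler is reached,
            // rather than once up front before the chain starts.
            if (entityComponents.containsAll(handler.required)) {
                ran.add(handler.name);
                if (handler.adds != null) {
                    entityComponents.add(handler.adds);
                }
            }
        }
        return ran;
    }

    public static void main(String[] args) {
        Set<String> components = new HashSet<>(Arrays.asList("Flammable"));
        List<Handler> chain = Arrays.asList(
            new Handler("igniter", "Burning", "Flammable"),
            new Handler("burnDamage", null, "Burning"));  // only valid once Burning exists
        System.out.println(process(components, chain));   // [igniter, burnDamage]
    }
}
```

Under up-front filtering, "burnDamage" would have been dropped before the chain started, because the entity had no Burning component yet.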

With "class providing an event" I guess you mean the system that defines a handler for it. Sounds good - this was long overdue.
In general, yes - it would be the component system class in Terasology.

Having the option to still use synchronous events is good, as I guess most of the existing code relies on that.
It is necessary for query-style events at the very least.

entityId? What happened to EntityRef? Maybe it should still be allowed to formulate event handlers in the old format, and have the old format default to synchronous event handling.
Currently EntityRef is not a thing in the new entity system. Even if it was, it would lose all its methods, pretty much - I don't have a good answer for how the methods would work in the presence of transactions and multiple threads. Also I don't want to eat the potential memory usage and churn of essentially wrapping a long in a class without thinking it through. I may well end up reintroducing it, we'll see how things turn out.

Other than that, I think the Transaction is important enough to include, otherwise there is no way to do a lot of actions.
 

Florian

Active Member
Contributor
Architecture
That is a good question. I'm leaving optimisation to the side for the moment to focus on the desired behavior, but this probably should be set to copy the component when the component is first requested, not when the entity is pulled into the transaction.
In a lot of cases components are only obtained for reading data. That might also be a point where an optimization could be added.

Partially correctness of behavior, partially an actual optimisation. The incorrectness of behavior was around what would happen if an earlier handler added a component that would make a later handler valid - in terasology because the filtering occurs before processing the chain the later handler would already be removed. The optimisation is because the correct components were being checked again before each call anyway. Just taking the entire chain and then checking each handler as it is reached is simpler and more correct.
Hmm, that seems to me like a corner case that doesn't make any difference in common scenarios. On the other hand, I thought the optimization of filtering out relevant systems at the start was very important for generic events like OnChangedComponent, which will probably have a ton of handlers and a lot of calls.


Currently EntityRef is not a thing in the new entity system. Even if it was, it would lose all its methods, pretty much - I don't have a good answer for how the methods would work in the presence of transactions and multiple threads. Also I don't want to eat the potential memory usage and churn of essentially wrapping a long in a class without thinking it through. I may well end up reintroducing it, we'll see how things turn out.
Hmm, I don't really see the manpower in Terasology (by which I mainly mean motivated people to do just boring porting work) to make all the adjustments necessary for a big API change. I, for example, have a student to mentor and want to get some content (and thus example code) done, which I think is currently more important for Terasology.

If we really want event transactions with multithreading for Terasology, then we could put the transaction in a thread-local variable (https://docs.oracle.com/javase/8/docs/api/java/lang/ThreadLocal.html) and access it from the methods of EntityRef.
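A minimal sketch of that suggestion - all type names here are invented stand-ins, not the gestalt or Terasology classes:

```java
public class ThreadLocalTxSketch {

    static class Transaction {
        boolean committed;
        void commit() { committed = true; }
    }

    /** The current transaction for this thread, if any. */
    static final ThreadLocal<Transaction> CURRENT = new ThreadLocal<>();

    static class EntityRef {
        final long id;
        EntityRef(long id) { this.id = id; }

        /** EntityRef methods would reach the transaction through the ThreadLocal. */
        Transaction currentTransaction() {
            Transaction tx = CURRENT.get();
            if (tx == null) {
                throw new IllegalStateException("No transaction active on this thread");
            }
            return tx;
        }
    }

    /** Runs work inside a per-thread transaction, committing on success. */
    static void inTransaction(Runnable work) {
        Transaction tx = new Transaction();
        CURRENT.set(tx);
        try {
            work.run();
            tx.commit();
        } finally {
            CURRENT.remove();  // never leak the transaction to later work
        }
    }

    public static void main(String[] args) {
        EntityRef ref = new EntityRef(42);
        inTransaction(() -> System.out.println(ref.currentTransaction() != null)); // true
    }
}
```

The obvious caveat, as noted below, is that this assumes one transaction per thread, and EntityRef methods fail outside any transaction.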

Other than that, I think the Transaction is important enough to include, otherwise there is no way to do a lot of actions.
What actions do you want to do with transactions?

Maybe just an "auto save components at end of event processing" feature would be enough for Terasology.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
In a lot of cases components are only obtained for reading data. That might also be a point where an optimization could be added.
True.

Hmm, that seems to me like a corner case that doesn't make any difference in common scenarios. On the other hand, I thought the optimization of filtering out relevant systems at the start was very important for generic events like OnChangedComponent, which will probably have a ton of handlers and a lot of calls.
Events with triggering components are probably the one space where this could help. But... the structure Terasology uses to make the initial filtering efficient loses order, so the handlers have to be sorted afterwards, which is also a cost. Anyway, without metrics we're just arguing over airy supposition, which is pointless.

Hmm, I don't really see the manpower in Terasology (by which I mainly mean motivated people to do just boring porting work) to make all the adjustments necessary for a big API change. I, for example, have a student to mentor and want to get some content (and thus example code) done, which I think is currently more important for Terasology.

If we really want event transactions with multithreading for Terasology, then we could put the transaction in a thread-local variable (https://docs.oracle.com/javase/8/docs/api/java/lang/ThreadLocal.html) and access it from the methods of EntityRef.
To make things very clear, I don't care whether Terasology ends up using gestalt-entity-system or not. If the design is too much work for Terasology to integrate, don't. It isn't useful for Terasology in its current state anyway.
Yes, I could have some static singleton that tracks the current transaction per thread, if I'm willing to assume only one transaction per thread. EntityRef would only work if there *is* a current transaction in that case, though.


What actions do you want to do with transactions?

Maybe just an "auto save components at end of event processing" feature would be enough for Terasology.
Without a transaction you cannot addComponents, removeComponents or send events. Or work with other entities.



Edit:

I guess thinking further on this...
* EntityRefs are likely going to be valuable in dealing with prefabs, and could also allow entities created in a transaction to only be given an id during commit.
* Assuming entity refs are reintroduced, I wouldn't have any issue with supporting handlers that did not receive the transaction.
* On components only being obtained for reading... Certainly components only being interfaces allows for cleverness around how the internal data is managed. That data could be a reference to shared data until a modification occurs, which would trigger a copy. The complication is mutable values such as vectors - as there is no way to track changes to these values the copy would have to occur whenever a mutable value is accessed.
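That last bullet could look something like this copy-on-write sketch (invented types; reads share the stored data, and the first write - or, per the caveat above, any access to a mutable value - triggers the copy):

```java
public class CopyOnWriteSketch {

    /** Stand-in for a component's internal data. */
    static class LocationData {
        float x, y;
        LocationData copy() {
            LocationData d = new LocationData();
            d.x = x;
            d.y = y;
            return d;
        }
    }

    /** Component view that shares backing data until the first write. */
    static class LocationView {
        private final LocationData shared;
        private LocationData owned;   // non-null once a copy has been made

        LocationView(LocationData shared) { this.shared = shared; }

        float getX() { return data().x; }        // plain read: no copy needed

        void setX(float x) { mutable().x = x; }  // write: triggers the copy

        boolean isCopied() { return owned != null; }

        private LocationData data() { return owned != null ? owned : shared; }

        private LocationData mutable() {
            if (owned == null) {
                owned = shared.copy();
            }
            return owned;
        }
    }

    public static void main(String[] args) {
        LocationData stored = new LocationData();
        stored.x = 1f;
        LocationView view = new LocationView(stored);
        System.out.println(view.isCopied()); // false: reads share the data
        view.setX(5f);
        System.out.println(view.isCopied()); // true: the write copied it
        System.out.println(stored.x);        // 1.0: the original is untouched
    }
}
```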
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Another update.

I have (re)introduced EntityRef and reworked the API around it. All adding/removing/obtaining of components is done through entity refs, which automatically hook into the current transaction on the thread (an exception is thrown if no transaction is running). As a result, there is no longer any way to mutate entities outside of transactions. In general, the way the methods behave within a transaction (changes to components being live within the transaction and automatically applied when it is committed) vs outside of one (retrieved components being detached from entities) is just too different to support both with the entity ref methods. I could always add an interface over short-lived transactions to fetch and update components, I suppose.

This change allowed the transaction API to be largely streamlined, so there is no longer any Transaction class. Event handler parameters are down to the event and entity (plus any desired components), as in Terasology.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Prefab support has been added now.

Unlike in Terasology, prefabs are now recipes for one or more entities. This allows for better use of multiple-entity structures.

A fully featured entity prefab looks like this:

Code:
{
  "inherit" : "core:parent",
  "root" : "person",
  "entities" : {
    "person": {
      "player": {
        "name": "Fred"
      },
      "inventory" : {
        "items" : [
          "core:torch", "core:torch", "extension:axe", "journal"
        ]
      }
    },
    "journal": {
      "book": {
        "cover": "Journal"
      }
    }
  }
}
Firstly, this prefab inherits from a parent prefab "core:parent". It will contain all the entity recipes from this parent, and they will have all the components and the settings for those components from that parent. If an entity with the same id is defined in this prefab, then any components defined for it will add to or update the inherited components for that entity.

Next the prefab has a root. This is the entity that is returned when the prefab is instantiated, unless a map of all the entities created is requested. Additionally, if there is a reference to the prefab this is the entity that the reference will point to - but more on that later. If no root is defined, then the inherited root is used, followed by an entity called "root", followed by the first entity in the file.

Next come the entity recipes. Each is identified by a name - in this case "person" and "journal". They then contain component definitions. As in Terasology, the name of each component is used to look up a Component class, either with the same name or the name with "Component" appended. Within each component, the values of any of that component's properties can be supplied.

Of special note is how EntityRef properties are handled. In this prefab, the person's inventory has a list of items as a List<EntityRef> (well, I just realized this isn't supported yet, but it will do for describing expected behaviors). Three of these references are to other prefabs - two to "core:torch" and one to "extension:axe". When the prefab is instantiated, each external prefab reference is instantiated, and the EntityRef property is populated with a reference to the instantiated entities. So in this case two torch entities and an axe entity will be instantiated.

The final reference, "journal", is an internal reference. This will point to the journal entity that is part of the same prefab. No matter how many references there are to an internal entity, only one will be generated - in contrast to external references, where one entity will be generated for each reference.
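The one-entity-per-external-reference vs one-shared-internal-entity rule could be sketched like this (an invented resolver, not the gestalt API; here a ":" in the reference is taken to mark a fully qualified external prefab, matching the convention in the example above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PrefabRefSketch {

    static long nextId = 1;
    static final Map<String, Long> internalEntities = new HashMap<>();

    /** Resolves one EntityRef property value to an entity id. */
    static long resolve(String ref) {
        if (ref.contains(":")) {
            // External prefab reference: instantiate a fresh entity per mention.
            return nextId++;
        }
        // Internal reference: one entity shared by every mention.
        return internalEntities.computeIfAbsent(ref, k -> nextId++);
    }

    public static void main(String[] args) {
        List<String> items = Arrays.asList("core:torch", "core:torch", "extension:axe", "journal");
        List<Long> resolved = new ArrayList<>();
        for (String ref : items) {
            resolved.add(resolve(ref));
        }
        System.out.println(resolved);                        // two distinct torch entities
        System.out.println(resolve("journal") == resolved.get(3)); // true: same journal
    }
}
```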


Some shortcuts are allowed - for a simple, single-entity prefab the minimum structure is:

Code:
{
  "entity": {
    "sample": {
      "name": "Test Name"
    }
  }
}
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Very good to see more progress :)

I figure this might be interesting to @Flo and @Josharias, maybe particularly to @xtariq, as multi-entity setups have come up in relation to topics like Anatomy, yet exactly how they might work has been up in the air. Even though this wouldn't be available for a while even in the best case, it is probably something interesting to think about. We've also talked a bit about how we might "append" to an existing prefab rather than purely override or delta it (imagine a module easily adding a new drop to a block from another module that already has custom drops).

Maybe it also makes a difference to something like DynamicCities with @Cpt. Crispy Crunchy @msteiger and @Skaldarnar - although it certainly won't be usable during the GSOC period. Again though, food for thought!

I figure the main/only thing that marks the "journal" as internal is the lack of a qualifier in front? Any chance of mixups between internal references and shorthand references to unambiguous prefabs?
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I figure the main/only thing that marks the "journal" as internal is the lack of a qualifier in front? Any chance of mixups between internal references and shorthand references to unambiguous prefabs?
That is correct, and a fair comment. Yes, it is expected that all external prefabs referenced are fully qualified. Would it be better to add a bit more syntax here, such as:

Code:
    "ref" : ["Prefab(core:torch)", "Prefab(torch)", "Local(journal)", "journal"]
Where the first two are external prefabs and the last two are local entities?
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
I do like how clean the first version was. Could "internal:something" work, with internal just being a keyword meaning to look elsewhere in the prefab?

Although if you have more stuff in mind with the extra syntax maybe that'd be superior.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
That will work, as long as no one uses "internal" as a module id.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Could we add support for a minimal module blacklist to either gestalt-module or whatever uses it? Then if a module on the list is encountered during scanning, it would be skipped and a warning logged. That could help safeguard against such keywords.

Maybe that could also be a tool for future hosting providers if for some reason (resource utilization?) they'd want to prevent server admins from spinning up a server with particular modules enabled. Not that it would necessarily be hard to rename a standalone module, but if it is referenced by other modules at all, a rename wouldn't go very far anyway.

Probably a bit of an edge case, but that's one way to guarantee "internal" will remain reserved.
 