Skysphere

manu3d

Active Member
Contributor
Architecture
Being interested in astronomy, I found that issues #94 to #97, and in particular issue #96, caught my eye.

As I begin getting acquainted with the codebase and all things Terasologicals, I thought I'd start a discussion on the issue so that (ideally) by the time I can write some code, we are all happy with what is supposed to be implemented.

The first thing I thought is that all these issues, in particular #94 to #97, are really just one: we need a system to show things in the sky that are fundamentally unrelated to the geometry of the world. Of course we could just have a single texture mapped on a rotating sphere and just live with that. But that isn't particularly flexible, is it?

A Skysphere module that allows separate (user-generated?) submodules sounds more flexible to me. One submodule could be responsible for solar system objects (sun(s), planets, moon(s), comets, asteroid belts), one submodule could be responsible for the background stars, and other submodules could take care of meteors and northern lights as mentioned in issue #97. If necessary, features of each submodule could be disabled, to allow a more specialized submodule to do its job. E.g. the solar-system submodule might get sun and moon disabled so that a more specialized submodule can better deal with things such as eclipses.

I don't know enough of Terasology's rendering pipeline, but the way I think of it is that each module should define a mini-scene to be rendered in a context that is completely separate from the world's rendering context. For example the solar system submodule would provide an actual 3D, animated solar system ready to be rendered, the camera positioned on the appropriate planet and oriented according to the player's look vector. Similarly the stars submodule would provide a number of stars positioned in a 3D space, ready to be rendered. Interestingly, with this kind of 3D-driven approach, a few things would occur "for free", i.e. eclipses and the subtle parallax effect that allows near-star distances to be calculated (why you would want to do that is a different matter). Also, if traveling to other planets were to be allowed (even by just using portals), providing an updated planetary view would be just a matter of changing the position of the camera in the miniature solar system.

How each submodule defines the data and its appearance would be up to the submodule. E.g. a submodule could define just planet positions as a function of time, while asset-only submodules would provide the actual models and textures for planets, stars and whatever else. Another submodule could provide an all-in-one package instead. What's important is that they all provide a scene to be rendered, the rendering order of each scene being defined in the skysphere module.

Submodules could also send events where warranted. E.g. a meteor module could send an event detailing the characteristics of a fireball crossing the sky so that the world can be suddenly lit up accordingly. Modules could also detect events such as alignments and eclipses and inform whatever subscribing system would like to be notified. Also, submodules should provide their own API, so that things such as flaring a star on demand, accelerating time so that planets move at dizzying speed, or even stopping time so that it's always daytime, become possible.
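The event idea above could look something like this minimal, self-contained Java sketch. Everything here (SkyEventBus, FireballEvent) is an invented name for illustration, not an existing Terasology API; a real implementation would presumably use the engine's own event system instead:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch only: layers publish sky events, interested systems
// subscribe by event type. None of these names are real Terasology APIs.
public class SkyEvents {
    // A meteor event carrying enough data for the world to be lit up accordingly.
    record FireballEvent(double azimuthDeg, double elevationDeg, double brightness) {}

    static class SkyEventBus {
        private final Map<Class<?>, List<Consumer<Object>>> subscribers = new HashMap<>();

        <T> void subscribe(Class<T> type, Consumer<T> handler) {
            subscribers.computeIfAbsent(type, k -> new ArrayList<>())
                       .add(event -> handler.accept(type.cast(event)));
        }

        void publish(Object event) {
            subscribers.getOrDefault(event.getClass(), List.of())
                       .forEach(handler -> handler.accept(event));
        }
    }

    public static void main(String[] args) {
        SkyEventBus bus = new SkyEventBus();
        List<Double> observed = new ArrayList<>();
        // The world-lighting code subscribes to fireballs...
        bus.subscribe(FireballEvent.class, e -> observed.add(e.brightness()));
        // ...and the meteor layer publishes one as it crosses the sky.
        bus.publish(new FireballEvent(120.0, 45.0, 0.8));
        System.out.println("observed brightness: " + observed);
    }
}
```

The point is the decoupling: the meteor layer publishes without knowing who listens, and the lighting code reacts without knowing about meteors.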

It's 3:27am and -perhaps- it's time I go to bed. I'll start to detail some of the submodules in a further post in this thread over the next few days. What would be interesting is to get some early feedback. Is the overall picture a reasonable/feasible one? Especially the idea of multiple rendering contexts? Also, besides nearby objects (suns/planets/moons/comets/asteroid belts), deep space objects (stars/galaxies/nebulae) and upper-atmospheric events such as northern lights and meteors, what else would you like to see up there?

By the way. Who said Terasology should be a planet? Could it be a large moon orbiting a gas giant instead? Might the giant have rings, like Saturn? Or might Terasology itself have rings? I guess for once the sky -isn't- the limit! :)

Ciao!

Manu
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Thanks for getting this post in!

This area is wide open for interpretation - there is no set rule for where any single world in Terasology actually exists, it could be a planet, a moon, a strange extra-dimensional plane, a cubic planet rather than a sphere, etc :)

Pretty much everything you say would be awesome and cool, it just comes down to somebody being willing to spend the effort while keeping it performing well. I am very eager to eventually get a system for arbitrary celestials and the events that would come with it, like support for a magic system impacted heavily by the alignment of planetary bodies and such. It isn't high priority either, but that just makes it even more contributor-friendly, as there's no time constraint on when we need it.

Immortius has been talking about maybe working in multi-world support sometime so he might have some ideas on that topic. Would be neat with a "real" solar system powering the skysphere, although that's probably one of those things that sounds really cool but may be better "faked" for performance reasons.

begla probably knows the most about the skysphere, so I'm going to poke him about this thread too :) He added some neat sky shadows that sweep over the ground, although I think they're fully simulated so they might be out of sync with actual clouds (they also affected lighting inside caves and such, heh - unsure if that's still a thing)

Adeon also did some skysphere stuff, but it was a long time ago - back when we had actual procedural clouds and such, although I think that resulted in some performance issues (now it is just a static texture + two shaders to power the sun and moon)

Also fun fact: There's currently a second (small, weak) "sun" baked into the current skysphere texture by accident - you can see it at the horizon at all times :D
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
My thoughts:

At the moment we have a single camera based rendering system. What we should do is add support for multiple cameras, with each rendering different entities based on some sort of label/filter system. These cameras would be given a rendering order.

With that in place the skysphere becomes simple - you have a camera at the origin, with its rotation synched with the player's camera. Then around the origin you have the skysphere scene, with everything tagged to be rendered by the skysphere camera. Extension is simple - a module would just add new entities with the skysphere tag, with whatever relevant behavior.
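As a toy illustration of the camera/filter idea (all names invented, nothing here is engine code, and the "rendering" is just a list of draw calls): each camera holds a tag and a rendering order, and a frame is drawn by running the cameras in order, each rendering only the entities that carry its tag.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Toy sketch of multiple cameras with a label/filter system and rendering order.
public class LayeredCameras {
    record Entity(String name, Set<String> tags) {}
    record Camera(String tag, int order) {}

    // Run the cameras in rendering order; each draws only the entities carrying its tag.
    static List<String> renderFrame(List<Camera> cameras, List<Entity> scene) {
        List<String> drawCalls = new ArrayList<>();
        cameras.stream()
               .sorted(Comparator.comparingInt(Camera::order))
               .forEach(cam -> scene.stream()
                   .filter(e -> e.tags().contains(cam.tag()))
                   .forEach(e -> drawCalls.add(cam.tag() + ":" + e.name())));
        return drawCalls;
    }

    public static void main(String[] args) {
        // The skysphere camera renders first (order 0), the world camera second.
        List<Camera> cameras = List.of(new Camera("world", 1), new Camera("sky", 0));
        List<Entity> scene = List.of(
            new Entity("sun", Set.of("sky")),
            new Entity("tree", Set.of("world")));
        System.out.println(renderFrame(cameras, scene)); // [sky:sun, world:tree]
    }
}
```

A module extending the sky would then just add entities tagged "sky"; the skysphere camera picks them up with no further wiring.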
 

manu3d

Active Member
Contributor
Architecture
Thank you Cervator and Immortius for your replies (and Adeon for the support!)

Cervator: neat gameplay idea about planetary alignments affecting a magic system. It might turn out to be the only thing that will ever justify me looking into astrology! :laugh: Which kind of makes me wonder if it'd be possible to also add a component of "belief" to the system. The more you believe the more it affects the player/s or the more it -looks- like it affects the player/s. ;)

My hint about other planets is not meant to be pushing in that direction but is just a natural extension of the concept of dimensions from Minecraft. I imagine at some point this project would move in that direction too. Once dimensions are implemented, using them to go to other planets would mostly require world-generation/game-design effort rather than serious programming effort. So, I hope nobody took it as "I want to go in that direction". It's more of a "It'd be nice to go in that direction"! :rolleyes:

In this context, faking a solar system instead of rendering one would certainly be better for performance. But I suspect that the system can be made flexible enough to allow the user to set a reasonable level of detail for his/her machine, or simply to load one module or another. E.g. statically or semi-statically textured quads for planets for the low end, and actual spheres with some detail for the high end. What's the current average rendered triangle count? Can a few hundred or thousand triangles be added to it without serious performance loss?

Concerning procedural clouds, since you mentioned them, I'm keeping them out of the picture because I see them as something very distinct and potentially to be handled in-world [like|better than] Minecraft does, rather than through the skysphere. Of course, with the idea of submodules effectively being conceptually concentric layers in the skysphere, a cloud submodule would be possible, with anything from a textured sphere to 3D procedural clouds if your machine has the muscle. Better to keep the focus on celestials at this stage though, there's already enough to think about for me!

Immortius: I managed to use an inappropriate term (rendering context, which is about windows and viewports rather than layers, I gather) but you still managed to understand what I meant! Pheeeew! :oops: Strictly speaking, sometimes it'd be useful to render the same scene from the same camera again using different attributes or a different list of objects. E.g. a star/deep sky layer can be rendered by the exact same camera as the player's, it just needs to be rendered before the world and before the objects in the solar system. Similarly, an augmented reality overlay would be rendered by the same camera again, but after everything else with the exception of the HUD. For the solar system solution I mentioned, a different camera would indeed be needed, as it would be best positioned on the surface of a planet as it rotates on itself and revolves around the object it is orbiting. This way things such as moon/planet phases and eclipses happen automatically and setting the whole system's characteristics remains intuitive.

In this context I like the idea of tagging objects so that they are rendered by different cameras, each camera rendering one layer. I wonder if it'd be possible to also tag an individual camera to render multiple layers, each characterized by different rendering attributes and object lists. This way the number of cameras would be kept to a minimum, as there wouldn't be copies. Admittedly your solution might be easier to implement though. Also, can shaders help at all with this? Can a compositing shader receive the framebuffers generated for each layer and stack them, or should everything be achieved in OpenGL? I have no idea which would be more reasonable performance-wise. Conceptually a compositing shader might be easier and give more post-processing freedom.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Immortius: I managed to use an inappropriate term (rendering context, which is about windows and viewports rather than layers, I gather) but you still managed to understand what I meant! Pheeeew! :oops: Strictly speaking, sometimes it'd be useful to render the same scene from the same camera again using different attributes or a different list of objects. E.g. a star/deep sky layer can be rendered by the exact same camera as the player's, it just needs to be rendered before the world and before the objects in the solar system. Similarly, an augmented reality overlay would be rendered by the same camera again, but after everything else with the exception of the HUD. For the solar system solution I mentioned, a different camera would indeed be needed, as it would be best positioned on the surface of a planet as it rotates on itself and revolves around the object it is orbiting. This way things such as moon/planet phases and eclipses happen automatically and setting the whole system's characteristics remains intuitive.
I think it would get kind of tricky having multiple passes from different cameras interleaved. Probably simpler to keep each camera for a single rendering pass, and then attach multiple cameras to the player/whatever for different things. The cost of having multiple cameras would be minimal - the real cost is in the rendering.

In this context I like the idea of tagging objects so that they are rendered by different cameras, each camera rendering one layer. I wonder if it'd be possible to also tag an individual camera to render multiple layers, each characterized by different rendering attributes and object lists. This way the number of cameras would be kept to a minimum, as there wouldn't be copies. Admittedly your solution might be easier to implement though. Also, can shaders help at all with this? Can a compositing shader receive the framebuffers generated for each layer and stack them, or should everything be achieved in OpenGL? I have no idea which would be more reasonable performance-wise. Conceptually a compositing shader might be easier and give more post-processing freedom.
It would be nice to have cameras able to render to texture, then that texture could then be used in future rendering. Terasology currently uses a deferred rendering pipeline, which means there are already multiple framebuffers in the mix - I'm not sure how this would all fit together off the top of my head.
 

manu3d

Active Member
Contributor
Architecture
Whoopsie. I got a bit inspired on this one. In the next few days I should have one diagram and 7-8 (A4) pages of draft design text to publish (I currently have one diagram and 5 pages). I can imagine only a few will want to go through it all. No worries: the overview section is the important bit and it's only a couple of pages long. Everything else is more in-depth but optional, and it is divided into easily skipped sections, e.g. stars, planets, aurorae, etc.

Now. Does it make sense to write two long posts here (one overview, one describing individual layers), should I write some pages on the wiki, should I attach some text-documents to a post or... something else?

And yes, of course I'm keeping performance in mind.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Fancy sounding! :)

If it is meant as a point-in-time working draft that'll go obsolete as implementation progresses then I'd say the forum makes sense. Maybe put an overview / summary here then link to a GDoc if it is too big for one post?

Wiki is more for general how tos that remain useful over time. So any sort of instruction manual could go there, but better to keep that in code IMHO so it is less likely to go out of date. Code and Javadoc with maybe a small primer in the wiki.

I'd caution against anything too heavy and unwieldy though, better to then start with test-driven development or something so you're coding a framework as you go along. We had somebody build a huge game design document an age ago and it was too big to ever be any use :(
 

manu3d

Active Member
Contributor
Architecture
Fancy sounding!
More like... fancy-able... ;)

The level of actual fanciness is up to the implementation and could vary considerably, e.g. mobile vs PC and according to the user's preferences. Speaking of which, I must write a couple of posts, one of which will ask about user preferences...

Thanks for your reply!
 

manu3d

Active Member
Contributor
Architecture
Greetings everybody.

Please take a look at the diagram attached to this post first and keep it at hand. It provides a draft overview of a possible Sky System architecture. As you read through the overview section below and perhaps through the Layer Descriptions document, you might want to keep in mind the following issues, as I could use input and leads on them.
  • What should be responsible for rendering a layer and, eventually, the whole sky?
  • What should be responsible for triggering the update of a layer, if necessary?
  • How could the whole system and its components fit within the existing Entity System?
  • Any other ideas/features that should be added/considered?
Now, let's dig into the details.

OVERVIEW

ISky implementations

Responsibilities:
  • Provide accessor methods to manage a list of objects implementing the ISkyLayer interface.
  • Provide accessor methods to retrieve API objects provided by the ISkyLayer implementations. (Events to/from the entity system might replace these entirely.)
  • Provide an EventManager that interested parties can use to publish sky-related events or be notified about them. (Does the entity system already have one?)
  • Request the update of each layer at the appropriate, set frequency.
  • Maintain a register of sky-based lights and their characteristics.
In-depth description:

ISky implementations maintain a list of objects implementing the ISkyLayer interface. Their public methods allow for the addition, insertion, replacement and deletion of layers. Broadly speaking, the first layers in the list get rendered first. For example, a Deep Sky layer featuring fixed, far away objects might be the very first layer in the list and act as the overall background.

(DISCLAIMER: the next two paragraphs might be replaced completely by an entity-based approach, I just need your help to understand how the entity system might help in this context)

Individual ISkyLayer implementations may provide API objects that may be used by portions of the software outside of the Sky System. For example a Star Layer object might provide an API object to let a star blink at a specific time or even go supernova. Or a Planetary Layer API object might provide methods to calculate the time until the next planetary alignment. ISky implementations allow the retrieval of these API objects rather than of the ISkyLayer implementations themselves.

ISky implementations also provide an EventManager that can be used to subscribe to sky-related events or broadcast them. For example, the Planetary layer might broadcast events at the beginning and at the end of an eclipse.

ISkyLayer implementations provide information on how often they need to be updated, if ever. It is the role of the ISky implementations to schedule and trigger the update whenever appropriate. (A rather arbitrary decision. Could/should the ISkyLayer implementations themselves schedule their own updates?)

Finally, ISky implementations maintain a register of sky-based light sources (sun, moon, meteors, etc.) and their characteristics, i.e. position in the sky, color and intensity. These can be consumed by other modules, e.g. to provide lighting to the world the player inhabits.
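To make the responsibilities above concrete, here is a minimal, hypothetical sketch of such a manager in plain Java: an ordered layer list plus a register of sky-based lights. None of these names exist in the codebase; this is only the shape of the idea.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical names throughout; a real version would hook into the engine.
public class SkyManager {
    interface SkyLayer { String name(); }
    // A registered light source: direction as a vector, intensity in 0..1.
    record SkyLight(String name, float[] direction, float intensity) {}

    private final List<SkyLayer> layers = new ArrayList<>();      // first entry = rendered first
    private final Map<String, SkyLight> lights = new HashMap<>(); // "sun", "moon", ...

    void addLayer(SkyLayer layer)               { layers.add(layer); }
    void insertLayer(int index, SkyLayer layer) { layers.add(index, layer); }
    void removeLayer(SkyLayer layer)            { layers.remove(layer); }
    List<SkyLayer> layers()                     { return List.copyOf(layers); }

    void registerLight(SkyLight light)          { lights.put(light.name(), light); }
    Optional<SkyLight> light(String name)       { return Optional.ofNullable(lights.get(name)); }

    public static void main(String[] args) {
        SkyManager sky = new SkyManager();
        sky.addLayer(() -> "deepSky");      // background layer, rendered first
        sky.addLayer(() -> "planets");
        sky.insertLayer(1, () -> "stars");  // slot a star layer between the two
        sky.registerLight(new SkyLight("sun", new float[]{0, -1, 0}, 1.0f));
        System.out.println(sky.layers().size() + " layers, sun registered: "
                           + sky.light("sun").isPresent());
    }
}
```

The light register is what the landscape-rendering code would query each frame to pick up the sun, moon or a passing fireball.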

ISkyLayer implementations

Responsibilities:

  • May be enabled or disabled.
  • Draw the sky layer they are responsible for, querying an ISkyDataProvider implementation where appropriate and using any required art assets.
  • Provide a default update frequency, enforced or refined by the ISky implementation.
  • Respond to update triggers from the ISky implementation they are registered with.
  • May register light sources with the ISky implementation they are registered with.
  • May broadcast or subscribe to events through the ISky implementation they are registered with.
  • May provide API objects so that other software components can interact with them programmatically.
In-depth description:

An ISkyLayer implementation might or might not be enabled. If it is enabled it is updated and rendered. If it is disabled it should have no CPU/GPU footprint and have the smallest possible footprint in RAM.

An ISkyLayer implementation is largely responsible for the “presentation” of some sky-based content, e.g. stars. This might involve creating a 3D scene and rendering it, or perhaps delegating the rendering to another part of the code. Simple implementations might generate content on their own or read it from disk. More complex ones might focus on the presentation aspects and delegate content reading or generation to an ISkyDataProvider implementation.

A layer implementation might be static, requiring rendering only once it is set up. Others might require updating on a regular basis, but not necessarily every frame. Each layer provides a hint-like default value detailing how often it needs updating. The ISky implementation (or the user through it) can override this value for performance or quality reasons, as it is the ISky implementation that triggers the actual update of each layer.

Objects portrayed by a layer might be significant light sources. A significant light source is one that should (if performance allows it) affect the lighting of the world the player inhabits. A sun and a moon are typical significant light sources. A layer can register any significant light source with the ISky implementation, so that the code rendering the landscape can take advantage of them.

A layer implementation might also subscribe to events broadcast through the ISky implementation or broadcast events of its own. It might also provide API objects to interact with it programmatically. (See the disclaimer above: these aspects might be dealt with through the entity system instead.)
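The update-frequency hint and the enabled/disabled behaviour described above could be sketched like this (invented names, plain Java): the manager calls in every tick, but the layer only does real work when its interval has elapsed, and a disabled layer does nothing at all.

```java
// Hypothetical sketch of a layer with an update-frequency hint.
public class StarLayer {
    private boolean enabled = true;
    private Long lastUpdate = null; // null = never updated yet

    long updateIntervalMillis() { return 1000; } // hint: once per second is enough
    void setEnabled(boolean e)  { enabled = e; }

    // Returns true if an update was actually performed this tick.
    boolean maybeUpdate(long nowMillis) {
        if (!enabled) return false; // disabled layers have no CPU footprint
        if (lastUpdate != null && nowMillis - lastUpdate < updateIntervalMillis()) return false;
        lastUpdate = nowMillis;
        // ...recompute star positions, twinkle phases, etc...
        return true;
    }

    public static void main(String[] args) {
        StarLayer stars = new StarLayer();
        System.out.println(stars.maybeUpdate(0) + " " + stars.maybeUpdate(500));
    }
}
```

The manager (or the user through it) could substitute its own interval in place of the hint, which matches the "enforced or refined" wording above.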

ISkyDataProvider implementations

Responsibilities:
  • Provide data to ISkyLayer implementations.
  • May provide initial data only, fully dynamic data, or up-to-date data on request.
  • May provide user-generated data, random data or a mixture of the two.
  • May broadcast or subscribe to events through the ISky implementation they are registered with.
  • May provide API objects so that other software components may interact with them programmatically.
In-depth description:

An ISkyDataProvider implementation is responsible for providing data to one or more ISkyLayer implementations. For example it might provide the dynamic positions and other characteristics of the celestial bodies in a planetary system.

An ISkyLayer implementation doesn't have to take advantage of an external ISkyDataProvider implementation, nor does it have to implement one. However, it can be advantageous to separate “presentation” and “content”. For example, multiple layers might take advantage of the same data but visualize it in different ways. Or, in a multiplayer scenario, the data provider might be configured by a game master/server admin for gameplay purposes, while the layer-provided visuals are configured by the player for performance reasons.

A data provider might provide only initial data, perhaps because the consumer layer is static or because the ISkyLayer implementation itself will then take care of updating the data as necessary. Alternatively the data provider can provide fully dynamic data, e.g. sharing one or more objects it keeps up to date independently of any consumer object. Finally, it can provide up-to-date data only on request, e.g. returning a delta from a previous state only when interrogated, perhaps via an API call or via an event-triggered response.

A data provider might provide hand-crafted data, e.g. read from a file, received over the network or set via an appropriate user interface. For example, this data could refer to existing objects such as our solar system's planets, their orbits and their characteristics. Alternatively a provider might generate entirely random data, its algorithm more or less customized through user-accessible parameters.

Like the ISkyLayer implementations before it, an ISkyDataProvider implementation may broadcast events or subscribe to them through the ISky implementation. Likewise, it can also publish API objects to allow for programmatic interaction. (The disclaimer above applies to this paragraph as well.)
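As an illustration of the content/presentation split, here is a tiny hypothetical data provider that serves up-to-date orbital angles on request, from hand-crafted period data. The names and the simple circular-orbit formula are purely illustrative assumptions; a layer would only ask "where is this body now?" and never own the orbital math itself.

```java
import java.util.Map;

// Hypothetical provider: owns the orbital data and math, nothing visual.
public class PlanetaryDataProvider {
    // Hand-crafted data, e.g. read from a file or set via a UI: orbital periods in game-days.
    private final Map<String, Double> periods = Map.of("mercury", 88.0, "earth", 365.0);

    // Up-to-date data on request: orbital angle (radians) as a pure function of game time,
    // assuming circular orbits for simplicity.
    double angleOf(String body, double gameDay) {
        double period = periods.get(body);
        return 2 * Math.PI * (gameDay % period) / period;
    }

    public static void main(String[] args) {
        PlanetaryDataProvider provider = new PlanetaryDataProvider();
        System.out.println("earth at day 182.5: " + provider.angleOf("earth", 182.5));
    }
}
```

Two different layers (a fancy 3D one and a cheap textured-quad one) could consume the same provider and visualize the same angles differently, which is exactly the multiplayer/performance scenario above.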

An in-depth description of each layer mentioned in the diagram and cursorily mentioned in the text above is available here as a Google document. Anybody with the link can comment on specific paragraphs directly on it. Just make sure to sign your contribution somehow. Alternatively, you can provide general feedback here in this thread. As this document is 4.5 pages long you might want to focus on one or two layers that interest you or haven't received much feedback yet.

That's all for now! Looking forward to your thoughts. Meanwhile I'll continue my investigations on how to make it all (or at least in part) happen.

Ciao!

Manu

 

Attachments

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Nice write-up, manu3d ! And good news: I think almost everything needed to support all that is already available, making it a lot less work to get started. You don't actually need to get anywhere near that deep into the architecture to do something awesome, even if it looks very fundamental / low-level :)

Immortius can provide much better details here, but I'm pretty sure you'll be looking at integrating heavily with the ES for pretty much everything (did you read the ES intro in the wiki yet? I forgot). I'll take a stab at giving some starting feedback. Your diagram will likely translate to something like this:
  • Sky + ISky -> SkySystem - iterates over different entities with SkyComponent and renders them appropriately (in order and everything) onto the Skysphere. We don't need ISky as the SkySystem will extend or implement one of the existing ES classes/interfaces. I'm not sure how the existing Skysphere class would relate - this whole thing may replace it entirely.
  • ISkyDataProvider -> SkyComponent - a pure data container with no logic, contains details on what to render, how
  • ISkyLayer -> A .prefab that defines a single sky layer entity (the Comet layer should use texture x with any special instructions y in layer order z) - so all the data that gets saved into an instance of SkyComponent, and processed per pass in SkySystem
  • Events -> Existing ES for sure
  • Solar system simulation stuff -> CelestialSystem + CelestialComponent, that together handle logic + data for everything being simulated in a solar system but not how to render anything at all. Probably the SkyComponents would base their characteristics on the backing Celestials through event-based communication
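A toy model of that mapping might help (all names hypothetical, and far simpler than Terasology's real entity system): components are pure data attached to entities, and the "SkySystem" simply iterates over whatever entities currently carry a SkyComponent, in layer order.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Far simpler than the real ES; this only illustrates the shape of the mapping.
public class MiniEs {
    static class Entity {
        final Map<Class<?>, Object> components = new HashMap<>();
        Entity with(Object c)      { components.put(c.getClass(), c); return this; }
        <T> T get(Class<T> type)   { return type.cast(components.get(type)); }
        boolean has(Class<?> type) { return components.containsKey(type); }
    }

    // Pure data, no logic: what to render and in which layer order.
    record SkyComponent(String texture, int layerOrder) {}
    // Simulation data, independent of rendering.
    record CelestialComponent(double orbitRadius) {}

    // "SkySystem": renders every entity that currently carries a SkyComponent.
    static List<String> renderSky(List<Entity> entities) {
        return entities.stream()
                .filter(e -> e.has(SkyComponent.class))
                .sorted(Comparator.comparingInt((Entity e) -> e.get(SkyComponent.class).layerOrder()))
                .map(e -> e.get(SkyComponent.class).texture())
                .toList();
    }

    public static void main(String[] args) {
        Entity comet = new Entity().with(new SkyComponent("comet.png", 2))
                                   .with(new CelestialComponent(40.0));
        // Simulated but not rendered: a celestial with no SkyComponent.
        Entity farMoon = new Entity().with(new CelestialComponent(99.0));
        System.out.println(renderSky(List.of(comet, farMoon))); // [comet.png]
    }
}
```

Note how removing the SkyComponent from an entity hides it from the sky while its CelestialComponent keeps being simulated, which is exactly the enable/disable idea.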
We don't preface classes with their type (ISomething being an Interface), by the way, as per our code standard, IDEs highlight them well enough :)

The split between SkySystem (render stuff) and CelestialSystem (simulate celestial locations and travel) is a classic case for the ES. They might be in the same module, but don't necessarily have to be. You can render stuff on the Sky that isn't a Celestial (high flying birds for ambiance, for instance - speaking of, somebody should implement that), and you could have a Celestial that isn't rendered (a moon orbiting a planet too far away to be visible)

Sky layers could be enabled/disabled as simply as removing the SkyComponent on the relevant entity. For a goofy example: if the CelestialSystem suddenly decides comets are invisible but still exist (via divine intervention or a changed graphics option), you simply remove the SkyComponent from the comet entity; it retains its CelestialComponent and its location continues to be simulated in case it should be made visible again later. In slightly more detail, you may actually also have some sort of "Renderable" component involved, rather than relying on SkyComponent alone, but I'm not sure of the details there.

Likewise multiple comets could be rendered separately simply by having several comet entities

Timing options are already available in the ES, such as having a CelestialSystem that implements UpdateSubscriberSystem with an update(float delta) method for per-tick logic, although we could use better arbitrary timing support (only execute once per second, once per minute, etc)
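The "only execute once per second" idea could be sketched like this (hypothetical names, not engine code): accumulate the per-tick deltas and only do real work each time a full interval has elapsed.

```java
// Hypothetical sketch: a per-tick update(delta) that accumulates elapsed time
// and only performs its real simulation work once per configured interval.
public class ThrottledCelestialSystem {
    private final float intervalSeconds;
    private float accumulated = 0f;
    int simulationSteps = 0; // counts how many real updates have run

    ThrottledCelestialSystem(float intervalSeconds) {
        this.intervalSeconds = intervalSeconds;
    }

    // Called every tick, in the spirit of UpdateSubscriberSystem.update(float delta).
    void update(float delta) {
        accumulated += delta;
        while (accumulated >= intervalSeconds) {
            accumulated -= intervalSeconds;
            simulationSteps++; // ...advance planetary positions here...
        }
    }

    public static void main(String[] args) {
        ThrottledCelestialSystem system = new ThrottledCelestialSystem(1.0f);
        for (int i = 0; i < 7; i++) system.update(0.5f); // 3.5 simulated seconds
        System.out.println("steps run: " + system.simulationSteps);
    }
}
```

The while loop (rather than a single if) means the simulation catches up after a long frame instead of silently dropping intervals.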

Lighting ... that one I'll leave entirely to Immortius as he's rewriting that these days anyway, to go with vertically stacked chunks :)

I'm sure there's more feedback available but I'll take a break for now. In short: you can happily focus more on content (celestial simulation, skysphere texturing, etc) than architecture, as most of that is already in place and very powerful :geek:

Let me know if there are any simple follow-up questions!

Edit: On re-reading this I can see potential confusion in how I describe a sky layer entity vs individual celestial entities (one or more comet entities). I'm not sure exactly which approach would be used, maybe celestial entities would be baked together in some fashion to then only render together once (per layer). Details details! I was mainly trying to put together a primer, some stuff will be inaccurate :)
 

manu3d

Active Member
Contributor
Architecture
(did you read the ES intro in the wiki yet? I forgot)
Yes, a few times already. I am familiar with the concept of components as I've toyed with Unity3D enough to understand the basics of this approach. It's a good approach I think, but I haven't had the chance to use it long enough to appreciate it in full nor to wrap my mind completely around it. More specifically, the intro you are referring to could use some more examples, for a reader to generalize enough and reconnect with the extensive but fairly abstract descriptions of entities, components and systems. For example, as I was reading it with the draft high-level structure of the Sky System in mind, I couldn't quite figure out how coarse or how granular the entities could be. Should a layer be an entity? I thought it might. Should a planet be an entity? I thought: probably. Should (one of many) stars be an entity? Not so sure, as their potential numbers might impact performance. Thanks for your reply though. It is already clearing things up.

  • Sky + ISky -> SkySystem - iterates over different entities with SkyComponent and renders them appropriately (in order and everything) onto the Skysphere. We don't need ISky as the SkySystem will extend or implement one of the existing ES classes/interfaces. I'm not sure how the existing Skysphere class would relate - this whole thing may replace it entirely.
I understand. Is there a complete/functioning system that has been developed in the context of the ES and that people are happy with, so that I can use it to see how things are done in practice?

  • ISkyLayer -> A .prefab that defines a single sky layer entity (the Comet layer should use texture x with any special instructions y in layer order z) - so all the data that gets saved into an instance of SkyComponent, and processed per pass in SkySystem
I had to re-read the description of prefab for this one. A prefab, if I understand correctly and describing it with different words, is a bit of a prototype, from which entities are generated by cloning, with the option for customization as necessary. Correct?
Entities defined by the prefab have components attached, and you are saying that they'd have a SkyComponent. And the SkySystem would have a list of all entities that have a SkyComponent attached to iterate over. Can this be more hierarchical, so that when it's time to iterate over planets the list of stars doesn't have to be iterated over? Perhaps I'm misunderstanding something.

  • ISkyDataProvider -> SkyComponent - a pure data container with no logic, contains details on what to render, how
  • Solar system simulation stuff -> CelestialSystem + CelestialComponent, that together handle logic + data for everything being simulated in a solar system but not how to render anything at all. Probably the SkyComponents would base their characteristics on the backing Celestials through event-based communication
Hmmm... I need to wrap my mind around this bit. Hopefully Immortius will provide some additional considerations/description on this.

We don't preface classes with their type (ISomething being an Interface), by the way, as per our code standard, IDEs highlight them well enough :)
Neither do I, but I'm used to other languages where interfaces are handled differently while I now remember that in Java they are treated as types, like classes. My apologies, won't happen again.

Timing options are already available in the ES, such as having a CelestialSystem that implements UpdateSubscriberSystem with a update(float delta) method for per-tick logic, although we could use better arbitrary timing support (only execute once per second, once per minute, etc)
It might be useful. E.g. the whole Planetary Layer, or maybe some of its bodies, might be essentially static for seconds at a time. Perhaps it's unnecessary; if it is needed, it will emerge during the optimization phase.
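For illustration, the "only execute once per second" idea could be sketched as a system that accumulates the per-tick delta and only steps its simulation at a fixed interval. The UpdateSubscriber interface below is a minimal stand-in for the engine's UpdateSubscriberSystem, not the actual API:

```java
// Minimal stand-in for the engine's per-tick update interface (assumption:
// the real UpdateSubscriberSystem has more to it than this).
interface UpdateSubscriber {
    void update(float delta); // delta: seconds elapsed since the last tick
}

// Runs its simulation step only once per 'interval' seconds, however often
// update() is called - the "better arbitrary timing support" idea.
class ThrottledCelestialSystem implements UpdateSubscriber {
    private final float interval;  // seconds between simulation steps
    private float accumulated;     // time carried over since the last step
    int stepsRun;                  // counter, exposed for demonstration

    ThrottledCelestialSystem(float interval) {
        this.interval = interval;
    }

    @Override
    public void update(float delta) {
        accumulated += delta;
        while (accumulated >= interval) {  // catch up if we fell behind
            accumulated -= interval;
            step(interval);
        }
    }

    private void step(float dt) {
        stepsRun++;  // advance the mostly-static celestial state by dt here
    }
}
```

With interval = 1.0f, five ticks of 0.5 seconds each would run the simulation step twice.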

Edit: On re-reading this I can see potential confusion in how I describe a sky layer entity vs individual celestial entities (one or more comet entities)- I'm not sure exactly which approach would be used, maybe celestial entities would be baked together in some fashion to then only render together once (per layer). Details details! I was mainly trying to put together a primer, some stuff will be inaccurate :)
And I thank you for that. We don't need to have everything figured out right away. This is a good start. We'll just have a few more rounds of posts about it all and then things will get much clearer. Thank you again.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
No problem, it definitely does take a little time to wrap your head around our different way of doing things - and yes, especially since the doc isn't quite at 100% yet :)

There are tons of working System/Component pairs all over the place, pretty much in any module with code - they're all hosted under the Terasology org on GitHub

One example is the Hunger system which is pretty basic

Prefabs are recipes for entities - you can list components and their initial data values that way. The prefab is the template and each entity generated from it makes up a unique instance that can be changed later. Every time you have two things that are different enough you make two Components, rather than use a hierarchy - so maybe you'd have a PlanetComponent and a StarComponent. Note that you don't even need any data at all in a component, it can simply be used as a tag on an entity to drive logic in systems. Currently everything is iterated through and different systems subscribe to the combination they care about. It may sound like a performance challenge but it is working well so far :)
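The pattern described above - components as pure data (or even empty tags) and systems subscribing to the combination they care about - can be sketched with a toy stand-in entity model. This is illustrative only; the real engine's EntityRef/prefab API differs:

```java
import java.util.*;

// Toy stand-in for an entity: just a bag of components keyed by type.
class Entity {
    private final Map<Class<?>, Object> components = new HashMap<>();

    Entity add(Object component) {
        components.put(component.getClass(), component);
        return this;
    }

    boolean has(Class<?>... types) {
        for (Class<?> t : types) {
            if (!components.containsKey(t)) return false;
        }
        return true;
    }
}

// Pure data components; the tags carry no data and merely mark the entity.
class LocationData { double x, y, z; }
class PlanetTag { }
class StarTag { }

class SkyQuery {
    // A system subscribes to the combination of components it cares about.
    static List<Entity> withAll(List<Entity> all, Class<?>... types) {
        List<Entity> matched = new ArrayList<>();
        for (Entity e : all) {
            if (e.has(types)) matched.add(e);
        }
        return matched;
    }
}
```

A planet system would iterate over entities with a LocationData plus a PlanetTag, a star system over those with a StarTag, without either needing a class hierarchy.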
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
I have to admit I didn't really grok Entity Systems for quite some time myself - there probably needs to be an online course or tutorial focused on the matter, although I'm not really sure the best way to really teach this sort of thing. I'll try to explain how I'd approach bits of this - specifically the solar system simulation - given your requirements. This will leave questions around more complex stuff like the starfield and handling eclipses, but hopefully that will help clarify how to leverage ES.

Some general things to keep in mind about entity systems: Components are both the sole domain of persistent data, and a determinant of the behavior of an entity. The ideal is to have a palette of components that each define a behavior, and at the very least a core set that are highly reusable. Interfaces and inheritance give way to composition - if you want something to behave differently, you create a new component (and system) for that behavior.

Existing components that are relevant for this are:
  • LocationComponent - This is everything about the location of the entity, including rotation, and whether it is "attached" to another entity (exists in its frame of reference)
  • MeshComponent - This makes the entity renderable with a mesh
  • BlockParticleEffectComponent - This makes the entity emit particles (we probably need to work on this though)
  • Possibly LightComponent - This makes the entity emit a dynamic light, either a point light or directional.
  • AutoCreateComponent - This causes a prefab to automatically be created if there is not an existing entity with that prefab on startup.
These should already handle a lot of what you need - positioning elements, rendering of sky elements, and such. Let's also assume proper entity driven multi-camera support. With multiple cameras and filtering of what entities are rendered per camera, you could use different 'tags' for entities belonging to different layers to turn them on and off. You could have multiple camera entities rendering different layers to ensure correct ordering. A camera would merely be an entity with a LocationComponent and CameraComponent, although it could have other things.

So already you can achieve a lot with a skysphere camera entity, and an entity for each visible object, including an entity for the outer skysphere (an inside-out sphere mesh).

Focusing on the solar system simulation for a moment. With location component you can already create a nice, hierarchical model for the solar system out of a set of nested entities with attached locations, with Mesh components added to the entities that actually hold the celestial bodies to render them. To actually move these entities you will probably want two components: OrbitComponent and RotateComponent (although with some clever attachment of entities you could just have a RotateComponent - a circular orbit is merely a rotation in the correct frame of reference). These components, along with LocationComponent, would trigger the OrbitSystem and RotateSystem respectively to move the entities. If some other movement style was desired, another movement component/system combo could be created. I am assuming here that we're not doing full newtonian gravitational motion simulation, as that is overkill.
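The "a circular orbit is merely a rotation in the correct frame of reference" point can be made concrete with a toy model (not engine code): each body stores its orbit relative to its parent, and world position is composed up the parent chain, which is essentially what nested LocationComponents would do:

```java
// Toy model: each body orbits its parent on a circle, which is just a
// rotation in the parent's frame of reference. Composing positions up the
// parent chain mirrors attached/nested LocationComponents.
class Body {
    final Body parent;        // null for the root (e.g. the sun)
    final double radius;      // orbit radius around the parent
    final double angularVel;  // radians per second
    double angle;             // current orbit angle in radians

    Body(Body parent, double radius, double angularVel) {
        this.parent = parent;
        this.radius = radius;
        this.angularVel = angularVel;
    }

    // What an OrbitSystem/RotateSystem would do per tick.
    void update(float delta) {
        angle += angularVel * delta;
    }

    // World-space position: local circular offset plus the parent's position.
    double[] worldPos() {
        double x = radius * Math.cos(angle);
        double z = radius * Math.sin(angle);
        if (parent == null) {
            return new double[]{x, 0, z};
        }
        double[] p = parent.worldPos();
        return new double[]{p[0] + x, 0, p[2] + z};
    }
}
```

A moon attached to a planet then orbits correctly for free: updating the planet moves the moon's frame of reference along with it, with no moon-specific code.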

Then by attaching a camera entity to the main planet (which the player is on) you will get the correct view for the current position and rotation of the main planet.

Does this help give a feel for the more ES approach? Notably, out of all of this there are no classes relating to sky or planet or anything like that - entities are planets because of how they are set up and how they behave. This is where the true potential of ES lies - in creating new things by reusing and mixing up existing behaviors.

Probably the weak end of things is how the solar system is set up. Ideally you could have a prefab that specifies the entire solar system, and then mods could extend or override this, but at the moment prefabs don't support nested entities - this is an area that needs future work. So instead systems will need to set up the various sky entities. This could be driven from an auto-created prefab with a SkyComponent, which could also contain some key information on the sky and references to the various entities involved in the sky. Discovery of the various entities to allow modification and injection in the correct place in the sky model would also be important, and this is where further components may be useful.

  • Sky + ISky -> SkySystem - iterates over different entities with SkyComponent and renders them appropriately (in order and everything) onto the Skysphere. We don't need ISky as the SkySystem will extend or implement one of the existing ES classes/interfaces. I'm not sure how the existing Skysphere class would relate - this whole thing may replace it entirely.
I feel that there should not be a need for a custom renderer for the sky - rather we should move to support the necessary features for sky rendering from the existing Mesh, Particle and SkeletalMesh components, and a new Camera component. I do agree that we would be looking at removing the existing Skysphere class.

  • ISkyDataProvider
This would just be usage of the entity model - the entity model is essentially the data model for pretty much everything.

  • ISkyLayer -> A .prefab that defines a single sky layer entity (the Comet layer should use texture x with any special instructions y in layer order z) - so all the data that gets saved into an instance of SkyComponent, and processed per pass in SkySystem
I'd suggest the concept of a sky layer gets replaced by a camera filter tag.

  • Solar system simulation stuff -> CelestialSystem + CelestialComponent, that together handle logic + data for everything being simulated in a solar system but not how to render anything at all. Probably the SkyComponents would base their characteristics on the backing Celestials through event-based communication
I would tend towards autonomous behavior (rather than centrally driven) where possible - it lowers coupling and makes it easier to introduce new behavior. If a module wants to add elliptic orbits, for instance, this is far easier if you can slap a new movement-driving component on Pluto rather than have to override and replace a central system. It also means multiple mods can add multiple behaviors - multiple mods cannot all override the same central system.

Lighting ... that one I'll leave entirely to Immortius as he's rewriting that these days anyway, to go with vertically stacked chunks :)
I'll touch on this briefly. Effectively there are two bits of data driving this - a single color for the current value of celestial light (sun or moon or otherwise), and a single direction for that light. I don't know what it would take to support multiple directions, or if that is even sensible with the current approach to things like shadow casting and light shafts - that would be a question for begla. So the main work for lighting would be calculating a value given the current celestial state, and putting it somewhere to be picked up by world rendering - possibly the world entity; I will need to think a little about this.

Should (one of many) stars be an entity? Not so sure, as their potential numbers might impact on performance. Thanks for your reply though. It is already clearing things up.
Working with individual stars is likely to be problematic regardless of how it is approached. Possibly an entity with a StarFieldComponent, that stores information on stars, which drives the creation of a texture that then feeds into a MeshComponent to render the field. But not sure it is worth it for the gameplay value it would provide. It might be better to have a simple prebaked background starfield, and particle effects or billboards for important constellations.
 

manu3d

Active Member
Contributor
Architecture
I'll try to explain how I'd approach bits of this - specifically the solar system simulation - given your requirements. This will leave questions around more complex stuff like the starfield and handling eclipses, but hopefully that will help clarify how to leverage ES.
Thanks Immortius. You and Cervator are certainly helping to get on the right track, which is most of what I need. But ultimately it's ok that I'll need to do some homework too!

Some general things to keep in mind about entity systems: Components are both the sole domain of persistent data, and a determinant of the behavior of an entity.
Does this mean that -Systems- do not hold any data? Or to put it another way, where does one hold system-wide data, or how do you configure the parameters of a system? I think this is something you briefly mention below.

Existing components that are relevant for this are:
Thank you for this list. Now I feel I have some landmarks on my code map! :)

These should already handle a lot of what you need - positioning elements, rendering of sky elements, and such. Let's also assume proper entity driven multi-camera support. With multiple cameras and filtering of what entities are rendered per camera, you could use different 'tags' for entities belonging to different layers to turn them on and off. You could have multiple camera entities rendering different layers to ensure correct ordering. A camera would merely be an entity with a LocationComponent and CameraComponent, although it could have other things.
This sounds good to me. I can see a major challenge in sorting out where exactly these cameras will render to. I.e. will they all render to the same, primary output buffer, back to front? Will they render to separate frame buffer objects that are then stacked into the output buffer? Unless I'm missing a very simple way to do it, this is fairly low-level OpenGL stuff that might prove the trickiest issue for me to understand. On the other hand, once understood, it will probably be easy to make things easier for everybody else.


So already you can achieve a lot with a skysphere camera entity, and an entity for each visible object including an entity for the outer skysphere (an insideout sphere mesh).
Understood. Can you confirm that when you write "skysphere camera entity" you imply that the player-associated camera entity will remain a very different one? Or should I be thinking about a very generic camera component, that could potentially also be attached to the player?

Focusing on the solar system simulation for a moment. With location component you can already create a nice, hierarchical model for the solar system out of a set of nested entities with attached locations, with Mesh components added to the entities that actually hold the celestial bodies to render them. To actually move these entities you will probably want two components: OrbitComponent and RotateComponent (although with some clever attachment of entities you could just have a RotateComponent - a circular orbit is merely a rotation in the correct frame of reference). These components, along with LocationComponent, would trigger the OrbitSystem and RotateSystem respectively to move the entities. If some other movement style was desired, another movement component/system combo could be created. I am assuming here that we're not doing full newtonian gravitational motion simulation, as that is overkill.
A few things on this paragraph.
1) Let me reassure you I'm not thinking of Newtonian gravity for planets or other celestial bodies. :) In a dream-like future it'd be nice to think about it in the context of forces affecting vehicles traveling from one body to another. The equations are fairly straightforward but navigation, I suspect, would be quite challenging (and such an interesting problem!!).
2) I now have a much clearer idea of how this would be structured, thank you.
3) You mention that the components -trigger- the systems. I would normally think that systems drive components but in this case sounds like it's the other way around. Can you elaborate on this specifically?
4) Circular orbits are very tempting as they are so simple, and I understand what you're saying about simply rotating the frame of reference. From a purely astronomical perspective, elliptical orbits are what make some eclipses total and some annular, as the changing distance varies the visual size of the moon relative to the sun. Perhaps I'll keep the two components (orbit/rotate) separate, implement the OrbitComponent with circular orbits, and only at a later stage extend it to more generic, elliptical trajectories.
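For concreteness, the total-vs-annular point comes down to the orbital distance varying between a(1-e) at perigee and a(1+e) at apogee, with apparent angular size scaling as 1/distance. A small sketch with approximate lunar figures (illustrative constants, not game data):

```java
// On an ellipse with semi-major axis a and eccentricity e, distance varies
// between a*(1-e) at perigee and a*(1+e) at apogee; for small angles the
// apparent diameter of a body scales as (physical diameter / distance).
class EllipseDemo {
    static double perigee(double a, double e) { return a * (1 - e); }
    static double apogee(double a, double e)  { return a * (1 + e); }

    static double apparentSize(double diameter, double distance) {
        return diameter / distance;  // radians, small-angle approximation
    }
}
```

With approximate lunar figures (a ≈ 384,400 km, e ≈ 0.0549, diameter ≈ 3,474 km), the apparent-diameter ratio between perigee and apogee is (1+e)/(1-e) ≈ 1.116, i.e. the moon looks roughly 11-12% larger at perigee - enough to tip an eclipse between total and annular.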

Does this help give a feel for the more ES approach?
Yes, very much.

Notably, out of all of this there are no classes relating to sky or planet or anything like that - entities are planets because of how they are set up and how they behave. This is where the true potential of ES lies - in creating new things by reusing and mixing up existing behaviors.
A crucial recap, this one, I feel. I can now see how entities behaving like planets "emerge" from a composition of behaviors. Might be good to earmark this whole thread somehow for future reference, so that it can provide inspiration for a wiki page further detailing the ES. Relating the existing classes to three or four wildly different sets of examples, from the sky system to a combat system, by way of, say, a genetics system, might provide enough conceptual points for a reader to extrapolate from them.

So instead systems will need to set up the various sky entities. This could be driven from an auto-created prefab with a SkyComponent, which could also contain some key information on the sky and references to the various entities involved in the sky. Discovery of the various entities to allow modification and injection in the correct place in the sky model would also be important, and this is where further components may be useful.
This was a bit abstract for me, but I think the crucial question is the one I asked above: where does a system store its parameters if only components store persistent data?

I feel that there should not be a need for a custom renderer for the sky
Could you define "custom renderer"? Or are you referring to the concept of FBO-rendered layers to be eventually stacked into the final image sent to the screen?

rather we should move to support the necessary features for sky rendering from the existing Mesh, Particle and SkeletalMesh components, and a new Camera component.
No worries. I'll reuse as much as possible and I'll ask before I write something from scratch.

I'd suggest the concept of a sky layer gets replaced by a camera filter tag.
When you say "tag" are you talking about a simple string/id that is sought by an iterator to figure out what is rendered with the "current" camera, or are you talking about some kind of component that establishes a one-to-many relationship with some kind of Tag instance, holding the list of all entities connected to it and a reference to the camera that is supposed to render them?

Working with individual stars is likely to be problematic regardless of how it is approached. Possibly an entity with a StarFieldComponent, that stores information on stars, which drives the creation of a texture that then feeds into a MeshComponent to render the field. But not sure it is worth it for the gameplay value it would provide. It might be better to have a simple prebaked background starfield, and particle effects or billboards for important constellations.
I think we are on the same page. The naked eye in perfect conditions can theoretically see approximately 6000 stars (only about half of which are above the horizon at any time). For visual/artistic reasons we might raise or even lower that number, but the bulk of those stars wouldn't be interactive, or at most they'd be treated as a single entity, and they might even end up in a texture. Only a small portion, perhaps only the brightest, would allow individual interaction. A list I found with all stars up to magnitude 2.5 is 91 stars long, Sirius being the brightest and Polaris being in the middle of the list. Perhaps that gives us an idea of the numbers we might want to be able to handle. That being said, being capable of interacting with, say, 91 stars doesn't mean we need to interact with 91 stars at once. Perhaps it just means the capability to instantiate some kind of visual effect and place it in the right position, to be de-instantiated as soon as it is no longer needed.
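If a magnitude cutoff like 2.5 were used to pick the interactive stars, converting magnitudes to relative brightness follows Pogson's scale: a 5-magnitude difference is exactly a factor of 100 in brightness, and lower magnitude means brighter. A minimal sketch (class and method names are illustrative):

```java
// Pogson's scale: brightness ratio between two magnitudes is
// 10^(0.4 * (mRef - m)), so 5 magnitudes = a factor of exactly 100.
class StarBrightness {
    // Brightness of a star of magnitude m relative to one of magnitude mRef.
    static double relativeBrightness(double m, double mRef) {
        return Math.pow(10, 0.4 * (mRef - m));
    }
}
```

So a magnitude-1 star is 100 times brighter than a magnitude-6 star at the naked-eye limit; a magnitude cutoff is then also a brightness cutoff for deciding which stars deserve their own entity versus ending up baked into the background texture.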

Again, thank you Immortius for your reply. Very useful.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Does this mean that -Systems- do not hold any data? Or to put it another way, where does one hold system-wide data, or how do you configure the parameters of a system? I think this is something you briefly mention below.
Generally. Certainly systems don't have any persistent data. Systems can do things like cache relevant entities or map entities to some internal representation (the physics system does this to hook to jBullet), but this is purely processing related.

As far as configuring systems... in the first instance I would suggest that might be the wrong way of looking at things. You have some "thing" - the sky - that needs to be configured and to behave differently based on that configuration. Then the sky should be an entity, and its configuration is one or more components. And the behavior of those components is enacted through systems. There are probably exceptions to this, however.

In general I would say:

* Purely clientside application configuration, like keybinds, graphics settings and such, which have no impact on gameplay as such and can be changed at any time. It makes sense to me that this would live in our configuration framework, outside of the ES. There's no support for modules adding to this yet, as previously mentioned. Alternatively it could be mapped onto an entity, if configuration took the form of one or more components, but that probably wouldn't add anything.

* Serverside configuration that needs to be replicated to clients. This would generally be pulled into the ES, as that is what provides networking support - although network events could also be used to transmit it. Probably similar to the below, except you wouldn't persist the entity and would instead recreate it each run.

* Game/Mod configuration that is part of the game, so different games have different settings. Since this is persisted as part of the game, it should be components on some central "singleton" entity (a Game entity?). These components would also be part of the configuration framework so selections are saved for the next time a game is created, though that is mostly a convenience.

* Anything else, definitely in the ES. This also leads to flexibility when things might not necessarily be singletons - is there a single sky? Or does each world have a different sky?

This sounds good to me. I can see a major challenge in sorting out where exactly these cameras will render to. I.e. will they all render to the same, primary output buffer, back to front? Will they render to separate frame buffer object that are then stacked into the output buffer? Unless I'm missing a very simple way to do it this is fairly low-level opengl stuff that might prove the trickiest issue to understand for me. On the other hand, once understood it will probably be easy to make things easier for everybody else.
This is definitely getting down into the weeds. From an API perspective, the end result should be a composite scene composed from all of the cameras (that aren't specifically rendering to texture) displayed on the screen. There may be a property on the cameras determining which ones clear the depth buffer. Since we use a deferred rendering approach, I imagine the implementation would involve multiple passes of each camera rendering different information to a number of buffers before they are combined at the end - begla would have a better idea how this fits into the current rendering implementation.

Understood. Can you confirm that when you write "skysphere camera entity" you imply that the player-associated camera entity will remain a very different one? Or should I be thinking about a very generic camera component, that could potentially also be attached to the player?
The latter is what I envision.

3) You mention that the components -trigger- the systems. I would normally think that systems drive components but in this case sounds like it's the other way around. Can you elaborate on this specifically?
What I mean here is the presence of the necessary components on an entity is what causes those systems to act on them - so a weaker meaning of trigger. Actual implementation would probably be those systems iterating over all entities with the required combination of components each frame and updating their position/rotation/etc - so yes, driving them.

Conceptually I think of components as a contract - if you give an entity a RotateComponent, that means the entity should rotate. The system is the implementation of this.

4) Circular orbits are very tempting as they are so simple, and I understand what you're saying about simply rotating the frame of reference. From a purely astronomical perspective, elliptical orbits are what make some eclipses total and some annular, as the changing distance varies the visual size of the moon relative to the sun. Perhaps I'll keep the two components (orbit/rotate) separate, implement the OrbitComponent with circular orbits, and only at a later stage extend it to more generic, elliptical trajectories.
Fair enough. Basically comes down to how you want it to work and what behavior you want to support.

A crucial recap, this one, I feel. I can now see how entities behaving like planets "emerge" from a composition of behaviors. Might be good to earmark this whole thread somehow for future reference, so that it can provide inspiration for a wiki page further detailing the ES. Relating the existing classes to three or four wildly different sets of examples, from the sky system to a combat system, by way of, say, a genetics system, might provide enough conceptual points for a reader to extrapolate from them.
Could you define "custom renderer"? Or are you referring to the concept of FBO-rendered layers to be eventually stacked into the final image sent to the screen?
By renderer I mean a system working directly with opengl and other low-level rendering. We have one for MeshComponents, one for SkeletalMeshComponents, and one for particle effects. I think this, combined with the UI, covers most rendering needs - especially considering the support for custom shaders.

When you say "tag" are you talking about a simple string/id that is sought by an iterator to figure out what is rendered with the "current" camera, or are you talking about some kind of component that establishes a one-to-many relationship with some kind of Tag instance, holding the list of all entities connected to it and a reference to the camera that is supposed to render them?
The former - or more correctly a set of simple strings. I imagine the camera and/or rendering systems would be set up to group entities by tags to allow the relevant entities to be quickly identified for any given camera, as part of their implementation. This doesn't need to be exposed as part of the data model.
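A hedged sketch of that grouping idea - entities registered under simple string tags, and cameras declaring which tags they render. The names here are illustrative, not the engine's actual API:

```java
import java.util.*;

// Toy renderer registry: entities are grouped by their tag strings up front,
// so finding what a given camera should draw is a cheap lookup rather than a
// scan over every entity in the world.
class TaggedRenderer {
    private final Map<String, List<String>> entitiesByTag = new HashMap<>();

    void register(String entityId, String... tags) {
        for (String tag : tags) {
            entitiesByTag.computeIfAbsent(tag, k -> new ArrayList<>()).add(entityId);
        }
    }

    // Everything a camera filtering on the given tags should draw.
    List<String> visibleTo(Set<String> cameraTags) {
        List<String> visible = new ArrayList<>();
        for (String tag : cameraTags) {
            visible.addAll(entitiesByTag.getOrDefault(tag, Collections.emptyList()));
        }
        return visible;
    }
}
```

A skysphere camera filtering on a "sky" tag would then pick up the sun, moon and comets while ignoring world geometry, and vice versa for the main camera.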
 