Associating RenderableWorlds to Cameras

manu3d

Active Member
Contributor
Architecture
Currently we have only one camera: the one associated with the player. It is on my todo list to enable the renderer to switch from one camera to another, paving the way for the concept of multiple cameras. However, as the code currently stands, switching cameras would likely trigger a partial or complete regeneration of the world around the new active camera. From a user perspective the world would effectively disappear and then reappear, as it does on startup.

To prevent this, I'm considering associating a RenderableWorld instance with each camera. Implementations of this interface keep a number of chunk lists and chunk queues up to date based on the camera's position, and the renderer uses those to display the world on screen. Switching cameras would then simply switch the RenderableWorld instance the renderer chews on. And as the RenderableWorld instances would persist between switches, switching cameras would not trigger regeneration: a renderable world for that camera would always be there.
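To make the idea concrete, here is a minimal sketch of the camera-to-RenderableWorld association, with stand-in types (`RenderableWorldRegistry` and its string camera ids are hypothetical, not the real engine API). The point is only that each camera's instance is created once and survives switches:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each camera keeps its own persistent RenderableWorld,
// so switching the active camera swaps state instead of regenerating the world.
class RenderableWorldRegistry {
    // Stand-in for the real RenderableWorld interface; here it only records
    // which camera it was built around.
    static class RenderableWorld {
        final String cameraId;
        RenderableWorld(String cameraId) { this.cameraId = cameraId; }
    }

    private final Map<String, RenderableWorld> worldsByCamera = new HashMap<>();
    private RenderableWorld active;

    // Returns the persistent instance for this camera, creating it only once.
    RenderableWorld activate(String cameraId) {
        active = worldsByCamera.computeIfAbsent(cameraId, RenderableWorld::new);
        return active;
    }

    RenderableWorld getActive() { return active; }
}
```

Switching back to a previously active camera returns the very same instance, which is what avoids the disappear/reappear effect.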

Of course this would take more memory, as more regions/chunks of the world would be loaded at the same time. This could perhaps be partially mitigated by automatically reducing a camera's ViewDistance to some shorter range when it becomes inactive, reinstating it when the camera becomes active again. This way a number of chunks would be unloaded, while those in the immediate vicinity of the deactivated camera would remain available. Also, at this stage I'm not sure how multiple instances of RenderableWorld would really work together. Perhaps a higher-level entity would be necessary to prevent disposal of a chunk no longer needed by one instance while other instances are still using it.
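That higher-level "don't dispose while someone still uses it" entity is essentially reference counting. A toy sketch, assuming a hypothetical tracker keyed by chunk id (not an existing engine class):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical higher-level chunk bookkeeper: a chunk may be disposed only
// when no RenderableWorld instance holds a claim on it anymore.
class SharedChunkTracker {
    private final Map<Long, Integer> refCounts = new HashMap<>();

    // A RenderableWorld calls this when it starts using the chunk with chunkId.
    void retain(long chunkId) {
        refCounts.merge(chunkId, 1, Integer::sum);
    }

    // Returns true if the chunk may actually be disposed (last user released it).
    boolean release(long chunkId) {
        Integer count = refCounts.get(chunkId);
        if (count == null) {
            return true; // unknown chunk: nothing keeps it alive
        }
        if (count <= 1) {
            refCounts.remove(chunkId);
            return true;
        }
        refCounts.put(chunkId, count - 1);
        return false;
    }
}
```

Each RenderableWorld would retain the chunks entering its lists/queues and release those leaving them; actual unloading happens only when release() returns true.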

Thoughts?

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Sounds like a good thing to support, although as usual I can't really speak to the technical details :)

Gets us one step closer to actual cameras too, like closed circuit TV in-game, heh. Or maybe more aptly a far-vision type spell or magic map table.

Florian

Active Member
Contributor
Architecture
This may already be possible - or at least it was at some point - with RelevanceRegionComponents.

E.g., you create an entity with that component at a region you want to have loaded in addition to the player's current position.

Although there were some optimizations recently to primarily create meshes for chunks around the camera.

Since @Immortius created that component, he may be able to give you details about its intended usage.

About the concept in general: do you have a specific usage goal for them? I could see portals as a possible use case, e.g. where the portal shows another part of the world by "simply" rendering 2 cameras at once: the portal content is a camera rendered at the target location of the portal, offset by the player's position relative to the portal. For special effects this remote-location camera could first be rendered into a texture. Very cool would be if you could render the world of a different server as a preview for "server to server" portals. That would make a "Terasology Internet" possible, where you can get from one server to another by walking through portals. I don't think we should implement it now, but I just wanted to give you some ideas about the directions in which this could be made extensible in the future.

Immortius

Lead Software Architect
Contributor
Architecture
GUI
That indeed was the idea with the RelevanceRegionComponent - allowing an entity to cause the world around it to be loaded, for whatever reason it may have.

As far as usages of multiple cameras go, I think the skybox is the big one. Possibly things like render-to-texture of arbitrary scenes.

Skaldarnar

Development Lead
Contributor
Art
World
SpecOps
Some other possible usages have come up in the past, most often very similar to the portal idea. I think we had in-game cameras (e.g., security cameras, scene capturing, etc.) and the infamous live world-map-table-thingy (a variant of a mini-map).

Since we've discussed Ouya support now and then, I wonder if this would allow for split-screen instances as well?

manu3d

Active Member
Contributor
Architecture
Thank you all for your replies and apologies for my late one.

Sounds like the RelevanceRegionComponent is an excellent starting point, although it forces me to think about cameras as components sooner than I would have liked. I say starting point because it doesn't provide the five distance-sorted PriorityQueue&lt;RenderableChunk&gt; instances the renderer uses to iterate through chunks in the various render passes (chunksOpaque, chunksOpaqueShadow, chunksOpaqueReflection, chunksAlphaReject, chunksAlphaBlend). I guess I'd have to add a component holding those render queues and a system updating them.
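For illustration, one of those per-pass queues boils down to a PriorityQueue ordered by distance from the camera. A simplified sketch (the Chunk type and integer chunk coordinates here are stand-ins, not the engine's RenderableChunk):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of a distance-sorted render queue: chunks come out ordered by
// squared distance from the camera position, nearest first.
class ChunkQueueSketch {
    static class Chunk {
        final int x, y, z;
        Chunk(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        long squaredDistanceTo(int cx, int cy, int cz) {
            long dx = x - cx, dy = y - cy, dz = z - cz;
            return dx * dx + dy * dy + dz * dz;
        }
    }

    // Builds e.g. a chunksOpaque-style queue around a camera at (cx, cy, cz).
    static PriorityQueue<Chunk> newQueueAround(int cx, int cy, int cz) {
        return new PriorityQueue<>(
                Comparator.comparingLong((Chunk c) -> c.squaredDistanceTo(cx, cy, cz)));
    }
}
```

A per-camera component would hold five such queues (one per render pass), and a system would rebuild or re-sort them as the associated camera moves.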

Concerning use cases, the discussion originally started in the skysphere thread, as Immortius suggested there that cameras should eventually become components. I guess in the context of the skysphere RRCs wouldn't actually be needed, as the skysphere cameras wouldn't have a portion of the world to render nor relevance regions. I was thinking more in terms of in-game cameras, as Cervator and Skaldarnar suggested, using some form of render-to-texture.

Flo's idea is really good though. At least intRAserver portals, leading either elsewhere in the world or to other dimensions, would be quite neat to have, especially if they show the destination through the portal. Technically this might be easily done by rendering both scenes and stencil-masking the destination scene so that it shows only through the space inside the portal. Using render-to-texture might also be possible, but my mind is a bit challenged by the idea of having to deal with the asymmetric frustum of the destination's camera as a consequence. Also, what happens to the frustum of the destination's camera when you get close to the portal? It should be possible to deal with the issue, but right now I can't clearly envision the needed transition geometrically. I suspect however that render-to-texture would be more efficient, as overall it would require rendering a smaller number of pixels for the destination scene.
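The stencil-masking idea can be illustrated with a toy CPU-side analogue (the real thing would use the GPU's stencil buffer via the OpenGL stencil test; pixel values and the mask here are purely illustrative): the portal's screen-space shape marks a mask, and the destination scene overwrites only the masked pixels.

```java
// Toy CPU-side analogue of stencil-masked portal rendering: destination-scene
// pixels replace main-scene pixels only where the portal shape was marked.
class StencilPortalSketch {
    static int[] composite(int[] mainScene, int[] destScene, boolean[] portalMask) {
        int[] frame = mainScene.clone();
        for (int i = 0; i < frame.length; i++) {
            if (portalMask[i]) {          // "stencil test" passes inside the portal
                frame[i] = destScene[i];  // show the destination through the portal
            }
        }
        return frame;
    }
}
```

On the GPU this would be: render the main scene, draw the portal quad into the stencil buffer only, then render the destination scene with the stencil test set to pass only where the portal was drawn.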

IntER-server portals are also a neat idea. I guess the destination server would have to replicate quite a bit of data for the client to be able to render the other side of the portal. The alternative - the destination server sending the image to be placed in the portal - would rely on sending player-camera data and waiting for the associated rendered image, which I fear would generate noticeable visual lag. Not to mention, multiple players near the portal would each require their own rendering: it just doesn't sound scalable. So, from my perspective intERserver portals are a big question mark in terms of how to do the networking bit, but from a visual/rendering perspective, once intRAserver portals are up and running it shouldn't be too difficult to plug remote data into them, if it's available.

I do not have the time right now to delve into minimaps. Cameras as components are likely to be beneficial in that context too, but I feel that minimaps (avatar-like, or overlays, or textured on in-game paper) are a bit of a beast of their own, not least because they are likely to require a different rendering style and perhaps only a portion of the visual features and entities a player would see roaming the landscape.