Issue 1741: DAG-based Rendering Pipelines

manu3d

Active Member
Contributor
Architecture
This is the discussion thread for issue #1741. I propose that exploratory discussions be held here, while distilled ideas, features and considerations that emerge as important will eventually be added to the issue itself.
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
Great write-up! And so over my head I wish you good luck in getting intelligent and constructive feedback. You are ahead of most of us :)

What little I can provide: Pipeline handy, good blunt instrument. Two rock of different color better than one rock, especially for price of one rock. Watch for dinosaur.
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Hmm, I think a more generic model like this is a good idea. Let me throw some additional requirements and thoughts into the mix:
  • Some of this at least should be entity driven - particularly cameras. A fair bit of the pipeline configuration could be appended to cameras.
  • Cameras should have a RenderToTexture option, which would be valuable in producing inputs for later stages.
  • Cameras should have some sort of priority ordering.
  • I would suggest modules shouldn't have full access to the rendering pipeline - they shouldn't be able to mess with stereo rendering, for instance. There should be some well-defined points for module additions; specifically, post-processing should be well exposed.
  • Suspect some of these things should be defined as assets, such as post processing pipelines.
  • Not sure how configuration interacts with this.
  • Possibly would be helped by more powerful shader support with things like param binding based on semantics (any shader that asks for lightLevel should get it bound automatically for instance)
 

msteiger

Active Member
Contributor
World
Architecture
Logistics
I like the idea of a clearer separation of different rendering features a lot. It should make it a lot easier to carve out the simpler ones that run even on old Intel gfx cards with outdated drivers.
PR 1732 splits the construction of shader code into smaller chunks and could be a first step in that direction.

Since you describe a very generic model, I'm convinced that someone must have done something like that before. Maybe you can get some good ideas by looking at how jMonkeyEngine or libGDX configure their rendering pipelines.

I think that what you describe could (should?) be completely separated from how the scene graph is defined and managed (Advanced topic #1).
 

manu3d

Active Member
Contributor
Architecture
Thanks for your inputs Immortius. Let me reply point by point:

  • Some of this at least should be entity driven - particularly cameras. A fair bit of the pipeline configuration could be appended to cameras.
Yes, cameras as components are on my ideal to-do list. I say "ideal" because a) I don't know WHEN I'm going to be able to tackle that and b) I don't know IF I'm going to be able to tackle that. But to me it seems inevitable that this is the direction cameras will have to go. Regarding pipeline configuration appended to cameras: I remember some discussions about somehow associating the list of objects to render with specific cameras. But I don't know if that's what you mean. Feel free to elaborate now or maybe later, when we get to it.
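Purely as an illustration of the "cameras as components" direction - every name and field below is hypothetical, not existing engine API - such a component might look something like this:

```java
// Hypothetical sketch only: a camera expressed as plain data on an entity.
// None of these names exist in the engine today.
public class CameraComponent {
    // Basic view parameters.
    public float fieldOfView = 90.0f;
    public float nearPlane = 0.1f;
    public float farPlane = 1000.0f;

    // Optional render-to-texture target: null could mean "render into the shared
    // buffers of the deferred pipeline" rather than into a private texture.
    public String targetBufferUri;
}
```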

  • Cameras should have a RenderToTexture option, which would be valuable in producing inputs for later stages.
A couple of considerations on this:
  • Currently the renderer effectively has no render-to-display functionality apart from the very last step of post-processing, which writes the very final rendering onto the display. This is typical of deferred shading: everything gets written to buffers (textures) until you have produced the image to show on screen. So, technically, the 15-20 steps that produce the final image are effectively render-to-texture steps. In particular, what is currently called the "active camera" never ever renders to the display. Furthermore, the active camera often renders to the same set of buffers, progressively refining their content, or is used for entirely different passes, rendering into different buffers. Multiple cameras would follow the same usage pattern.
  • In this context I would suggest that there cannot be a simple mapping camera -> texture. Instead, cameras would simply exist as part of entities. Then, rendering tasks within a rendering pipeline would be configured to use (usually) one of the available cameras. It is then up to each individual rendering task, whether they use a camera or not, to store their own output and make it available to dependent tasks down the pipeline - see the sketch right after this list.
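To make the "rendering task" idea a bit more concrete, here is a minimal sketch of what such a task's contract might look like - all names are hypothetical, nothing here is existing engine code:

```java
import java.util.List;

// Hypothetical sketch of a DAG node / rendering task. A node declares what it
// depends on, does its work when processed, and owns its own output buffer(s).
public interface RenderNode {

    // Upstream nodes whose output this node reads. The pipeline resolves the
    // resulting DAG into an execution order once, then runs the nodes every frame.
    List<RenderNode> getDependencies();

    // Runs the task for the current frame. A node may use one of the available
    // cameras or none at all, and it writes into buffers (textures) it owns
    // rather than straight to the display.
    void process();

    // Handle to this node's output (e.g. an FBO color attachment) so that
    // dependent tasks further down the pipeline can bind it as an input.
    int getOutputTexture();
}
```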
  • Cameras should have some sort of priority ordering.
Can you elaborate on this? What's the concern you wish to address with this?

  • I would suggest modules shouldn't have full access to the rendering pipeline - they shouldn't be able to mess with stereo rendering, for instance. There should be some well-defined points for module additions; specifically, post-processing should be well exposed.
I agree in principle. At this stage I have not really given enough thought to how modules would interact with the rendering pipeline: how they might add entirely new pipelines, how they might inject/remove tasks into/from one and so on. I guess the first goal would be to transform the current renderer into a DAG-based pipeline. Then we'd probably want to try to create a security camera module, and we'd want to refactor the portals module to take advantage of the pipeline/task-based functionality. With that experience we'd probably have a much better idea of how other modules might want to interact with the renderer and where we should place boundaries.

Specifically on stereo rendering, from my perspective a stereo render is just going through the same rendering pipeline twice, with -some- rendering tasks switching cameras. What happens to the two images produced depends on the device in use. As you know, some devices use the same screen, temporally alternating left/right eye images and taking advantage of synchronized shutter glasses. Other devices, like the Oculus, render the two images on the same screen at the same time, side by side. Other devices again (i.e. autostereoscopic displays) might need the two images vertically interleaved with each other. In all these cases some system is needed to generate those two images at the appropriate time, and in the latter two cases an additional rendering task needs to be added to the end of the pipeline to combine the two images in a device-specific way. In my view these features would be module-provided, while the rest of the pipeline would probably be identical to the mono pipeline. In this context the question becomes: how do you avoid -other- modules messing with the functionality provided by the active stereo module? That, I do not know yet. Perhaps modules that involve hardware might have some kind of priority so that other modules cannot override their anchor point (i.e. at the end) of the pipeline.
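Just to sketch that idea in hypothetical code - run the same pipeline once per eye, then let a module-provided task combine the two images. Tasks are shown as plain Runnables for brevity rather than the RenderNode-style tasks sketched earlier; every name below is made up:

```java
import java.util.List;

// Hypothetical sketch: stereo as "run the same pipeline once per eye, then let a
// device-specific task combine the two images". Nothing here is existing engine code.
public final class StereoFrameRunner {

    public enum Eye { LEFT, RIGHT }

    private final List<Runnable> pipeline;   // rendering tasks, already in execution order
    private final Runnable compositingTask;  // module-provided: side-by-side, interleaved, ...

    public StereoFrameRunner(List<Runnable> pipeline, Runnable compositingTask) {
        this.pipeline = pipeline;
        this.compositingTask = compositingTask;
    }

    // Hypothetical hook: camera-using tasks would pick up the per-eye camera set here.
    private void selectEyeCamera(Eye eye) {
        // swap the active camera (or its eye offset) - intentionally left abstract
    }

    public void renderFrame() {
        for (Eye eye : Eye.values()) {
            selectEyeCamera(eye);
            for (Runnable task : pipeline) {
                task.run();   // each eye's result stays in the tasks' own buffers
            }
        }
        // Device-specific final step, appended to the pipeline by the stereo module.
        compositingTask.run();
    }
}
```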

  • Suspect some of these things should be defined as assets, such as post processing pipelines.
Possibly, yes. Perhaps rendering tasks might be defined as text assets listing the class to instantiate, inputs (buffers to read from, shader to use) and outputs (frame buffer to write to, viewport setting, write mask). Also, "default pipelines" might be stored as text assets listing the rendering tasks and their dependencies on each other. I'd first want to convert the current renderer to use largely hardcoded rendering tasks and a rendering pipeline that can be changed at runtime. With that knowledge we'd be able to see much better what could or should be stored as an asset.
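As a purely hypothetical sketch of what such text assets might deserialize into (none of these class or field names exist today):

```java
import java.util.List;

// Hypothetical data classes a "rendering task" / "default pipeline" text asset
// could be deserialized into. Field names are illustrative only.
public class RenderTaskDefinition {
    public String taskClass;           // class to instantiate, e.g. a hypothetical "engine:LightShaftsTask"
    public List<String> inputBuffers;  // buffers (textures) to read from
    public String shader;              // shader/material to use
    public String outputBuffer;        // frame buffer to write to
    public int[] viewport;             // x, y, width, height
    public String writeMask;           // e.g. "rgba" or "depth"
}

// A "default pipeline" asset could then simply list the tasks plus the
// dependencies between them, forming the DAG.
class RenderPipelineDefinition {
    public List<RenderTaskDefinition> tasks;
    public List<String[]> dependencies; // pairs of task names: {upstream, downstream}
}
```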

  • Not sure how configuration interacts with this.
Perhaps a first version of this system would simply rely on rendering tasks and the rendering pipeline reading from the rendering config and acting accordingly. E.g. if light shafts are disabled, the light shafts task gets removed from the pipeline, which checks that toggle at the beginning of every frame. Modules would have to offer their own configuration interface if they want to add configurable rendering pipelines and rendering tasks to the default ones provided by the engine. Eventually rendering tasks and pipelines, including those provided by modules, might be able to inject configuration parameters directly into the render settings.
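A hypothetical sketch of that "check the toggle at the beginning of every frame" idea, with tasks shown as plain Runnables and the config access reduced to a boolean supplier (no existing engine API is used):

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Hypothetical sketch: the pipeline re-reads the rendering config once per frame
// and adds/removes optional tasks accordingly.
public final class ConfigDrivenPipeline {

    private final List<Runnable> tasks;          // the pipeline, in execution order
    private final Runnable lightShaftsTask;      // one optional task
    private final BooleanSupplier lightShaftsOn; // reads the rendering config

    public ConfigDrivenPipeline(List<Runnable> tasks, Runnable lightShaftsTask,
                                BooleanSupplier lightShaftsOn) {
        this.tasks = tasks;
        this.lightShaftsTask = lightShaftsTask;
        this.lightShaftsOn = lightShaftsOn;
    }

    // Called at the beginning of every frame, before the tasks are processed.
    public void refreshFromConfig() {
        boolean enabled = lightShaftsOn.getAsBoolean();
        if (enabled && !tasks.contains(lightShaftsTask)) {
            tasks.add(lightShaftsTask);   // real code would re-insert at the correct DAG position
        } else if (!enabled) {
            tasks.remove(lightShaftsTask);
        }
    }
}
```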

  • Possibly would be helped by more powerful shader support with things like param binding based on semantics (any shader that asks for lightLevel should get it bound automatically for instance)
I will probably explore shaderland when the refactoring work on the renderer and its closest support classes is completed. I've certainly noticed, though, that the way shader parameters are currently injected is quite scattered, with bits of information about how a shader is configured spread across different places. Also, some of these parameters might need setting only once instead of every time a material is enabled. As soon as I've studied the relevant code I'll be able to discuss this much more easily, and potentially much sooner than the big changes suggested in this thread.
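For what it's worth, here is a hypothetical sketch of the "param binding based on semantics" idea: a registry of providers consulted whenever a material is enabled, so any shader declaring a known parameter name (e.g. lightLevel) gets it bound automatically. All names are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch: any shader parameter whose name matches a registered
// semantic gets bound automatically. Not existing engine code.
public final class SemanticBindings {

    private final Map<String, Supplier<Float>> providers = new HashMap<>();

    // The engine (or a module) registers a provider once per semantic, e.g. "lightLevel".
    public void register(String semantic, Supplier<Float> provider) {
        providers.put(semantic, provider);
    }

    // Called when a material is enabled: every declared parameter with a known
    // semantic is bound without the calling code listing it explicitly.
    public void bind(Iterable<String> declaredParameters, ParameterSink material) {
        for (String param : declaredParameters) {
            Supplier<Float> provider = providers.get(param);
            if (provider != null) {
                material.setFloat(param, provider.get());
            }
        }
    }

    // Stand-in for whatever the shader/material abstraction actually offers.
    public interface ParameterSink {
        void setFloat(String name, float value);
    }
}
```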
 

manu3d

Active Member
Contributor
Architecture
Thank you msteiger for your contribution.

I like the idea of a clearer separation of different rendering features a lot. It should make it a lot easier to carve out those simple ones that even run on old Intel gfx cards with outdated drivers.
Good point. Although it seems to me that, right now, most old integrated cards fail to even create a proper OpenGL context, well before the renderer kicks in.

PR 1732 splits the construction of shader code into smaller chunks and could be a first step in that direction.
Indeed. The way shaders sometimes aggregate a number of effects into one piece of code is somewhat problematic from an "atomic rendering tasks" point of view. I need to study the code managing shaders and materials in much more detail before I can figure out how all this could fit together.

Since you describe a very generic model, I'm convinced that someone must have done something like that before. Maybe you can get some good ideas by looking at how jMonkeyEngine or libGDX configure their rendering pipelines.
Very good point. I certainly will. Thanks for suggesting this.

I think that what you describe could (should?) be completely separated from how the scene graph is defined and managed (Advanced topic #1).
I'm not sure why you mention the scene graph in the context of advanced topic #1. When I referred to state changes there I was writing entirely about OpenGL state, i.e. the currently bound frame buffer, the currently bound textures, the currently enabled material and so on. This is particularly relevant in post-processing, as every pass tends to bind a different framebuffer, a different material and different input textures. By the time post-processing has started (even earlier, actually) the 3D scene no longer comes into play and everything happens in a 2D world made of textures, quads and frame buffer attachments. So I'm not sure what you meant by this last point?
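To illustrate what "everything happens in a 2D world" means in terms of OpenGL state, here is a rough sketch of a single post-processing pass (LWJGL-style calls, immediate-mode quad for brevity; the handles passed in are placeholders, not engine code):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.GL_TEXTURE0;
import static org.lwjgl.opengl.GL13.glActiveTexture;
import static org.lwjgl.opengl.GL20.glUseProgram;
import static org.lwjgl.opengl.GL30.GL_FRAMEBUFFER;
import static org.lwjgl.opengl.GL30.glBindFramebuffer;

// Rough sketch of the state changes one post-processing pass goes through:
// bind an output FBO, bind a shader, bind the previous pass' output as input,
// draw a fullscreen quad. Handles (fbo, program, inputTexture) are placeholders.
public final class PostProcessingPassSketch {

    public void render(int fbo, int program, int inputTexture) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // output buffer for this pass
        glUseProgram(program);                    // the pass-specific shader

        glActiveTexture(GL_TEXTURE0);             // previous pass' output as input
        glBindTexture(GL_TEXTURE_2D, inputTexture);

        drawFullscreenQuad();                     // no 3D scene involved at this point

        glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the default framebuffer
    }

    private void drawFullscreenQuad() {
        glBegin(GL_QUADS);
        glTexCoord2f(0f, 0f); glVertex3f(-1f, -1f, 0f);
        glTexCoord2f(1f, 0f); glVertex3f( 1f, -1f, 0f);
        glTexCoord2f(1f, 1f); glVertex3f( 1f,  1f, 0f);
        glTexCoord2f(0f, 1f); glVertex3f(-1f,  1f, 0f);
        glEnd();
    }
}
```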
 

msteiger

Active Member
Contributor
World
Architecture
Logistics
I guess I misunderstood the last point then. As I understand it, scene graph frameworks reorganize the scene graph to minimize (OpenGL) state changes. Post-processing operations are probably different, but also easier to organize. Sounds reasonable to me, but also like a big chunk of work :)
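Just to spell out what I mean by minimizing state changes: render queues typically sort draw calls by the state they require, so consecutive calls can share shader and texture bindings. A hypothetical sketch, nothing engine-specific:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: sort draw calls by the state they need so that consecutive
// calls share bindings, then only change state when it actually differs.
public final class StateSortedQueue {

    public static final class DrawCall {
        final int shaderId;
        final int textureId;
        public DrawCall(int shaderId, int textureId) {
            this.shaderId = shaderId;
            this.textureId = textureId;
        }
    }

    public void render(List<DrawCall> calls) {
        calls.sort(Comparator.comparingInt((DrawCall c) -> c.shaderId)
                             .thenComparingInt(c -> c.textureId));
        int boundShader = -1;
        int boundTexture = -1;
        for (DrawCall call : calls) {
            if (call.shaderId != boundShader) {
                boundShader = call.shaderId;     // bind the shader only when it changes
            }
            if (call.textureId != boundTexture) {
                boundTexture = call.textureId;   // bind the texture only when it changes
            }
            // issue the actual draw call here
        }
    }
}
```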
 

manu3d

Active Member
Contributor
Architecture
@msteiger: Sorry I never replied to this. Indeed, scene graph frameworks might attempt to do that, but that is probably tied to the actual rendering(s) of the 3D scene. In our case there is a lot of binding/unbinding and enabling/disabling well after the last 3D data has been rendered into 2D buffers. I -suspect- a scene graph framework wouldn't help there. That being said, I've never used one, so I should probably just admit my ignorance on the matter. Do you have a particular scene graph framework in mind that I could check out?
 

msteiger

Active Member
Contributor
World
Architecture
Logistics
No, not really. I actually doubt (now) that there is a generic scene graph library that suits the purpose. I would focus on creating individual rendering nodes that can be coupled dynamically (e.g. depending on video settings) just like you planned it.
 