Thanks for your input, Immortius. Let me reply point by point:
- Some of this at least should be entity driven - particularly cameras. A fair bit of the pipeline configuration could be appended to cameras.
Yes, cameras as components are on my ideal todo list. I say "ideal" because a) I don't know WHEN I'm going to be able to tackle that and b) I don't know IF I'm going to be able to tackle that. But to me it seems inevitable that this is the direction cameras will have to go. Regarding pipeline configuration appended to cameras: I remember some discussions about somehow associating the list of objects to render with specific cameras, but I don't know if that's what you mean. Feel free to elaborate now or maybe later, when we get to it.
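Just to make the entity-driven idea a bit more concrete, here is a minimal sketch of what a camera reduced to component data could look like. The class name and fields are made up for illustration, not taken from the current codebase:

```java
/**
 * Hypothetical sketch only: a camera reduced to pure data on an entity.
 * The class and field names are illustrative and not part of the existing engine.
 */
public class CameraComponent {
    // Vertical field of view, in degrees.
    public float fieldOfView = 90.0f;
    // Near and far clipping planes, in world units.
    public float nearPlane = 0.1f;
    public float farPlane = 1000.0f;
    // Position and orientation would come from the entity's location data
    // rather than being duplicated here.
}
```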
- Cameras should have a RenderToTexture option, which would be valuable in producing inputs for later stages.
A couple of considerations on this:
- Currently the renderer effectively has no render-to-display functionality apart from the very last post-processing step, which writes the final image onto the display. This is typical of deferred shading: everything gets written to buffers (textures) until the image to show on screen has been produced. So, technically, the 15-20 steps that produce the final image are effectively render-to-texture steps. In particular, what is currently called the "active camera" never ever renders to the display. Furthermore, the active camera often renders to the same set of buffers, progressively refining their content, or is used for entirely different passes, rendering into different buffers. Multiple cameras would follow the same usage pattern.
- In this context I would suggest that there cannot be a simple camera -> texture mapping. Instead, cameras would simply exist as part of entities. Rendering tasks within a rendering pipeline would then be configured to use (usually) one of the available cameras. It is then up to each individual rendering task, whether it uses a camera or not, to store its own output and make it available to dependent tasks further down the pipeline.
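As a purely illustrative sketch of that arrangement, a rendering task might look something like the interface below. The interface and method names are assumptions for the sake of the example, not existing engine API:

```java
import java.util.Map;

/**
 * Illustrative only: a rendering task that may use a camera and that owns
 * its output, publishing it by name for tasks further down the pipeline.
 */
interface RenderingTask {
    /** Looks up the buffers this task reads, produced by earlier tasks. */
    void bindInputs(Map<String, Integer> availableBuffers);

    /** Executes the task; camera-less tasks simply ignore the argument. */
    void execute(long cameraEntityId);

    /** Names and handles of the buffers this task produced, if any. */
    Map<String, Integer> outputs();
}
```

The key point being that a camera is just another input to a task, while ownership of the output stays with the task itself.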
- Cameras should have some sort of priority ordering.
Can you elaborate on this? What's the concern you wish to address with this?
- I would suggest modules shouldn't have full access to the rendering pipeline - they shouldn't be able to mess with stereo rendering for instance. There should be some well defined points for module additions, specifically the post processing should be well exposed
I agree in principle. At this stage I have not really given enough thought to how modules would interact with the rendering pipeline: how they might add entirely new pipelines, how they might inject tasks into or remove tasks from one, and so on. I guess the first goal would be to transform the current renderer into a DAG-based pipeline. Then we'd probably want to try to create a security camera module, and we'd want to refactor the portals module to take advantage of the pipeline/task-based functionality. With that experience we'd probably have a much better idea of how other modules might want to interact with the renderer and where we should place boundaries.
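For what I mean by "DAG-based pipeline", here is a very reduced sketch. TaskNode, Pipeline and the ordering method are invented names, and a real implementation would need cycle detection and proper error handling:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Illustrative sketch: a pipeline as a directed acyclic graph of named tasks. */
final class TaskNode {
    final String name;
    final List<TaskNode> dependencies = new ArrayList<>();

    TaskNode(String name) {
        this.name = name;
    }
}

final class Pipeline {
    /** Returns the tasks in an order that respects their declared dependencies. */
    static List<TaskNode> executionOrder(List<TaskNode> nodes) {
        List<TaskNode> ordered = new ArrayList<>();
        Set<TaskNode> visited = new HashSet<>();
        for (TaskNode node : nodes) {
            visit(node, visited, ordered);
        }
        return ordered;
    }

    private static void visit(TaskNode node, Set<TaskNode> visited, List<TaskNode> ordered) {
        if (!visited.add(node)) {
            return; // already placed; a real implementation would also detect cycles
        }
        for (TaskNode dependency : node.dependencies) {
            visit(dependency, visited, ordered);
        }
        ordered.add(node);
    }
}
```

Modules injecting or removing tasks would then amount to editing this graph before the execution order is computed, within whatever boundaries we eventually define.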
Specifically on stereo rendering, from my perspective a stereo render is just going through the same rendering pipeline twice, with -some- rendering tasks switching cameras. What happens to the two images produced depends on the device in use. As you know, some devices use the same screen, temporally alternating left/right eye images and taking advantage of synchronized shutter glasses. Other devices, like the Oculus, render the two images on the same screen at the same time, side by side. Other devices again (e.g. autostereoscopic displays) might need the two images vertically interleaved with each other. In all these cases some system is needed to generate those two images at the appropriate time, and in the latter two cases an additional rendering task needs to be added to the end of the pipeline to combine the two images in a device-specific way. In my view these features would be module-provided, while the rest of the pipeline would probably be identical to the mono pipeline. In this context the question becomes: how do you avoid -other- modules messing with the functionality provided by the active stereo module? That, I do not know yet. Perhaps modules that involve hardware might have some kind of priority so that other modules cannot override their anchor point (i.e. at the end of the pipeline).
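Purely to illustrate "same pipeline, twice, with a camera switch", the per-frame logic could look something like this. All the types here are hypothetical placeholders, not engine classes:

```java
/** Illustrative only: hypothetical types for running the same pipeline once per eye. */
interface EyePipeline {
    /** Runs the whole pipeline for one camera and returns a handle to the resulting image. */
    int executeForCamera(long cameraEntityId);
}

interface StereoCombiner {
    /** Combines the two eye images in a device-specific way: side by side, interleaved, etc. */
    void combine(int leftImage, int rightImage);
}

final class StereoFrameDriver {
    static void renderFrame(EyePipeline pipeline, long leftEyeCamera, long rightEyeCamera,
                            StereoCombiner combiner) {
        int leftImage = pipeline.executeForCamera(leftEyeCamera);
        int rightImage = pipeline.executeForCamera(rightEyeCamera);
        combiner.combine(leftImage, rightImage); // module-provided, device-specific step
    }
}
```

For shutter-glasses displays the combiner step would effectively disappear, since each eye's image is simply presented in alternation.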
- Suspect some of these things should be defined as assets, such as post processing pipelines.
Possibly, yes. Perhaps rendering tasks might be defined as text assets listing the class to instantiate, the inputs (buffers to read from, shader to use) and the outputs (frame buffer to write to, viewport setting, write mask). Also, "default pipelines" might be stored as text assets listing the rendering tasks and their dependencies on each other. I'd first want to convert the current renderer to use largely hardcoded rendering tasks and a rendering pipeline that can be changed at runtime. With that knowledge we'd be able to see much better what could or should be stored as an asset.
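To make that less abstract, such a task asset might end up deserializing into a plain data holder along these lines. This is only a guess at the shape, and every name here is made up:

```java
import java.util.List;

/**
 * Hypothetical shape of a deserialized rendering-task asset: "class to
 * instantiate, inputs, outputs" expressed as data. Not existing engine code.
 */
public class RenderingTaskDefinition {
    public String taskClass;            // fully qualified class to instantiate
    public List<String> inputBuffers;   // buffers to read from
    public String shader;               // shader (material) to use
    public String outputFrameBuffer;    // frame buffer to write to
    public String viewport;             // viewport setting, e.g. "full" or "half"
    public String writeMask;            // e.g. "color", "depth", "color+depth"
}
```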
- Not sure how configuration interacts with this.
Perhaps a first version of this system would simply rely on rendering tasks and the rendering pipeline reading from the rendering config and acting accordingly. E.g. if light shafts are disabled, the pipeline, checking that toggle at the beginning of every frame, removes the light shafts task. Modules would have to offer their own configuration interface if they want to add configurable rendering pipelines and rendering tasks to the default one provided by the engine. Eventually rendering tasks and pipelines, including those provided by modules, might be able to inject configuration parameters directly into the render settings.
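A first iteration of this could be as simple as the following per-frame check. This is a sketch only, and the nested RenderingConfig interface stands in for whatever the real settings object ends up being:

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Illustrative only: a pipeline that enables/disables tasks from config each frame. */
final class ConfigDrivenPipeline {

    /** Hypothetical stand-in for the engine's rendering settings. */
    interface RenderingConfig {
        boolean isLightShaftsEnabled();
    }

    private final Set<String> activeTasks = new LinkedHashSet<>();

    /** Called at the beginning of every frame, before the tasks are executed. */
    void updateFromConfig(RenderingConfig config) {
        if (config.isLightShaftsEnabled()) {
            activeTasks.add("lightShafts");
        } else {
            activeTasks.remove("lightShafts");
        }
    }

    Set<String> activeTasks() {
        return activeTasks;
    }
}
```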
- Possibly would be helped by more powerful shader support with things like param binding based on semantics (any shader that asks for lightLevel should get it bound automatically for instance)
I will probably explore shaderland when the refactoring work on the renderer and its closest support classes is completed. I've certainly noticed though that the way shader parameters are currently injected is quite dispersed, with bits of information about how a shader is configured scattered across different places. Also, some of these parameters might only need setting once instead of every time a material is enabled. Once I've studied the relevant code I'll be able to discuss this much more easily, and potentially much sooner than the big changes suggested in this thread.
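Regarding param binding based on semantics, here is the kind of mechanism I imagine you are referring to, reduced to a toy example. The registry and the names in it are inventions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/**
 * Toy example of binding shader parameters by semantic name: any shader
 * declaring a parameter named "lightLevel" would get its value from the
 * registered provider, without per-material wiring code.
 */
final class SemanticBindings {
    private final Map<String, Supplier<Float>> providers = new HashMap<>();

    void register(String semantic, Supplier<Float> provider) {
        providers.put(semantic, provider);
    }

    /** Resolves every requested semantic to its current value. */
    Map<String, Float> resolve(Iterable<String> requestedSemantics) {
        Map<String, Float> values = new HashMap<>();
        for (String semantic : requestedSemantics) {
            Supplier<Float> provider = providers.get(semantic);
            if (provider != null) {
                values.put(semantic, provider.get());
            }
        }
        return values;
    }
}
```

A shader would then only need to declare which semantics it consumes, and the engine (or a module) would register the matching providers once, instead of each material pushing values individually.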