General programming question: method parameters

manu3d

Active Member
Contributor
Architecture
On one end of the spectrum, all inputs and potentially even all outputs of a method could be passed in as arguments.

On the other end of the spectrum, a method might require no arguments at all, somehow fetching all it needs through registers, member variables and calling methods of other objects.

In reality, long parameter lists and output parameters tend to be rare, so the commonly used part of the spectrum lies somewhere between methods with a few input parameters and methods requiring none.

The question is then: given the choice, why should one lean toward the "some parameters" end of the spectrum, and why toward the "no parameters" end?

I'm asking because the methods in the code I'm looking at can potentially find pretty much everything they need on their own, i.e. through member variables. However, I have the feeling that sometimes passing in some parameters would make it clearer what a method depends on, improving readability.

Thoughts?
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
There are a number of "non-functional" attributes that are affected by decisions in this space:
  • Testability
  • Maintainability
  • Robustness
  • Reusability
And a few other principles come into play like the Open-Closed principle.

Firstly, from a testability perspective, you want code that can be unit tested in isolation. In general this means it is nicer if code doesn't know anything about the context in which it is running, but is given the necessary information instead. For instance, if you have a method to add two numbers, it is much easier to test if you can just call it with the two numbers rather than having to provide it with a complete entity environment from which it fetches them. If you are doing test-first development, this generally forces the issue.
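To make that concrete, here is a toy sketch (made-up names, not actual engine code) of the difference between a method you can test with a one-line call and one that needs its surrounding context stubbed out first:

Code:
import java.util.Map;

class NumberHolder {
    int value;
    NumberHolder(int value) { this.value = value; }
}

final class Adder {
    // Easy to test: call it with two ints and assert on the result.
    static int add(int a, int b) {
        return a + b;
    }

    // Harder to test: the inputs must be dug out of a surrounding context,
    // so a test has to build and populate that context first.
    static int addFromContext(Map<String, NumberHolder> context) {
        return context.get("first").value + context.get("second").value;
    }
}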

This also plays into reusability - the method which pulls numbers from specific entities in an entity system will be a lot less reusable than one you provide numbers to.

Maintainability and the Open-Closed principle are all about having individual classes and methods with a single purpose, so that the reasons why they need to change are minimised. It would be good if our number-adding method only had to be altered when we want to add numbers differently, and not because of changes to how entities work. On the other hand, perhaps there is a specific system for performing number operations based on two entities, which the "adding" method plugs into as an extension. The adding method changes when the way we want to add changes, and the entity-handling system changes when the way to work with entities changes.
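As a purely illustrative sketch of that split (hypothetical types, not the actual entity system): the entity-handling code is the only part that knows about components, and the "adding" behaviour plugs in as an extension that can change independently.

Code:
import java.util.function.IntBinaryOperator;

final class NumberComponent {
    int value;
    NumberComponent(int value) { this.value = value; }
}

final class NumberCombiningSystem {
    // Changes only when the way we work with components changes.
    int combine(NumberComponent a, NumberComponent b, IntBinaryOperator operation) {
        return operation.applyAsInt(a.value, b.value);
    }
}

// Usage: addition is just one pluggable operation among many.
// int sum = new NumberCombiningSystem().combine(new NumberComponent(2), new NumberComponent(3), Integer::sum);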
 

manu3d

Active Member
Contributor
Architecture
Thank you Immortius for your insight on this.

I feel that you have made a strong case for trying to pass as much as possible through method parameters and return statements. From your last paragraph, however, I didn't quite understand in what circumstances you'd recommend that a method rely, at least partially, on instance fields instead.

It seems to me that in many circumstances one can strike an intuitive balance between the two options relatively easily. But there are situations in which I'm ping-ponging between them. For example, some methods of the WorldRenderer could rely on a camera passed as a parameter or on an activeCamera field within the implementation. What you suggest seems to hint toward the first option, so that the method is more testable. I'd also tend to favor it because it makes clear what the method needs and what it provides. But a strict commitment to this way of thinking would eventually lead to high-level methods with long parameter lists, or to a crowd of custom wrapper classes used to package the inputs.
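Just to make the contrast concrete, the two options I keep bouncing between look roughly like this (a sketch only, not the real WorldRenderer API):

Code:
interface Camera { }

class WorldRendererSketch {
    private Camera activeCamera;

    // Option A: the camera dependency is explicit in the signature,
    // so it is obvious what the method needs and a test can pass in any camera.
    void renderWorld(Camera camera) {
        // ... render using 'camera' ...
    }

    // Option B: the dependency is hidden; callers must know that
    // activeCamera has to be set before this is called.
    void renderWorld() {
        // ... render using 'activeCamera' ...
    }
}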

Is this perhaps something that is more an art than science? Are there no hard and fast rules on the matter?
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
I suspect art :)

I interpret the last example as something like: if you're focusing on a camera and passing it in to do something, then cool. But if you have a different system that does advanced camera selection and makes sure there's an appropriate camera object in the CoreRegistry, then it is better to get it from there than to pass it in from elsewhere (in case some context makes the camera you're passing in less suitable, like an odd delay, or the valid camera in the CoreRegistry changing rapidly).

Which might be off, so take that with a grain of salt :geek:
 

manu3d

Active Member
Contributor
Architecture
That's ok with me. I'll use my best judgement then. Hopefully that'll be enough in most cases. What's important is that I'm not missing some easy rules/guidelines. Thank you!
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
There is a lot of instinct/gut feel to how I develop, yes. It is good in a way - I can develop good code fast working like this - but it does need to be reflected on, and it can make it hard to explain why I did X rather than Y and why Z is bad.

I agree there is a balance to be reached. When talking about fields, you are giving an object state - which is fine, a lot of things have state (a player has health, an apple has freshness). And state is generally testable - Vector3f is quite testable, for instance. It becomes less testable when dealing with singletons, object pools like the CoreRegistry and looser contextual state like that.

This possibly comes down to the correct level of abstraction - the single purpose for which an object exists. If a WorldRenderer's purpose is to render a block world, then it makes sense for it to have the world it renders as state. It is less clear whether the camera it renders with should be state or a parameter - either would work if we assume there is only ever one camera, but that assumption is questionable: other engines support rendering from multiple cameras, either to different parts of the screen or layered over each other. I could imagine the same WorldRenderer being invoked twice in a frame with different cameras. Which perhaps suggests a split between a renderer for a world and another layer that manages the overall rendering process?
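A rough sketch of what that split might look like, assuming the camera is passed in (hypothetical names, not a proposal for the actual API):

Code:
interface Camera { }
interface RenderTarget { }

class WorldRendererSketch {
    // Knows how to draw the block world from a given camera into a given target;
    // the world itself remains state of this object.
    void render(Camera camera, RenderTarget target) {
        // ... draw the world from 'camera' into 'target' ...
    }
}

class RenderingLayerSketch {
    // A thin layer above the world renderer decides how many times to invoke it per frame,
    // e.g. once per eye for a stereo setup, or once per viewport.
    void renderFrame(WorldRendererSketch renderer, Camera leftEye, Camera rightEye,
                     RenderTarget left, RenderTarget right) {
        renderer.render(leftEye, left);
        renderer.render(rightEye, right);
    }
}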
 

manu3d

Active Member
Contributor
Architecture
The current WorldRenderer is, in fact, already invoked twice per frame when Oculus support is enabled. This, however, is done ad hoc, with special code to handle an Oculus-based setup rather than, say, a more generic two-camera setup. As it currently stands, it definitely cannot handle additional cameras, e.g. to render additional viewports or to render to a texture to be mapped onto geometry in the 3D scene.

I certainly endorse the idea you mentioned a few times of having a Camera component so that (among other advantages) the WorldRenderer can be provided with a camera-capable entity as input rather than having the player's camera(s) semi-hardcoded in it. I hope I'll reach the point where I can implement this myself. It is certainly on my path, I can see it on the horizon. I just don't know if somebody else will get there first.
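Something along these lines is what I have in mind for the data side of it (a rough sketch only; this is not an existing engine class, and in the real entity system it would implement the engine's Component interface):

Code:
// Rough sketch of a camera component - field names are guesses, not a final design.
class CameraComponent {
    // The minimal data a renderer needs to build view and projection matrices for this entity.
    float verticalFieldOfView = 70.0f;
    float nearPlane = 0.1f;
    float farPlane = 1000.0f;
    boolean active = true;
}
// The WorldRenderer would then accept any entity carrying this component,
// instead of assuming the player's camera.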

Regarding what you mention about another layer overseeing the rendering process, I think it would certainly be worth looking into. As I went through the rendering code I kept thinking that it would be useful to have a node-based renderer, a concept coming from my VFX years and my exposure to compositing software such as Shake and Nuke. Nodes would have inputs and outputs and would represent processes such as renderings of a 3D scene, composites of the outputs of previous nodes, and filters applied to a particular input. Together they would establish one or more rendering pipelines - "pipeline" being perhaps a bit misleading, because the nodes would be arranged as a directed acyclic graph rather than a single linear chain.

Rendering from a different camera would amount to replacing the entity stored in a CameraNode, usually found at the beginning of the pipeline. Viewports might be represented by nodes at the other end of the pipeline, just before the compositing node that represents the framebuffer actually shown on screen. Similarly, rendering to a texture would amount to directing the outputs of previous nodes to a RenderToTexture node. Toggling effects such as bloom filters and DOF on or off would amount to enabling/disabling the appropriate nodes, or even removing them from the pipeline altogether. Pipelines could also be swapped at runtime to generate radically different rendering styles, as needed.

Crucially, compared to the current monolithic renderer, for which access to the source code is needed for any change, a node-based renderer would allow modules to provide new nodes, replacements for existing ones, and even partial or entirely new pipelines.
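To give a flavour of what I mean, here is a minimal sketch of the node idea (a hypothetical API, none of this exists in the engine):

Code:
import java.util.ArrayList;
import java.util.List;

// A node consumes the outputs of the nodes feeding into it and produces its own output,
// e.g. a rendered image or an intermediate buffer, represented here as an opaque Object.
interface RenderNode {
    Object process(List<Object> inputs);
}

final class RenderingDag {
    // Nodes stored in an order that respects their dependencies (a topological order of the graph).
    private final List<RenderNode> orderedNodes = new ArrayList<>();
    private final List<List<Integer>> inputIndicesPerNode = new ArrayList<>();

    void addNode(RenderNode node, List<Integer> inputIndices) {
        orderedNodes.add(node);
        inputIndicesPerNode.add(inputIndices);
    }

    // Evaluates every node once, feeding each node the outputs of its declared inputs;
    // the output of the last node is what ends up on screen.
    Object evaluate() {
        List<Object> outputs = new ArrayList<>();
        for (int i = 0; i < orderedNodes.size(); i++) {
            List<Object> inputs = new ArrayList<>();
            for (int inputIndex : inputIndicesPerNode.get(i)) {
                inputs.add(outputs.get(inputIndex));
            }
            outputs.add(orderedNodes.get(i).process(inputs));
        }
        return outputs.isEmpty() ? null : outputs.get(outputs.size() - 1);
    }
}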

This is the direction I'd go toward. I'm still deciphering the existing code though, and after that I'd need to address the issue of a Camera component and then the new skysphere. So, nobody hold their breath...
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
This is way over my head in 3D wizardry land, but I wanted to drop in a quick mention of water reflection (or really, any mirror-type functionality) as a known pain point. @begla tried a few different approaches but was unhappy with how much extra rendering it would add. For instance, the reason only sea-level water does the "normal" reflection is that each block level would take a whole rendering pass to generate a reflection for, which is very expensive. So only one layer was blessed with the ability.

That all may have changed since last I remember talking to him about it, and I know there's another variant that works differently. But then they all have their quirks too. Tricky topic!
 

manu3d

Active Member
Contributor
Architecture
No, this hasn't changed. The renderer does a number of passes:

- a pass to create a shadowmap for the light of the sun
- a pass for the reflected scene (this is the one you are talking about which might also deal with refractions, but I'm not sure yet)
- a standard pass for all opaque meshes (this includes a number of sub-passes)
- a couple of passes for transparent objects (I think - I still need to understand the details)

Pretty much everything else the renderer does in terms of generating imagery is combining these renderings or applying filters.

Indeed the reflection pass is only good for one reflective plane, as it re-renders the scene with an appropriately positioned, inverted camera and then combines the resulting image with the opaque rendering. Any additional mirror plane, in any other position or orientation, would require an additional render of the whole scene through an appropriately positioned, mirrored camera. This is typical for a rasterizer: Pixar's early movies used similar methods to render anything with reflections, and only later did their renderer, RenderMan, acquire a raytracing subsystem. Raytracing very elegantly generates reflections, refractions and (sharp) shadows in a single pass, but it does require quite a few tricks to run fast. As nVidia engineer David Luebke puts it: "Rasterization is fast, but needs cleverness to support complex visual effects. Ray tracing supports complex visual effects, but needs cleverness to be fast."
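For a horizontal mirror plane like sea level, the "inverted camera" boils down to reflecting the view across that plane - something like the following (plain matrix math for illustration, not the engine's actual code):

Code:
final class MirrorMath {
    // Builds a 4x4 reflection matrix (column-major, OpenGL-style) for the plane y = planeHeight.
    // Reflecting across y = h maps (x, y, z) to (x, 2h - y, z).
    static float[] reflectionAcrossHorizontalPlane(float planeHeight) {
        return new float[] {
            1,  0,                   0, 0,   // column 0
            0, -1,                   0, 0,   // column 1
            0,  0,                   1, 0,   // column 2
            0,  2 * planeHeight,     0, 1    // column 3
        };
    }
}
// Multiplying the view matrix by this reflection gives the mirrored camera used for the
// reflection pass; note that the reflection also flips triangle winding, so face culling
// has to be reversed while rendering it.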

The work I'm doing on the renderer these days is about making the list of processes that produce what is seen on screen more explicit, rather than leaving it fragmented deep inside nested method calls. This way it will be very clear from the WorldRenderer.render() method what the renderer does or does not do.
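The goal is for the top-level method to read roughly like this (method names are illustrative only, not the actual WorldRenderer code):

Code:
class ExplicitRendererSketch {
    public void render() {
        renderShadowMap();           // shadow map for the sun's light
        renderReflectedScene();      // scene re-rendered through the mirrored camera
        renderOpaqueGeometry();      // standard pass for opaque meshes and its sub-passes
        renderTransparentGeometry(); // passes for transparent objects
        applyPostProcessing();       // filters and composites producing the final frame
    }

    private void renderShadowMap() { }
    private void renderReflectedScene() { }
    private void renderOpaqueGeometry() { }
    private void renderTransparentGeometry() { }
    private void applyPostProcessing() { }
}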
 