FirstPersonViewNode vs OpaqueObjectsNode

manu3d

Active Member
Contributor
Architecture
Hi everybody,

I noticed something strange in the renderer. I thought the FirstPersonViewNode was responsible for rendering held-in-hand items, e.g. torches and pickaxes. It turns out those are rendered in the OpaqueObjectsNode, and the FirstPersonViewNode seems to do nothing (at least in vanilla Terasology).

Both nodes iterate through the registered RenderSystem implementations, but when the time comes to render they invoke different methods: RenderSystem.renderFirstPerson() and RenderSystem.renderOpaque() respectively.
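Based on the description above, a minimal sketch of that dispatch pattern might look like the following. This is plain illustrative Java, not the actual engine classes; only the method names renderOpaque() and renderFirstPerson() come from the thread, the rest is assumed structure:

```java
import java.util.List;

// Simplified sketch: each node does no drawing itself, it just decides
// which callback every registered RenderSystem receives.
interface RenderSystem {
    default void renderOpaque() {}      // invoked by OpaqueObjectsNode
    default void renderFirstPerson() {} // invoked by FirstPersonViewNode
}

class OpaqueObjectsNode {
    void process(List<RenderSystem> systems) {
        for (RenderSystem s : systems) {
            s.renderOpaque();
        }
    }
}

class FirstPersonViewNode {
    void process(List<RenderSystem> systems) {
        for (RenderSystem s : systems) {
            s.renderFirstPerson();
        }
    }
}
```

So a system that implements only renderOpaque() contributes nothing when the FirstPersonViewNode runs, which would explain the node appearing to "do nothing".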

Is this the intended behavior? If so, what should the FirstPersonViewNode render?
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
I'm going to ping @Josharias on this topic, since he touched it last :D Might be good to ping on IRC / Slack if you spot him :)

His goal a while back was to split out held items from the player so you could eventually get to where player 1 could see what item player 2 is holding. But both that and the Gaze system (for letting player 1 see what direction player 2 is looking in) may have only gotten to some preliminary stages where you can see some of the magic, but it isn't all being put to use? That's my guess in any case.
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
Yes, I just haven't gotten around to figuring out how to do this better. Presently held items are "placed" in world space so that they can location-link to other entities. Ideally we could still location-link this kind of thing, but have it render in a first-person "layer" on top of the world.

The oddness around this problem has to do with the entity system and how to visualize these items in first person the same way they are visualized in the world. Previously, a special mesh was created for each block... and it did not reliably represent the block faces correctly. So by using the held item's entity directly, we can just render the item contained in the character's inventory in front of the player's camera, and it matches what gets thrown onto the ground.

There is some other component needed, or an additional field on an existing component, to put an entity onto the first person layer. I do not yet know what the "right" way to do it is.

There is presently some trickery around getting the item to render, involving manhandling the LocationComponent, which I would be grateful to remove and reorganize. That is probably one of the first places to start thinking about this problem.
 

manu3d

Active Member
Contributor
Architecture
Ok, thank you both for your inputs.

I think having held-in-hand items rendered in the right place in the world is quite valuable, as any effect on them would be consistent with the rest of the world: think, for example, of falling snow sticking to a character's sword (silly example, I know) [EDIT] or even correct lighting on the item. It also makes it much easier to eventually create a third-person view: in the simplest scenario it would just be a matter of moving the camera and perhaps activating the rendering of an otherwise invisible player mesh.

I can imagine that by rendering held-in-hand items separately we can ensure they are always drawn in front of everything else, by changing the depth-test function to GL_ALWAYS, as is currently done in the FirstPersonViewNode. Perhaps that's what you meant, @Josharias?
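As a toy illustration (not engine code) of why GL_ALWAYS puts a pass "on top": the depth test compares each incoming fragment against the depth already stored in the depth buffer, and GL_ALWAYS simply never rejects a fragment, so first-person geometry overwrites whatever the world pass left behind. A pure-Java stand-in for the two depth functions mentioned here:

```java
// Toy model of the OpenGL depth test, for illustration only.
enum DepthFunc { LESS, ALWAYS }

class DepthTest {
    // Returns true if a fragment at 'incoming' depth passes against the
    // value currently stored in the depth buffer (smaller = closer).
    static boolean passes(DepthFunc func, float incoming, float stored) {
        switch (func) {
            case LESS:   return incoming < stored; // normal world rendering
            case ALWAYS: return true;              // first-person "layer"
            default:     return false;
        }
    }
}
```

With GL_LESS a held item behind a wall would be rejected; with GL_ALWAYS it is drawn regardless of what is in front of it.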

However, the FirstPersonViewNode also changes the Field Of View of the camera and seems to reset the camera position to its default OpenGL position, near world coordinates 0,0,0: it seems to assume that held-in-hand objects are rendered on their own, in a geometrically separate environment. Potentially I could remove those changes to the camera and keep only the change to the depth function. This would allow held-in-hand items to be rendered via the FirstPersonViewNode in the exact same way they are currently rendered in the OpaqueObjectsNode, except they'd be drawn on top of everything. All they would need is to switch from implementing the RenderSystem.renderOpaque() method to implementing the RenderSystem.renderFirstPerson() method instead - that's the part I don't know about, as I don't see any code using it. Is it perhaps in the source code of the Core module?

Alternatively, we could leave things as they are and I'd remove the FirstPersonViewNode, since nothing seems to use it. Worst case, we can easily put it back in.

What do you guys think?
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
A quick module search found about a dozen usages of renderOpaque in modules, but they're all empty. Only the MeshRenderer and SkeletonRenderer seem to have code there. My first attempt via Find Usages didn't find any since I had somehow ended up with its options excluding implementations. Huh.

renderFirstPerson had nothing but empty methods in modules.

@manu3d I would go with your instincts on this, along with whatever @Josharias recommends, since you're actively working in the area and he's the only other one to have done so in recent memory :)
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
It is possible that resetting the camera to 0,0,0 for doing first person view stuff is the more "correct" way to do it. It smells like it has potential.

I wonder if what we are presently doing, putting objects into world coordinates, is a different mechanism from first-person view? More like an intelligent HUD that does a VR overlay, like an in-game HoloLens? There must be a standard layering system that most other games use that we should adopt (yes, lots of Josh ignorance showing).

In any case, I am always on board with removing code that is not being used so that future implementations do not have to fight with old code.
 

manu3d

Active Member
Contributor
Architecture
It is possible that resetting the camera to 0,0,0 for doing first person view stuff is the more "correct" way to do it. It smells like it has potential.
What's your thinking there? From my perspective it seems like more trouble: if you want to light a held-in-hand item correctly, you need to move all the lights to be relative to the player and then do that render separately. If you place the held-in-hand item in the world, you just include it in the opaque rendering pass and the lighting gets applied to it along with everything else. And as I mentioned in the previous post, if you attach a character to a world-positioned held-in-hand item you get the potential for a third-person view just by moving the camera away from the player position. Can you elaborate? Maybe you are onto something I'm not grasping yet.

Regarding HUDs and VR overlays, I thought that's what the OverlayNode and the RenderSystem.renderOverlay() method are for. That's where I'd put Augmented Reality stuff, e.g. bounding boxes highlighting other players or text floating over them. In fact I'd assume that's also where the block highlighting is done - I'd have to double-check.

Or am I misunderstanding what Overlays are?
 

Josharias

Conjurer of Grimoires
Contributor
World
SpecOps
Oh. Light placement... right. That would be problematic. Good call.

HUDs and overlays, what you have mentioned sounds reasonable to me.
 

manu3d

Active Member
Contributor
Architecture
Meanwhile, I think I have understood better what the FirstPersonViewNode does, and contrary to what I mentioned it does -not- reset the camera position to 0,0,0. It only resets the ModelView matrix, so that any model is drawn at world coordinates 0,0,0 -unless- it is moved somewhere else, e.g. with glTranslatef(x,y,z). I shall liberally sprinkle ashes over my head to repent for my misleading interpretation. :(
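To make the correction above concrete, here is a tiny pure-Java stand-in (illustration only, not the GL fixed-function pipeline) for what resetting the ModelView matrix means: after the equivalent of glLoadIdentity() the transform is the identity, so an untranslated model lands at the origin, and a translate offsets it from there:

```java
// Toy stand-in for the translation part of the ModelView matrix.
class ModelView {
    private float[] translation = {0f, 0f, 0f};

    // Equivalent in spirit to glLoadIdentity(): no offset at all.
    void loadIdentity() {
        translation = new float[] {0f, 0f, 0f};
    }

    // Equivalent in spirit to glTranslatef(x, y, z).
    void translate(float x, float y, float z) {
        translation[0] += x;
        translation[1] += y;
        translation[2] += z;
    }

    // Where a model-space point ends up under the current transform.
    float[] apply(float px, float py, float pz) {
        return new float[] {
            px + translation[0],
            py + translation[1],
            pz + translation[2]
        };
    }
}
```

So a model drawn right after the reset sits at 0,0,0 in eye space regardless of where the camera is in the world, which is why it looks like a camera reset at first glance.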
 