Archived: Two wild ideas

Status
Not open for further replies.

manu3d

Active Member
Contributor
Architecture
Closed in favor of separate threads - Acoustic Renderer + Raytracing Renderer

I had a couple of wild ideas thinking about Terasology over the past few weeks. As with anything that takes resources, I imagine the chances of realizing them, even if I tried to make them myself, are slim. But I figure it's probably good to place them in the open and who knows, a few years from now somebody might pick them up, give them a good shake and make them happen.

1. Ray-tracing. Terasology's current renderer is a rasterizer. To do things such as shadows, transparencies and reflections it renders the scene multiple times using some tricks and then combines the resulting layers. The flow is somewhat tricky to understand and it keeps the various resulting images in memory. It also doesn't scale well: if I'm not mistaken, the current renderer can only handle one shadow-casting light (the sun) by rendering the scene into a shadowmap. It would need to render the scene additional times if additional shadow-casting lights were desired. Same thing if it needed to render additional reflecting surfaces besides the generally horizontal surface of liquids: any additional reflective surface that is not horizontally oriented or is not at sea level would require an additional render.

By comparison, ray-tracing does things such as shadows, reflections and refractions in a mathematically simple way. Adding shadow-casting lights only adds rays cast from the fragments being rendered toward each light; the same ray-casting process also handles standard transparencies, reflections and refractions, and it is relatively easy to understand and implement. What tends to make things much trickier for ray-tracing is that each ray needs to know if and where it hits something, requiring complex spatial partitioning (octrees, BSPs) to avoid testing every triangle in the scene for a collision with the ray.
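To make the shadow-ray idea concrete, here is a minimal sketch (Python for brevity, though Terasology itself is Java; the `is_occluded` callback and the light dictionaries are hypothetical stand-ins, not Terasology APIs):

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def length(v): return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)

def direct_light(hit_point, lights, is_occluded):
    """Sum the direct contribution of each light at hit_point.

    is_occluded(origin, direction, max_dist) -> True if something blocks
    the segment from origin toward direction within max_dist.
    """
    total = 0.0
    for light in lights:
        to_light = sub(light["pos"], hit_point)
        dist = length(to_light)
        direction = scale(to_light, 1.0 / dist)
        # One shadow ray per light: if nothing is in the way, the light
        # contributes; a real shader would also factor in N.L, color, etc.
        if not is_occluded(hit_point, direction, dist):
            total += light["intensity"] / (dist * dist)  # inverse-square falloff
    return total
```

Adding a second shadow-casting light is just one more entry in `lights` and one more ray per fragment, which is the scaling property described above.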

But that's where it occurred to me that a voxel-based game such as Terasology should be able to take advantage of its chunk/block-structured data and, at least for the rendering of the landscape, should be able to perform ray-block collision tests very quickly without particularly complex algorithms. In fact, the 3D rasterizers currently used to generate the landscape from noise functions might be able (hopefully with little modification) to rasterize a ray into a distance-ordered list of chunks to be checked for collision, and then do the same for individual blocks within each chunk.
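Stepping a ray through a block grid in distance order is essentially the classic Amanatides-Woo voxel traversal. A minimal sketch, assuming unit-sized blocks and a caller-supplied `is_solid` query (both assumptions for illustration, not Terasology's actual world API):

```python
import math

def traverse_blocks(origin, direction, is_solid, max_steps=256):
    """Walk a ray through a unit grid, visiting blocks in distance order.

    Returns the integer coordinates of the first solid block hit,
    or None if nothing is hit within max_steps blocks.
    """
    pos = [math.floor(c) for c in origin]  # block containing the ray origin
    step, t_max, t_delta = [], [], []
    for i in range(3):
        d = direction[i]
        if d > 0:
            step.append(1)
            t_max.append((pos[i] + 1 - origin[i]) / d)  # t at next +boundary
            t_delta.append(1 / d)                        # t per whole block
        elif d < 0:
            step.append(-1)
            t_max.append((pos[i] - origin[i]) / d)
            t_delta.append(-1 / d)
        else:
            step.append(0)
            t_max.append(math.inf)  # never crosses a boundary on this axis
            t_delta.append(math.inf)
    for _ in range(max_steps):
        if is_solid(*pos):
            return tuple(pos)
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

The same two-level idea described above would run this once over chunks and again over blocks inside each intersected chunk, so the cost per ray stays proportional to the distance traveled, not to the number of blocks in the scene.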

Raytracing still has issues. For example, a naive, least-rays-possible implementation will generate sharp shadows; soft shadows would make things more complex, requiring additional rays per fragment. Also, while the landscape is made of blocks, things roaming it (players, critters, vehicles) are not and would require special handling, i.e. some sort of space partitioning to speed up collision testing. But my gut feeling is that in the long run a ray-tracing renderer would make things simpler code-wise, and it would position Terasology well to handle increases in visual complexity as GPUs become more powerful.
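The soft-shadow cost mentioned above comes from sampling an area light with several jittered shadow rays instead of one. A toy sketch, assuming a spherical light crudely sampled inside its bounding cube and a hypothetical `is_occluded(point, target)` query:

```python
import random

def soft_shadow(hit_point, light_center, light_radius, is_occluded,
                samples=16, rng=None):
    """Estimate visibility of an area light from hit_point.

    Casts several shadow rays to jittered points on the light and returns
    a value in [0, 1]: 1 = fully lit, 0 = fully shadowed, anything in
    between is penumbra.
    """
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    visible = 0
    for _ in range(samples):
        # Jitter the target inside the light's bounding cube (a crude
        # stand-in for proper sampling of the sphere's surface).
        target = tuple(c + rng.uniform(-light_radius, light_radius)
                       for c in light_center)
        if not is_occluded(hit_point, target):
            visible += 1
    return visible / samples
```

With `samples=1` this degenerates to the sharp shadows of the naive version; the cost of softness is exactly the extra rays per fragment.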

2. Acoustic Renderer. Basically my thought was: could the geometric simplicity of Terasology's world lend itself well to a sound-based renderer, producing enough spatial detail and acoustic realism that a completely blind person could play alongside a fully-sighted one? Imagine a blind child and one of his school friends, or a blind grandfather and his grandchild. A sufficiently realistic (3D) soundscape, including the way sound bounces off and gets absorbed by surfaces, would give a blind person lots of useful information to move about in the environment. Crucially, sound realistically bouncing off surfaces would allow in-game human echolocation, letting a player actively acoustically "illuminate" nearby surfaces and perceive their shapes and properties through sound. A normal, arbitrary-triangle-based 3D game, with its complex surfaces, might struggle to create a sufficiently realistic, interactive acoustic rendering of the environment on today's consumer-level hardware. A highly geometrically structured reality such as Terasology's might have just the right simplicity to make an effective acoustic renderer possible.
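The echolocation idea could reuse the same ray-casting machinery: emit a click, trace rays outward, and turn each first hit into an echo with a delay (round-trip distance over the speed of sound) and an attenuated loudness. A very rough sketch under those assumptions; `first_hit(origin, direction)` returning a distance is hypothetical, and the falloff formula is a placeholder, not a validated acoustic model:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def echo_profile(listener, directions, first_hit, absorption=0.3):
    """Return (delay_seconds, loudness) for the first echo along each ray.

    first_hit(origin, direction) -> distance to the nearest surface,
    or None if the ray escapes into open space.
    """
    echoes = []
    for d in directions:
        dist = first_hit(listener, d)
        if dist is None:
            continue  # no surface this way: silence carries information too
        delay = 2 * dist / SPEED_OF_SOUND  # sound travels out and back
        # Crude distance falloff, scaled by how much the surface absorbs;
        # a soft material (high absorption) returns a duller echo.
        loudness = (1.0 - absorption) / (1.0 + dist) ** 2
        echoes.append((delay, loudness))
    return echoes
```

Feeding per-block absorption values into something like this is where the block-structured world might pay off: the material of every surface a ray hits is already known.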

Interestingly, while more traditional software-for-the-blind challenges would emerge alongside the renderer's development (non-visual UIs would obviously be needed for everything but 3D navigation), intriguing gameplay opportunities would also arise. A blind person will generally have a more refined, discriminating sense of hearing. In the game this might allow for environmental information in the form of sounds that a sighted person would not be able to perceive, distinguish from other sounds or simply interpret correctly. For example, a source of water underground might normally gurgle too subtly to be heard. A predator approaching might be too difficult to detect among the constant rustling of forest leaves. Or the call of a rare poisonous frog used to augment the efficacy of arrows might be indistinguishable from that of its more common, non-poisonous and largely useless cousin... except to somebody with very fine hearing. In all those circumstances a more refined hearing would provide an advantage. As it would in other circumstances where the sight of a normally sighted person gets impaired in-game: I'm thinking of dark, windy caves where torches are blown out, or temporary blindness caused by a spell turning the whole screen into a useless, overexposed blur. A blind person using the acoustic renderer would be unaffected and would be able to either help his or her sighted companions or even take advantage of the situation if the setting is a competitive one.

Finally, I would suggest that commercial enterprises such as Minecraft are unlikely to ever go in this kind of direction, as it is too risky a proposition given the numerically limited userbase for this feature and the reduced profit margins once R&D is taken into account. An open source project could however embrace the risk, make it a badge of honor and open the door for blind people to experience fully 3D, fully interactive voxel worlds. Not to mention, it would also rake in quite a bit of free advertising in the media and through word of mouth, which would eventually attract additional sighted users, not just visually impaired ones.

I'll leave it at that. Plenty of opportunities with both ideas but also plenty of challenges. I look forward to your comments.
 
Last edited by a moderator:

Florian

Active Member
Contributor
Architecture
About 1.) Raytracing is good for getting realistic results, but it is slow. There is a reason it isn't used in games.

About 2.) I guess your suggestions are more for the far, far future. While I find the idea of sound rendering cool, it seems to be a very difficult topic. The sound rendering projects I have seen are far behind what you describe.

While ideas are good in general, I think it is better to have smaller goals that can then actually be achieved. For example, we could use a good idea of how swimming needs to be changed so that it isn't a constant jump&drown cycle (see KinematicCharacterMover).
 

Cervator

Org Co-Founder & Project Lead
Contributor
Design
Logistics
SpecOps
I love both ideas, yeah they're far future, but that's OK! :)

1) On raytracing you need to get on IRC to say hi to harrison already, he's been talking about his ray tracer (in Oberon, so not exactly very applicable for Terasology) for literally years - he's been a mostly-lurking-hermit since near the very beginning. I've told him I'd be thrilled to have a working ray tracer for Terasology when the hardware is able to support it, somebody is available to put in the work, and the outcome is a comparable or better option to rasterizing :D He's got all sorts of videos showcasing some of his renderer's tricks like mirrors and infinite zooming. Supposed to be working on it in relation to a reboot of spasim (together with the original author) but not a lot of updates lately.

It reminds me some of our facade setup to put a face on the engine. Right now the PC facade really just configures the engine to run with a visual head, based on LWJGL, while I imagine one day with the rendering abstracted sufficiently in the engine we can pull out the remaining LWJGL pieces into the PC facade itself, leaving the engine with no rendering dependencies. Then maybe dropping in a ray tracing renderer one day would simply be building a ray tracing facade? I admit my knowledge there is shaky and maybe I'm not quite getting the architecture right. Heck, it seems like the engine alone should be headless instead of it being a toggle in the PC facade.

2) I really love the idea of an acoustic renderer, is that really a thing? Yeah it is pretty niche and we might even have to wait years for the libraries to catch up, but that sounds crazy neat. I like the idea of supporting unusual cases like that, several times some discussion has been going on for providing more educational elements beyond the game, and colorblindness has come up too, although I think mainly related to keeping blue the success color instead of green in Jenkins due to red/green being similar in some cases ... :)

In any case, yeah, low priority with more pressing down-to-earth matters taking precedence, neat or not. I think this thread might call for a new "WIBNIF" or "Moonshot" prefix (or both) to be added in this forum :D Although I'd repost them as two separate threads with a clear title so somebody can find them again in the future!
 

Immortius

Lead Software Architect
Contributor
Architecture
GUI
Cervator said: "Then maybe dropping in a ray tracing renderer one day would simply be building a ray tracing facade? I admit my knowledge there is shaky and maybe I'm not quite getting the architecture right."
My long term plan is to split things out as:

gestalt-game-assets - interfaces for the assets like Texture, Mesh, Shader, Sound, etc.
gestalt-lwjgl - lwjgl implementations for these assets and their systems, and things like nui's lwjgl rendering. There would probably be another library with the interfaces for these systems too, or that might be in gestalt-game-assets (with a slightly different name).

At that point we can add alternative implementations for the assets and their systems.

Whether this will allow ray tracing to replace lwjgl depends on whether ray tracing can handle conventional assets (mesh, texture, shaders, etc).
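The split described above is essentially an interface/implementation separation. A toy sketch of the shape of it (Python for brevity, though Terasology itself is Java; every class and method name here is hypothetical, not the actual gestalt API):

```python
from abc import ABC, abstractmethod

# Asset interface: the hypothetical gestalt-game-assets layer.
class Texture(ABC):
    @abstractmethod
    def bind(self) -> str:
        """Prepare this texture for use by the active renderer."""

# One backend: the hypothetical gestalt-lwjgl layer.
class GLTexture(Texture):
    def bind(self) -> str:
        return "bound via OpenGL"

# An alternative backend (e.g. a ray tracer) plugs in the same way,
# without any change to code written against the Texture interface.
class RayTracedTexture(Texture):
    def bind(self) -> str:
        return "sampled by ray tracer"

def render(texture: Texture) -> str:
    # Engine-side code sees only the interface, never the backend.
    return texture.bind()
```

Whether this works out in practice hinges on exactly the question raised above: the interfaces only help if a ray tracer can present itself through the same conventional asset types (mesh, texture, shader).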

Cervator said: "I really love the idea of an acoustic renderer, is that really a thing? Yeah it is pretty niche and we might even have to wait years for the libraries to catch up, but that sounds crazy neat."
There have been a couple of sound-only games recently. I could imagine such a thing is quite possible, though it would require a solid effort to get a decent result. It would have a large impact on gameplay (gameplay would need to be tailored for the experience); I doubt it is something you could merely apply on top of another game type.
 