Closed in favor of separate threads - Acoustic Renderer + Raytracing Renderer
I had a couple of wild ideas thinking about Terasology over the past few weeks. As with anything that takes resources, I imagine the chances of realizing them, even if I tried to make them myself, are slim. But I figure it's probably good to place them in the open and, who knows, a few years from now somebody might pick them up, give them a good shake and make them happen.
1. Ray-tracing. Terasology's current renderer is a rasterizer. To do things such as shadows, transparencies and reflections, it renders the scene multiple times using various tricks and then combines the resulting layers. The flow is somewhat tricky to understand and it keeps the various intermediate images in memory. It also doesn't scale well. For example, if I'm not mistaken the current renderer can only handle one shadow-casting light (the sun), by rendering the scene into a shadow map; it would need to render the scene additional times if additional shadow-casting lights were desired. The same goes for reflecting surfaces beyond the generally horizontal surface of liquids: any additional reflective surface that is not horizontal or not at sea level would require an additional render.
By comparison, ray tracing handles shadows, reflections and refractions in a mathematically simple way. Adding shadow-casting lights only adds rays cast from the fragments being rendered toward each light, and the same ray-casting mechanism also handles standard transparencies, reflections and refractions; it is relatively easy to understand and implement. What tends to make ray tracing much trickier is that each ray needs to know if and where it hits something, normally requiring complex spatial partitioning (octrees, BSPs) to avoid testing every triangle in the scene for a collision with the ray.
But that's where it occurred to me that a voxel-based game such as Terasology should be able to take advantage of its chunk/block-structured data and, at least for the rendering of the landscape, perform ray-block collision tests very quickly without particularly complex algorithms. In fact, the 3D rasterizers currently used to generate the landscape from noise functions might be able (hopefully with little modification) to rasterize a ray into a distance-ordered list of chunks to be checked for collision, and then do the same for individual blocks within each chunk.
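To make this concrete, below is a minimal sketch of such a block-by-block ray walk, in the spirit of Amanatides and Woo's fast voxel traversal algorithm. All names in it (VoxelRayMarch, World, isSolid) are hypothetical placeholders rather than Terasology's actual API; the point is only that the uniform block grid itself serves as the acceleration structure, with no octree or BSP needed for the landscape:

```java
// A minimal sketch of ray-block testing over a uniform grid, after
// Amanatides & Woo's "A Fast Voxel Traversal Algorithm". Hypothetical
// names throughout; not Terasology's actual API.
public final class VoxelRayMarch {

    /** Hypothetical world interface: true if the block at (x,y,z) is solid. */
    public interface World { boolean isSolid(int x, int y, int z); }

    /**
     * Walks the ray (origin, direction) block by block, in strict
     * front-to-back order, and returns true on the first solid block
     * hit within maxDistance.
     */
    public static boolean march(World world,
                                double ox, double oy, double oz,
                                double dx, double dy, double dz,
                                double maxDistance) {
        // Block coordinates of the ray origin.
        int x = (int) Math.floor(ox);
        int y = (int) Math.floor(oy);
        int z = (int) Math.floor(oz);

        // Direction of the step along each axis.
        int stepX = dx > 0 ? 1 : -1;
        int stepY = dy > 0 ? 1 : -1;
        int stepZ = dz > 0 ? 1 : -1;

        // tMax*: ray parameter t at which the next block boundary is crossed
        // on each axis. tDelta*: how much t grows per block on each axis.
        double tMaxX = intBound(ox, dx), tDeltaX = Math.abs(1.0 / dx);
        double tMaxY = intBound(oy, dy), tDeltaY = Math.abs(1.0 / dy);
        double tMaxZ = intBound(oz, dz), tDeltaZ = Math.abs(1.0 / dz);

        double t = 0;
        while (t <= maxDistance) {
            if (world.isSolid(x, y, z)) {
                return true; // first block found is the nearest hit
            }
            // Step into the neighbouring block whose boundary is closest.
            if (tMaxX < tMaxY && tMaxX < tMaxZ) {
                x += stepX; t = tMaxX; tMaxX += tDeltaX;
            } else if (tMaxY < tMaxZ) {
                y += stepY; t = tMaxY; tMaxY += tDeltaY;
            } else {
                z += stepZ; t = tMaxZ; tMaxZ += tDeltaZ;
            }
        }
        return false; // nothing solid within maxDistance
    }

    // Distance along the ray until the first integer boundary on one axis.
    private static double intBound(double s, double d) {
        if (d == 0) return Double.POSITIVE_INFINITY;
        double frac = s - Math.floor(s);
        return (d > 0 ? 1.0 - frac : frac) / Math.abs(d);
    }
}
```

Because blocks are visited in strict front-to-back order, the first solid block found is guaranteed to be the nearest hit, which is exactly what primary, shadow and reflection rays all need.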
Ray tracing still has issues. For example, a naive, least-rays-possible implementation will generate sharp shadows; soft shadows would make things more complex, requiring additional rays per fragment, as sketched below. Also, while the landscape is made of blocks, the things roaming it (players, critters, vehicles) are not, and would require special handling, e.g. some sort of spatial partitioning to speed up collision testing. But my gut feeling is that in the long run a ray-tracing renderer would make things simpler code-wise, and it would position Terasology well to handle increases in visual complexity as GPUs become more powerful.
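As a rough illustration of what those additional rays look like, the sketch below casts several shadow rays per fragment toward jittered points on an area light and uses the unoccluded fraction as the shadow factor. It reuses the hypothetical VoxelRayMarch.march from the earlier sketch; the box-shaped jitter and all names are again assumptions, not existing code:

```java
import java.util.Random;

// A sketch of soft shadows via multiple jittered shadow rays per fragment.
// Assumes the hypothetical VoxelRayMarch.march from the previous sketch,
// and that (px, py, pz) has already been nudged slightly off the surface
// along its normal, so a ray does not immediately hit the fragment's block.
final class SoftShadowSketch {
    static double softShadowFactor(VoxelRayMarch.World world,
                                   double px, double py, double pz, // fragment
                                   double lx, double ly, double lz, // light centre
                                   double lightRadius, int samples, Random rng) {
        int unoccluded = 0;
        for (int i = 0; i < samples; i++) {
            // Pick a jittered target inside the light's extent (crude box jitter).
            double tx = lx + (rng.nextDouble() - 0.5) * 2 * lightRadius;
            double ty = ly + (rng.nextDouble() - 0.5) * 2 * lightRadius;
            double tz = lz + (rng.nextDouble() - 0.5) * 2 * lightRadius;
            double dx = tx - px, dy = ty - py, dz = tz - pz;
            double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
            // One shadow ray: does anything solid sit between fragment and light?
            if (!VoxelRayMarch.march(world, px, py, pz,
                                     dx / dist, dy / dist, dz / dist, dist)) {
                unoccluded++;
            }
        }
        return (double) unoccluded / samples; // 1.0 fully lit, 0.0 fully shadowed
    }
}
```

A single sample degenerates back to the sharp-shadow case; the cost grows linearly with the sample count, which is exactly the trade-off mentioned above.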
2. Acoustic Renderer. Basically my thought was: could the geometric simplicity of Terasology's world lend itself well to a sound-based renderer, producing enough spatial detail and acoustic realism that a completely blind person could play alongside a fully sighted one? Imagine a blind child and one of their school friends, or a blind grandfather and his grandchild. A sufficiently realistic (3D) soundscape, including the way sound bounces off and gets absorbed by surfaces, would give a blind person lots of useful information for moving about in the environment. Crucially, sound realistically bouncing off surfaces would allow in-game human echolocation, letting a player actively "illuminate" nearby surfaces acoustically and perceive their shapes and properties through sound. A normal, arbitrary-triangle-based 3D game, with its complex surfaces, might struggle to create a sufficiently realistic, interactive acoustic rendering of the environment on today's consumer-level hardware. A highly geometrically structured reality such as Terasology's might have just the right simplicity to make an effective acoustic renderer possible.
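To make the echolocation idea a little more tangible, here is a rough sketch of a single acoustic "ping", under the assumptions that one block is one metre and sound travels at 343 m/s. Every name in it (AcousticWorld, Echo, the absorption scale) is hypothetical rather than existing Terasology code:

```java
// A rough, purely illustrative sketch of in-game echolocation: cast a "ping"
// ray from the listener and turn the first block hit into an echo whose
// delay encodes distance and whose gain encodes distance plus the material's
// absorption. Requires Java 16+ for records.
public final class EcholocationSketch {

    /** Hypothetical acoustic view of the world. */
    public interface AcousticWorld {
        boolean isSolid(int x, int y, int z);
        /** 0.0 = perfectly reflective (e.g. stone), 1.0 = fully absorbent (e.g. wool). */
        double absorption(int x, int y, int z);
    }

    /** One echo heard back from a ping: when it arrives and how loud it is. */
    public record Echo(double delaySeconds, double gain) {}

    private static final double SPEED_OF_SOUND = 343.0; // m/s, one block = 1 m

    /** Converts the first solid block along the ray into an echo, or null on a miss. */
    public static Echo ping(AcousticWorld world,
                            double ox, double oy, double oz,
                            double dx, double dy, double dz,
                            double maxDistance) {
        // Crude fixed-step sampling instead of exact boundary crossings; the
        // exact grid walk from the ray-tracing sketch would work here too.
        double step = 0.1;
        for (double t = step; t <= maxDistance; t += step) {
            int x = (int) Math.floor(ox + dx * t);
            int y = (int) Math.floor(oy + dy * t);
            int z = (int) Math.floor(oz + dz * t);
            if (world.isSolid(x, y, z)) {
                double delay = 2.0 * t / SPEED_OF_SOUND;          // there and back
                double gain = (1.0 / (1.0 + t * t))               // distance falloff
                            * (1.0 - world.absorption(x, y, z));  // material dulling
                return new Echo(delay, gain);
            }
        }
        return null; // ping lost into open space
    }
}
```

Casting a few dozen such pings across the hemisphere in front of the player, and mixing the resulting echoes, would already convey a crude acoustic image of the nearby geometry.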
Interestingly, while more traditional software-for-the-blind challenges would emerge alongside the renderer's development (non-visual UIs would obviously be needed for everything but 3D navigation), intriguing gameplay opportunities would also arise. A blind person will generally have a more refined, discriminating sense of hearing. In the game this might give access to environmental information, in the form of sound, that a sighted person would not be able to perceive, distinguish from other sounds, or simply interpret correctly. For instance, a source of water underground might normally gurgle too subtly to be heard. A predator approaching might be too difficult to detect among the constant rustling of forest leaves. Or the call of a rare poisonous frog, used to augment the efficacy of arrows, might be indistinguishable from that of its more common, non-poisonous and largely useless cousin... except to somebody with very fine hearing. In all those circumstances a more refined sense of hearing would provide an advantage, as it would in circumstances where the sight of a normally sighted person gets impaired in-game. I'm thinking of dark, windy caves where torches are blown out, or temporary blindness caused by a spell turning the whole screen into a useless overexposed blur. A blind person using the acoustic renderer would be unaffected and would be able to either help his or her sighted companions or even take advantage of the situation in a competitive setting.
Finally, I would suggest that commercial ventures such as Minecraft are unlikely to ever go in this kind of direction, as it is too risky a proposition given the numerically limited userbase for this feature and the reduced profit margins once R&D is taken into account. An open source project could however embrace the risk, make it a badge of honor and open the door for blind people to experience fully 3D, fully interactive voxel worlds. Not to mention, it would also rake in quite a bit of free publicity in the media and through word of mouth, which would eventually attract additional sighted users, not just visually impaired ones.
I'll leave it at that. Plenty of opportunities with both ideas but also plenty of challenges. I look forward to your comments.