As requested by @Cervator, I'm reposting here, as a separate thread, one of the two ideas mentioned in that thread.
Terasology's current renderer is a rasterizer. To do things such as shadows, transparencies and reflections it renders the scene multiple times using some tricks and then combines the resulting layers. The flow is somewhat tricky to understand and it keeps the various intermediate images in memory. It also doesn't scale well. For example, if I'm not mistaken the current renderer can only handle one shadow-casting light (the sun), by rendering the scene into a shadow map: it would need to render the scene additional times if additional shadow-casting lights were desired. Same thing if it needed to render reflecting surfaces besides the generally horizontal surface of liquids: any additional reflective surface that is not horizontal, or is not at sea level, would require an additional render.
By comparison, ray tracing does things such as shadows, reflections and refractions in a mathematically simple way. Adding a shadow-casting light only adds one ray per rendered fragment toward that light, using the same mechanism that handles standard transparencies, reflections and refractions, and that mechanism is relatively easy to understand and implement. What tends to make things much trickier for ray tracing is that each ray needs to know if and where it hits something, requiring complex spatial partitioning (octrees, BSPs) to avoid testing every triangle in the scene for a collision with the ray.
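To illustrate how shadow-casting lights scale in a ray tracer, here is a minimal sketch of direct lighting at one fragment: each extra light costs one extra occlusion (shadow) ray, not an extra scene render. This is not Terasology code; the `occluded` predicate stands in for whatever ray-scene intersection test the renderer would provide.

```java
import java.util.List;
import java.util.function.BiPredicate;

public class ShadowRays {

    /**
     * Sums a simple Lambertian contribution from each light the fragment can see.
     * occluded(point, toLight) answers: does anything block the ray from the
     * fragment to the light? One shadow ray per light, per fragment.
     */
    public static double directLight(double[] point, double[] normal,
                                     List<double[]> lights,
                                     BiPredicate<double[], double[]> occluded) {
        double total = 0;
        for (double[] light : lights) {
            double[] toLight = {light[0] - point[0], light[1] - point[1], light[2] - point[2]};
            if (occluded.test(point, toLight)) {
                continue; // fragment is in this light's shadow: skip its contribution
            }
            double len = Math.sqrt(toLight[0] * toLight[0]
                                 + toLight[1] * toLight[1]
                                 + toLight[2] * toLight[2]);
            double nDotL = (normal[0] * toLight[0]
                          + normal[1] * toLight[1]
                          + normal[2] * toLight[2]) / len;
            total += Math.max(0, nDotL); // basic diffuse term, clamped at grazing angles
        }
        return total;
    }
}
```

The point of the sketch is the loop shape: lights only grow the loop, while in the current rasterizer each shadow-casting light would grow the number of shadow-map passes.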
But that's where it occurred to me that a voxel-based game such as Terasology should be able to take advantage of its chunk/block-structured data and, at least for the rendering of the landscape, should be able to perform ray-block collision tests very quickly without particularly complex algorithms. In fact, the 3D rasterization code currently used to generate the landscape from noise functions might be able (hopefully with little modification) to rasterize a ray into a distance-ordered list of chunks to be checked for collision, and then do the same for the individual blocks within those chunks.
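The "rasterize a ray into a distance-ordered list of blocks" idea is essentially the classic Amanatides & Woo 3D-DDA grid traversal: the ray visits cells strictly front to back, so the first solid block encountered is the nearest hit, with no octree or BSP needed for the block grid itself. A minimal sketch (unit-sized blocks assumed; not tied to Terasology's actual chunk API):

```java
import java.util.ArrayList;
import java.util.List;

public class VoxelTraversal {

    /**
     * Walks the block grid along a ray from (ox,oy,oz) in direction (dx,dy,dz),
     * returning the visited block coordinates in strict near-to-far order.
     * A real renderer would stop at the first solid block instead of collecting.
     */
    public static List<int[]> traverse(double ox, double oy, double oz,
                                       double dx, double dy, double dz,
                                       int maxSteps) {
        int x = (int) Math.floor(ox), y = (int) Math.floor(oy), z = (int) Math.floor(oz);
        int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1, stepZ = dz > 0 ? 1 : -1;
        // Ray-parameter distance between successive grid planes on each axis.
        double tDeltaX = dx != 0 ? Math.abs(1 / dx) : Double.POSITIVE_INFINITY;
        double tDeltaY = dy != 0 ? Math.abs(1 / dy) : Double.POSITIVE_INFINITY;
        double tDeltaZ = dz != 0 ? Math.abs(1 / dz) : Double.POSITIVE_INFINITY;
        // Ray-parameter distance from the origin to the first grid plane on each axis.
        double tMaxX = dx != 0 ? (dx > 0 ? x + 1 - ox : ox - x) * tDeltaX : Double.POSITIVE_INFINITY;
        double tMaxY = dy != 0 ? (dy > 0 ? y + 1 - oy : oy - y) * tDeltaY : Double.POSITIVE_INFINITY;
        double tMaxZ = dz != 0 ? (dz > 0 ? z + 1 - oz : oz - z) * tDeltaZ : Double.POSITIVE_INFINITY;

        List<int[]> visited = new ArrayList<>();
        for (int i = 0; i < maxSteps; i++) {
            visited.add(new int[]{x, y, z});
            // Step along whichever axis has the nearest upcoming grid plane.
            if (tMaxX < tMaxY && tMaxX < tMaxZ) {
                x += stepX; tMaxX += tDeltaX;
            } else if (tMaxY < tMaxZ) {
                y += stepY; tMaxY += tDeltaY;
            } else {
                z += stepZ; tMaxZ += tDeltaZ;
            }
        }
        return visited;
    }
}
```

The same loop can run at two scales: first over chunks (cells the size of a chunk) to get the distance-ordered chunk list, then over the blocks inside each candidate chunk, which matches the two-level scheme described above.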
Ray tracing still has issues. For example, a naive, least-rays-possible implementation will generate sharp shadows. Soft shadows would make things more complex, requiring additional rays per fragment. Also, while the landscape is made of blocks, the things roaming it (players, critters, vehicles) are not, and would require special handling, e.g. some sort of spatial partitioning to speed up collision testing. But my gut feeling is that in the long run a ray-tracing renderer would make things simpler code-wise, and it would position Terasology well to handle increases in visual complexity as GPUs become more powerful.
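To make the soft-shadow cost concrete: instead of one shadow ray toward a point light, you cast several jittered rays toward an area light and use the visible fraction as the penumbra value. A minimal sketch, where `occluded` is again a hypothetical stand-in for the renderer's ray-scene test:

```java
import java.util.Random;
import java.util.function.Predicate;

public class SoftShadow {

    /**
     * Estimates how much of a disc-shaped area light (centered at lightCenter,
     * lying in the x/z plane, with the given radius) is visible from a fragment.
     * Returns 0.0 for full shadow, 1.0 for fully lit, in-between for penumbra.
     * Cost: 'samples' shadow rays per fragment instead of one.
     */
    public static double litFraction(double[] lightCenter, double lightRadius,
                                     int samples, long seed,
                                     Predicate<double[]> occluded) {
        Random rng = new Random(seed); // seeded so results are reproducible
        int visible = 0;
        for (int i = 0; i < samples; i++) {
            // Pick a uniformly distributed jittered target point on the light's disc.
            double angle = rng.nextDouble() * 2 * Math.PI;
            double r = lightRadius * Math.sqrt(rng.nextDouble());
            double[] target = {lightCenter[0] + r * Math.cos(angle),
                               lightCenter[1],
                               lightCenter[2] + r * Math.sin(angle)};
            if (!occluded.test(target)) {
                visible++;
            }
        }
        return (double) visible / samples;
    }
}
```

The extra rays multiply per-fragment cost, which is exactly the trade-off mentioned above; the block-traversal each ray performs stays just as simple.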