This is awesome news. DirectX is stepping up its game. What once upon a time sucked big time has become something really impressive.
Thanks for bringing this to our attention, @Brink
3D Graphics is a Lie
For the last thirty years, almost all games have used the same general technique—rasterization—to render images on screen. While the game world is internally represented in three dimensions, rasterization ultimately operates in two dimensions (the plane of the screen), with 3D primitives mapped onto it through transformation matrices. Through approaches like z-buffering and occlusion culling, games have historically striven to minimize the number of spurious pixels rendered, as normally they do not contribute to the final frame. In a perfect world, the pixels rendered would be exactly those that are directly visible from the camera:
Figure 1a: a top-down illustration of various pixel reduction techniques. Top to bottom: no culling, view frustum culling, viewport clipping
Figure 1b: back-face culling, z-buffering
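The depth test at the heart of z-buffering can be pictured with a minimal sketch (hypothetical names, not any graphics API): each pixel remembers the depth of the nearest fragment written so far, and a new fragment survives only if it is closer.

```cpp
#include <array>
#include <limits>

// A tiny z-buffer: each pixel keeps the depth of the nearest fragment
// written so far; a new fragment is kept only if it is strictly closer.
struct ZBuffer {
    std::array<float, 4> depth;  // one row of 4 pixels, for illustration
    ZBuffer() { depth.fill(std::numeric_limits<float>::infinity()); }

    // Returns true if the fragment passes the depth test and is written.
    bool writeFragment(int x, float z) {
        if (z >= depth[x]) return false;  // occluded: a nearer pixel exists
        depth[x] = z;
        return true;
    }
};
```

This is the sense in which rasterization discards "spurious" pixels: fragments behind already-drawn geometry never reach the final frame.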
Through the first few years of the new millennium, this approach was sufficient. Normal and parallax mapping continued to add layers of realism to 3D games, and GPUs provided the ongoing improvements to bandwidth and processing power needed to deliver them. It wasn’t long, however, until games began using techniques that were incompatible with these optimizations. Shadow mapping allowed off-screen objects to contribute to on-screen pixels, and environment mapping required a complete spherical representation of the world. Today, techniques such as screen-space reflection and global illumination are pushing rasterization to its limits, with SSR, for example, being solved with level design tricks, and GI being solved in some cases by processing a full 3D representation of the world using async compute. In the future, the utilization of full-world 3D data for rendering techniques will only increase.
Figure 2: a top-down view showing how shadow mapping can allow even culled geometry to contribute to on-screen shadows in a scene
Today, we are introducing a feature to DirectX 12 that will bridge the gap between the rasterization techniques employed by games today, and the full 3D effects of tomorrow. This feature is DirectX Raytracing. By allowing traversal of a full 3D representation of the game world, DirectX Raytracing allows current rendering techniques such as SSR to naturally and efficiently fill the gaps left by rasterization, and opens the door to an entirely new class of techniques that have never been achieved in a real-time game. Readers unfamiliar with rasterization and raytracing will find more information about the basics of these concepts in the appendix below.
What is DirectX Raytracing?
At the highest level, DirectX Raytracing (DXR) introduces four new concepts to the DirectX 12 API:
- The acceleration structure is an object that represents a full 3D environment in a format optimal for traversal by the GPU. Represented as a two-level hierarchy, the structure affords both optimized ray traversal by the GPU, as well as efficient modification by the application for dynamic objects.
- A new command list method, DispatchRays, which is the starting point for tracing rays into the scene. This is how the game actually submits DXR workloads to the GPU.
- A set of new HLSL shader types including ray-generation, closest-hit, any-hit, and miss shaders. These specify what the DXR workload actually does computationally. When DispatchRays is called, the ray-generation shader runs. Using the new TraceRay intrinsic function in HLSL, the ray-generation shader causes rays to be traced into the scene. Depending on where the ray goes in the scene, one of several hit or miss shaders may be invoked at the point of intersection. This allows a game to assign each object its own set of shaders and textures, resulting in a unique material.
- The raytracing pipeline state, a companion in spirit to today’s Graphics and Compute pipeline state objects, encapsulates the raytracing shaders and other state relevant to raytracing workloads.
You may have noticed that DXR does not introduce a new GPU engine to go alongside DX12’s existing Graphics and Compute engines. This is intentional – DXR workloads can be run on either of DX12’s existing engines. The primary reason for this is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason, however, is that representing DXR as a compute-like workload is aligned to what we see as the future of graphics, namely that hardware will be increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code. The design of the raytracing pipeline state exemplifies this shift through its name and design in the API. With DX12, the traditional approach would have been to create a new CreateRaytracingPipelineState method. Instead, we decided to go with a much more generic and flexible CreateStateObject method. It is designed to be adaptable so that in addition to Raytracing, it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs.
Anatomy of a DXR Frame
The first step in rendering any content using DXR is to build the acceleration structures, which operate in a two-level hierarchy. At the bottom level of the structure, the application specifies a set of geometries, essentially vertex and index buffers representing distinct objects in the world. At the top level of the structure, the application specifies a list of instance descriptions containing references to a particular geometry, and some additional per-instance data such as transformation matrices, that can be updated from frame to frame in ways similar to how games perform dynamic object updates today. Together, these allow for efficient traversal of multiple complex geometries.
Figure 3: Instances of 2 geometries, each with its own transformation matrix
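The two-level layout can be pictured with a toy analogue (hypothetical types, not the actual DXR structures): bottom-level entries own vertex and index data, while top-level instances reference a geometry together with per-instance data such as a transform, so updating a dynamic object touches only the small top-level record.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Bottom level: the actual geometry, shared by any number of instances.
struct BottomLevelGeometry {
    std::vector<Vec3> vertices;     // vertex buffer
    std::vector<unsigned> indices;  // index buffer
};

// Top level: lightweight instance records referencing a geometry.
struct TopLevelInstance {
    int geometryIndex;   // which bottom-level geometry this instance uses
    Vec3 translation;    // stand-in for a full 3x4 transform matrix
};

// Moving a dynamic object only rewrites its top-level instance; the
// shared bottom-level geometry is left untouched.
void moveInstance(std::vector<TopLevelInstance>& tlas, int i, Vec3 t) {
    tlas[i].translation = t;
}
```

Two instances of the same geometry, as in Figure 3, are just two top-level records pointing at one bottom-level entry.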
The second step in using DXR is to create the raytracing pipeline state. Today, most games batch their draw calls together for efficiency, for example rendering all metallic objects first, and all plastic objects second. But because it’s impossible to predict exactly what material a particular ray will hit, batching like this isn’t possible with raytracing. Instead, the raytracing pipeline state allows specification of multiple sets of raytracing shaders and texture resources. Ultimately, this allows an application to specify, for example, that any ray intersections with object A should use shader P and texture X, while intersections with object B should use shader Q and texture Y. This allows applications to have ray intersections run the correct shader code with the correct textures for the materials they hit.
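Conceptually, the raytracing pipeline state behaves like a lookup table from the object a ray hits to the shader and resources to run. A toy analogue in C++ (hypothetical names; real DXR binds compiled HLSL hit groups, not function objects):

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy analogue of per-object hit shaders: each entry pairs an object with
// the "shader" (here, a plain function) and texture used when a ray hits it.
struct HitGroup {
    std::string objectName;
    std::function<std::string(const std::string&)> shader;
    std::string texture;  // the resource bound for this material
};

// A ray intersecting object i runs that object's own shader with its own
// texture, rather than a single pre-batched material pass.
std::string shadeHit(const std::vector<HitGroup>& table, int i) {
    return table[i].shader(table[i].texture);
}
```

Because the table is indexed by whatever the ray happens to hit, no up-front batching by material is needed.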
The third and final step in using DXR is to call DispatchRays, which invokes the ray generation shader. Within this shader, the application makes calls to the TraceRay intrinsic, which triggers traversal of the acceleration structure, and eventual execution of the appropriate hit or miss shader. In addition, TraceRay can also be called from within hit and miss shaders, allowing for ray recursion or “multi-bounce” effects.
Figure 4: an illustration of ray recursion in a scene
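The recursion in Figure 4 can be sketched with a toy trace function (hypothetical, not HLSL): a hit "shader" traces a secondary ray itself, so bounces continue until a ray misses or a depth limit is reached.

```cpp
// Toy model of ray recursion: each hit spawns a secondary ray toward the
// next surface, so bounces continue until a miss or the depth limit.
// Returns the number of bounces taken.
int traceRay(const bool* hits, int numSurfaces, int surface, int depth,
             int maxDepth) {
    if (depth >= maxDepth || surface >= numSurfaces || !hits[surface])
        return depth;  // "miss shader": stop and report the bounce count
    // "closest-hit shader": trace a reflection ray to the next surface
    return traceRay(hits, numSurfaces, surface + 1, depth + 1, maxDepth);
}
```

The depth limit mirrors the recursion bound a real application would impose to keep multi-bounce effects tractable.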
Note that because the raytracing pipeline omits many of the fixed-function units of the graphics pipeline such as the input assembler and output merger, it is up to the application to specify how geometry is interpreted. Shaders are given the minimum set of attributes required to do this, namely the intersection point’s barycentric coordinates within the primitive. Ultimately, this flexibility is a significant benefit of DXR; the design allows for a huge variety of techniques without the overhead of mandating particular formats or constructs.
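Given the barycentric coordinates reported at an intersection, the application reconstructs vertex attributes itself. A minimal helper, assuming the common convention that the two reported weights belong to the second and third vertices, with the first vertex taking the remainder:

```cpp
// Interpolate a scalar per-vertex attribute at a hit point from two
// barycentric weights: u weights vertex 1, v weights vertex 2, and
// vertex 0 receives the remaining weight 1 - u - v.
float interpolateAttribute(float a0, float a1, float a2, float u, float v) {
    return (1.0f - u - v) * a0 + u * a1 + v * a2;
}
```

The same weighted sum applies componentwise to positions, normals, or texture coordinates, which is why barycentrics alone are a sufficient minimum for the application to interpret its geometry.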
PIX for Windows Support Available on Day 1
As new graphics features put an increasing array of options at the disposal of game developers, the need for great tools becomes increasingly important. The great news is that PIX for Windows will support the DirectX Raytracing API from day 1 of the API’s release. PIX on Windows supports capturing and analyzing frames built using DXR to help developers understand how DXR interacts with the hardware. Developers can inspect API calls, view pipeline resources that contribute to the raytracing work, see contents of state objects, and visualize acceleration structures. This provides the information developers need to build great experiences using DXR.
What Does This Mean for Games?
DXR will initially be used to supplement current rendering techniques such as screen space reflections, for example, to fill in data from geometry that’s either occluded or off-screen. This will lead to a material increase in visual quality for these effects in the near future. Over the next several years, however, we expect an increase in utilization of DXR for techniques that are simply impractical for rasterization, such as true global illumination. Eventually, raytracing may completely replace rasterization as the standard algorithm for rendering 3D scenes. That said, until everyone has a light-field display on their desk, rasterization will continue to be an excellent match for the common case of rendering content to a flat grid of square pixels, supplemented by raytracing for true 3D effects.
Thanks to our friends at SEED, Electronic Arts, we can show you a glimpse of what future gaming scenes could look like.
Project PICA PICA from SEED, Electronic Arts
In addition, while today marks the first public announcement of DirectX Raytracing, we have been working closely with hardware vendors and industry developers for nearly a year to design and tune the API. In fact, a significant number of studios and engines are already planning to integrate DXR support into their games and engines, including:
Electronic Arts, Frostbite
Electronic Arts, SEED
Epic Games, Unreal Engine
Futuremark, 3DMark
Unity Technologies, Unity Engine
And more will be coming soon.
What Hardware Will DXR Run On?
Developers can use currently in-market hardware to get started on DirectX Raytracing. There is also a fallback layer that allows developers to start experimenting with DirectX Raytracing without any specific hardware support. For details on hardware roadmap support for DirectX Raytracing, please contact hardware vendors directly.
Available now for experimentation!
Want to be one of the first to bring real-time raytracing to your game? Start by attending our Game Developer Conference Session on DirectX Raytracing for all the technical details you need to begin, then download the Experimental DXR SDK and start coding! Not attending GDC? No problem! We'll be posting the slides on this blog after the session.
Read more: Announcing Microsoft DirectX Raytracing! DirectX Developer Blog