
Discussing exciting new features, research & advancements in gaming (graphics & adjacent software)

after GDC 2026, and Microsoft's current plans to expose ML-related hardware capabilities seem to come from two angles at once:

1. As instructions available in normal shaders

These can be useful for implementing inference of small ML models as part of existing vertex/pixel/compute shaders. For example, this could include neural texture (de)compression, approximation of complex material and lighting models (BRDFs), character animation, or approximate physics simulation. In the future, we may have many small models evaluated every render frame.
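To make the "small ML models inside shaders" idea concrete, here is a minimal sketch in NumPy of the kind of tiny MLP a pixel shader could evaluate inline, e.g. to approximate a material (BRDF) response. The weights are random placeholders, not a trained model, and the layer sizes are purely illustrative.

```python
import numpy as np

# Hypothetical tiny 2-layer MLP of the size a pixel shader could evaluate
# inline. Weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 3)).astype(np.float16)  # hidden layer: 3 inputs -> 8
b1 = np.zeros(8, dtype=np.float16)
W2 = rng.standard_normal((1, 8)).astype(np.float16)  # output layer: 8 -> 1
b2 = np.zeros(1, dtype=np.float16)

def tiny_mlp(x):
    """Per-pixel inference: x could be e.g. (n.l, n.v, roughness)."""
    h = np.maximum(W1 @ x + b1, 0)   # ReLU hidden layer
    return W2 @ h + b2               # scalar reflectance-like output

out = tiny_mlp(np.array([0.5, 0.3, 0.2], dtype=np.float16))
```

At this scale the whole model fits in registers, which is why exposing such operations as shader instructions (rather than a separate ML runtime) makes sense.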
This isn't really news from this week: these features were announced earlier, and their specifications have been available for quite some time. Microsoft develops its HLSL language advancements quite openly by sharing HLSL specification proposals.
Long vectors, as specified in proposal 0026 - HLSL Long Vectors. It adds support for vectors with more than four elements, e.g. vector<float, 15>. Note that they are still normal variables, local to an individual shader thread.
Linear algebra, as specified in proposal 0035 - Linear Algebra Matrix. It adds a matrix type, such as Matrix<ComponentType::F16, 8, 32, MatrixUse::A, MatrixScope::Wave>, as well as vector-matrix and matrix-matrix operations like Multiply and MultiplyAccumulate.
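The semantics of the proposed long-vector and matrix operations can be sketched in NumPy. The 8x32 F16 shape mirrors the Matrix&lt;ComponentType::F16, 8, 32, ...&gt; example above; the variable names are illustrative, not the HLSL API.

```python
import numpy as np

# NumPy sketch of the proposed HLSL Multiply / MultiplyAccumulate semantics.
M = np.ones((8, 32), dtype=np.float16)    # 8x32 F16 matrix operand
v = np.arange(32, dtype=np.float16)       # "long vector" of 32 elements
acc = np.zeros(8, dtype=np.float16)       # accumulator vector

y = M @ v          # Multiply: matrix-vector product -> 8 elements
acc = acc + M @ v  # MultiplyAccumulate: product added into the accumulator
```

Note that in the proposal the matrix can be wave-scoped (MatrixScope::Wave), i.e. cooperatively held across the threads of a wave, which is what lets the hardware map these operations onto its matrix/tensor units; the NumPy version above only shows the arithmetic, not that distribution.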

2. DirectX Compute Graph Compiler

That's a new announcement from GDC 2026. Microsoft teased it as a completely new technology that will consume entire ML models and optimize them for efficient execution on a specific GPU. It will feature "graph optimization, memory planning, and operator fusion". This is clearly an approach to executing ML workloads intended to keep the entire GPU busy for some time, similar to upscaling and other screen-space effects. They will likely execute as multiple compute dispatches, maybe even as separate command buffer submissions.
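Operator fusion, one of the optimizations listed in the announcement, is easy to illustrate. A graph compiler can replace separate MatMul, Add, and ReLU nodes (each of which would be its own dispatch writing an intermediate buffer) with one fused kernel. This is a conceptual sketch in NumPy, not the DirectX Compute Graph Compiler API.

```python
import numpy as np

def unfused(x, W, b):
    t1 = x @ W                 # dispatch 1: writes an intermediate buffer
    t2 = t1 + b                # dispatch 2: another round-trip through memory
    return np.maximum(t2, 0)   # dispatch 3: ReLU

def fused(x, W, b):
    # One pass: intermediates conceptually stay in registers,
    # saving memory bandwidth and dispatch overhead.
    return np.maximum(x @ W + b, 0)

x = np.random.default_rng(1).standard_normal((4, 16))
W = np.random.default_rng(2).standard_normal((16, 8))
b = np.zeros(8)
same = np.allclose(unfused(x, W, b), fused(x, W, b))
```

The results are identical; the win is entirely in memory traffic and launch overhead, which is also why memory planning is listed alongside fusion.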
Note that ML frameworks can already do these things. With this project, Microsoft is basically creating another one, but tailored for cooperation with DirectX 12 and graphics workloads.
Note also that the graph approach is well known in the game development community. Advanced game engines often implement their own graphs representing render passes and dependencies between them, like the Render Dependency Graph in Unreal Engine. AMD also developed a similar solution called Render Pipeline Shaders. However, it never gained traction, possibly because developers saw it as overkill to employ LLVM to compile a custom domain-specific language.
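The render-graph idea itself is simple: passes declare which resources they read and write, and a topological sort of the resulting dependencies yields a valid execution order. A minimal sketch with Python's stdlib, using made-up pass and resource names:

```python
from graphlib import TopologicalSorter

# Each pass declares the resources it reads and writes (names illustrative).
passes = {
    "shadow":   {"reads": set(),                      "writes": {"shadow_map"}},
    "gbuffer":  {"reads": set(),                      "writes": {"gbuffer"}},
    "lighting": {"reads": {"shadow_map", "gbuffer"},  "writes": {"hdr"}},
    "post":     {"reads": {"hdr"},                    "writes": {"backbuffer"}},
}

# A pass depends on every pass that writes a resource it reads.
writers = {res: name for name, p in passes.items() for res in p["writes"]}
deps = {name: {writers[r] for r in p["reads"]} for name, p in passes.items()}

# A valid execution order: 'lighting' runs after both of its producers.
order = list(TopologicalSorter(deps).static_order())
```

Real render graphs (like Unreal's RDG) also use this dependency information for resource aliasing and barrier placement, not just ordering.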

 
This is a great description for the DXR 2.0 stuff (BVH related):

Likely DX13 sees the entire pipeline augmented with work graphs.
 
DX13 needs a 'killer game' at launch to show off its full potential and convince players and developers that the upgrade is worth it.
If they can't use it to transform the game then let's at least hope we see some nextgen rendering pipelines augmented with the features for perf boosts.

I hope so, but we've barely had a game use mesh shaders aside from Alan Wake II or Doom: The Dark Ages (Vulkan, but it supports them). The extended cross-gen console period + the popularity of the GTX 10 series makes it very hard.
IIRC there are a few games that use it on console and not on PC, like Avatar: Frontiers of Pandora. Also, IIRC, don't FF7 Remake and MHW use it? And I know AC Shadows uses it.
Who is gonna use 10 series in 2028 and beyond on AAA games? Crossgen is over.

And considering how expensive games are to make these days, a game fully taking advantage of DX13 will probably come years after RDNA 5/Helix does.
I don't think we can begin to fathom the weird shit they'll be able to do with that engine. Devs prob can't either. Gonna be a fine wine situation fs on those nextgen products.
 
This one specifically is solely for caustics.

It's one of those eternal problem points for brute-forcing with tracing. Specular manifold sampling (SMS) is more key to this than ReSTIR; the latter just turbocharges an already very efficient method pioneered by Autodesk's Arnold people, because it favors the unidirectional PT baseline that the Arnold renderer is built on.
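For anyone unfamiliar with why ReSTIR "turbocharges" methods like this: its core is resampled importance sampling, where candidate samples stream through a tiny reservoir and one is kept with probability proportional to its weight. A toy version of just that reservoir step (the real algorithm adds spatial and temporal reuse across pixels and frames):

```python
import random

class Reservoir:
    """Weighted reservoir of size one, as used in RIS/ReSTIR."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0

    def update(self, candidate, weight, rng):
        # Keep the new candidate with probability weight / total weight,
        # which leaves each seen candidate selected proportionally to its weight.
        self.w_sum += weight
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

rng = random.Random(42)
r = Reservoir()
for i in range(1000):
    weight = (i % 10) + 1   # hypothetical target-function weights
    r.update(i, weight, rng)
```

The appeal for real time is that the reservoir is O(1) memory per pixel no matter how many candidates stream through it.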

Since that original SMS paper there have been a few that have followed up on and refined it, including a path guiding one.
 
Something to bear in mind is that not all of these techniques are necessarily practical for real time use, and that's still OK.

Offline rendering is still a thing and will remain so unless Vaire strikes gold on their 4000x more efficient compute promise and someone uses it to stack a butt load of RT accelerated GPU dies together like 3D NAND to handle a massive amount more samples per pixel than present RT accelerated GPUs are capable of.

Point being that some of them could end up in Blender Cycles or Pixar Renderman cranking out the next Hollywood movies at a much faster time scale for a given render farm size.
 
Something to bear in mind is that not all of these techniques are necessarily practical for real time use, and that's still OK.
Yup, and like you said, RTRT won't catch up. We can probably only somewhat approximate with ML, but that's still on a very shaky and unproven foundation. NRC and RR are nowhere near enough.

I was just astounded that NVIDIA has published this many technique + ReSTIR papers in the last year: seven papers!

Many big papers from independent researchers too. More than a dozen IIRC.

It's good to see that the non-ML foundation continues to improve.
 
Yup and like you said RTRT won't catch up
I'm not so certain it will never catch up, as long as pixel resolution doesn't increase significantly in the future; or if it does, then only in VR, where foveated rendering can mitigate the impact with sparse sampling in the optical periphery.
 
This thread is really useful for keeping track of GPU research and graphics APIs. One thing I would add is that real-time ray tracing and AI-based upscaling like DLSS and FSR are becoming increasingly important for both performance and visual quality.

Also, Vulkan and Metal are great examples of low-level APIs that give developers more control over GPU resources compared to older APIs like OpenGL or DirectX 11.
 