

after GDC 2026, and Microsoft's current plans to expose ML-related hardware capabilities seem to come from two angles at once:

1. As instructions available in normal shaders

These can be useful for implementing inference of small ML models as part of existing vertex/pixel/compute shaders. For example, this could include neural texture (de)compression, approximation of complex material and lighting models (BRDFs), character animation, or approximate physics simulation. In the future, we may have many small models evaluated every render frame.
Neither of these is really news from this week: both were announced earlier, and their specifications have been available for quite some time. Microsoft develops its HLSL language advancements in the open by publishing HLSL specification proposals.
- Long vectors, as specified in proposal 0026 - HLSL Long Vectors. It adds support for vectors with more than four elements, e.g. vector<float, 15>. Note that they are still normal variables, local to an individual shader thread.
- Linear algebra, as specified in proposal 0035 - Linear Algebra Matrix. It adds a matrix type, such as Matrix<ComponentType::F16, 8, 32, MatrixUse::A, MatrixScope::Wave>, as well as vector-matrix and matrix-matrix operations like Multiply and MultiplyAccumulate.
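Put together, the two proposals are enough to express a small MLP layer inline in a shader. A rough sketch of what that might look like, assuming the types and operations from proposals 0026 and 0035 ship as currently specified (exact names and signatures may still change before release; the Load* helpers here are hypothetical):

```hlsl
// Hypothetical sketch based on HLSL proposals 0026 (long vectors) and
// 0035 (linear algebra). Names, namespaces, and signatures may change.

// A 32-wide fp16 feature vector, still a normal per-thread variable (0026).
vector<half, 32> features = LoadFeatures(threadId);  // hypothetical helper

// An 8x32 fp16 weight matrix with wave scope (0035).
Matrix<ComponentType::F16, 8, 32, MatrixUse::A, MatrixScope::Wave> weights =
    LoadWeights(layerIndex);                         // hypothetical helper

// One small MLP layer: hidden = ReLU(weights * features + bias).
vector<half, 8> bias   = LoadBias(layerIndex);       // hypothetical helper
vector<half, 8> hidden = MultiplyAccumulate(weights, features, bias);
hidden = max(hidden, (vector<half, 8>)0);            // element-wise ReLU
```

This is the "small model inside an existing shader" use case from the list above: the whole layer lives in one shader invocation, with no separate dispatch per operator.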

2. DirectX Compute Graph Compiler

That's a new announcement from GDC 2026. Microsoft teased it as a completely new technology that will consume entire ML models and optimize them for efficient execution on a specific GPU. It will feature "graph optimization, memory planning, and operator fusion". This is clearly an approach to executing ML workloads intended to keep the entire GPU busy for some time, similar to upscaling and other screen-space effects. They will likely execute as multiple compute dispatches, maybe even as separate command buffer submissions.
Note that ML frameworks can already do these things. With this project, Microsoft is basically creating another one, but tailored for cooperation with DirectX 12 and graphics workloads.
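For intuition on what "operator fusion" buys: a framework might otherwise run each operator as its own compute dispatch, writing the intermediate tensor to memory in between, while a fused version keeps the intermediate in registers. A hand-written HLSL illustration of the idea (not actual Compute Graph Compiler output, which Microsoft hasn't shown; buffer layout and the constant K are assumptions for the sketch):

```hlsl
// Illustration of operator fusion; not actual Compute Graph Compiler output.
//
// Unfused: two dispatches with a round trip through memory.
//   Dispatch 1: Tmp[i]    = dot(Weights[i], Input);  // matmul kernel
//   Dispatch 2: Output[i] = max(Tmp[i], 0);          // ReLU kernel

StructuredBuffer<float>   Weights;  // MxK weight matrix, row-major (assumed)
StructuredBuffer<float>   Input;    // K-element input vector
RWStructuredBuffer<float> Output;   // M-element output vector
static const uint K = 256;          // assumed input width

// Fused: one dispatch; the intermediate never leaves registers.
[numthreads(64, 1, 1)]
void FusedMatMulRelu(uint3 id : SV_DispatchThreadID)
{
    float acc = 0;
    for (uint k = 0; k < K; ++k)
        acc += Weights[id.x * K + k] * Input[k];
    Output[id.x] = max(acc, 0);     // ReLU folded into the matmul kernel
}
```

A graph compiler automates this kind of decision across a whole model, which is where the "graph optimization, memory planning, and operator fusion" claims come in.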
Note also that the graph approach is well known in the game development community. Advanced game engines often implement their own graphs representing render passes and dependencies between them, like the Render Dependency Graph in Unreal Engine. AMD also developed a similar solution called Render Pipeline Shaders. However, it never gained traction, possibly because developers saw it as overkill to employ LLVM to compile a custom domain-specific language.

 
This is a great description of the DXR 2.0 stuff (BVH related):

Likely DX13 sees the entire pipeline augmented with work graphs.
 
DX13 needs a 'killer game' at launch to show off its full potential and convince players and developers that the upgrade is worth it.
If they can't use it to transform the game, then let's at least hope we see some nextgen rendering pipelines augmented with the features for perf boosts.

I hope so, but we've barely had a game use mesh shaders aside from Alan Wake II or Doom: The Dark Ages (Vulkan, but it supports them). The extended cross-gen console period + the popularity of the GTX 10 series makes it very hard.
IIRC there are a few games that use them on console but not on PC, like Avatar: Frontiers of Pandora. Also, IIRC, don't FF7 Remake and MHW use them? And I know AC Shadows uses them.
Who is gonna use 10 series in 2028 and beyond on AAA games? Crossgen is over.

And considering how expensive games are to make these days, a game fully taking advantage of DX13 will probably arrive years after RDNA 5/Helix comes out.
I don't think we can begin to fathom the weird shit they'll be able to do with that engine. Devs prob can't either. Gonna be a fine wine situation fs on those nextgen products.
 