
Poll: Do you care about ray tracing / upscaling?


Total voters: 255
Well, the highest RDNA 5 Gaming Card is presumably only going to have 18...
The highest-end RDNA5 gaming card is a cut-down AT0 with at least a 384-bit bus.

Why didn't they do it for the demo?
Because it's probably still in its training and/or fine-tuning phase. Distillation and quantization are the very last steps for these models.
Though if they're smart, this is actually going into the drawer for a long time. The release is completely poisoned thanks to Jensen lying through his teeth.
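To put that in concrete terms: quantization is applied to the already-finished weights, after all the training, fine-tuning and distillation is done. A minimal PyTorch sketch of that last step (the model here is just a placeholder, nothing to do with whatever Nvidia is actually running):

    import torch
    import torch.nn as nn

    # stand-in for a model that has already been trained / fine-tuned / distilled
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # post-training dynamic quantization is the final step: the Linear weights
    # are converted to int8, shrinking the memory footprint for inference
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    print(quantized)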
 
Been following this DLSS5 discussion on some forums, including Reddit, and some of the threads on r/hardware that were critical of DLSS5 simply got removed by a moderator. Is r/hardware under the influence of nVidia?

These were some of the threads that got removed; they showed videos where nVidia's DLSS5 was criticized:

nVidia Answers My DLSS 5 Questions:

Gamers Nexus: nVidia Says You're Completely Wrong:

While a couple of others that are kind of positive about DLSS5 are still up.

For a long time I have had this feeling that there is a lot of nVidia influence on Reddit, be it comments or upvotes. The last couple of days have only strengthened that feeling.
Not hard to figure out the NV shill mod, really not hard at all.
There are several kinda offshoots of r/hardware for that exact reason.
 
Well, the highest RDNA 5 Gaming Card is presumably only going to have 18...
That's a dGPU, so add at least 16 GB of RAM on the host system, of which 12 GB can reasonably be allocated, so 12+18 = 30 GB, parity with consoles.
Granted he might be talking about the PS6 only
Most likely because for FG to work you need a minimum of 60 FPS to begin with, which the PS6 should be able to achieve until GTA7 comes around...
 

Tangentially related to something else people wanted to hate on: Sony confirms they are going to add Frame Gen at some point to "PlayStation Platforms", but not this year. Granted he might be talking about the PS6 only.
 
The whole DLSS 5 debacle kind of proved what people here were saying back in 2023 – the moment upscaling/AI stops being a helpful optional tool and starts trying to replace the actual rendering pipeline, people push back hard. DLSS 2-3 was accepted because it enhanced what was already there. DLSS 5 is getting torched because it's trying to change what's there. Big difference.

As for RT: three years later and we're still in the "it's cool but not worth the tradeoff" phase for most people. The tech has improved, sure, but the cost-to-visual-benefit ratio still isn't there unless you're buying high-end. And now the conversation has shifted to whether RT is even going to matter once AI rendering tries to fake the same results anyway.
 
I think Richard and John were just too caught up in Nvidia's reality distortion field at the time. They didn't have the time or opportunity to look at things properly either, which resulted in a very poor approach in their first DLSS5 video.

They often fall into the trap of early access and rushing to scoop everyone, with strings probably attached to that early access, or maybe just not wanting to bite the hand that feeds it to them.

I remember they had first access to DLSS Frame Generation and that coverage was very glowing as well. Their later general coverage at the same time as everyone else was more critical.

DLSS 5 won't matter to me, because it's apparently only on RTX 50 series and above, and I'm sticking with 40 series for MANY more years. I'll probably be more interested in what DLSS 8 features we are complaining about when I think about upgrading.

But it's just another tool in the tool box. Studios don't have to use it, players don't have to turn it on.
 
Maybe a bad time to trust Nvidia's word, but I assume they are telling the truth that it's a really early showcase and that the model can be quantized and optimized down to a much smaller memory and GPU load footprint. That doesn't change my doubts about the ability to control it, though, when it looks more and more like a glorified post-processing image-to-image gen-AI filter. If it could be applied only moderately, to specific materials, and were actually hooked into the game engine, I'd be much more excited, knowing it wouldn't give the whole image that cheap FB AI-slop look.
 
As for RT: three years later and we're still in the "it's cool but not worth the tradeoff" phase for most people. The tech has improved, sure, but the cost-to-visual-benefit ratio still isn't there unless you're buying high-end. And now the conversation has shifted to whether RT is even going to matter once AI rendering tries to fake the same results anyway.
So this fits exactly with what Threat Interactive is saying that Nvidia introduces these features so competitors can fast-follow and Nvidia always has the first mover advantage. This is a very passive-aggressive way of locking out competitors.
 
So this fits exactly with what Threat Interactive is saying that Nvidia introduces these features so competitors can fast-follow and Nvidia always has the first mover advantage. This is a very passive-aggressive way of locking out competitors.
It's also a way to design planned obsolescence into their own products. If we assume Nvidia doesn't deem AMD as a threat, without meaningful updates, consumers wouldn't buy Nvidia's GPUs and their year-on-year gaming revenue would fall.
 
It's also a way to design planned obsolescence into their own products. If we assume Nvidia doesn't deem AMD as a threat, without meaningful updates, consumers wouldn't buy Nvidia's GPUs and their year-on-year gaming revenue would fall.
There's a reason the results all look the same. AI takes results from ALL online human development but acts as a single entity, so what it outputs is basically the cumulative human viewpoint, focused as if it were a single person. It looks similar because it's a singular viewpoint.

I don't think Nvidia's promise that it isn't slop but rather another means of customization, where you can still be unique, can ever be true. When you draw as an artist, you control every single aspect of the design, down to the individual lines. You would need that sort of control here, and at that point you lose the whole point of doing neural rendering in the first place, because everything you don't do will be filled in by the computer.

So eventually the vision seems to be where you do everything using an AI prompt. I guess we're back to using typewriters again.

Also, remember that graphics cards basically started by approximating output rather than producing accurate output. That is one way the Voodoo founders achieved acceptable performance at low cost: using FP32 over FP64. The problem you are having with DLSS and grainy, blurry textures is related to this. To save compute, they are approximating even further rather than being accurate.

Using neural rendering in future GPUs would create far more distortions, though. They are going as low as INT8 for AI. If it becomes the future of gaming, the issues you are seeing with anti-aliasing such as DLSS would show up everywhere: in textures, movement, lighting, vertex transformations, geometry.
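Just to put a rough number on how much error INT8 introduces, here is a toy NumPy sketch (purely back-of-the-envelope, nothing to do with actual DLSS internals):

    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.uniform(-1.0, 1.0, size=1_000_000).astype(np.float32)

    # naive symmetric int8 quantization: scale into -127..127, round, scale back
    scale = 127.0
    q = np.clip(np.round(values * scale), -128, 127).astype(np.int8)
    restored = q.astype(np.float32) / scale

    err = np.abs(values - restored)
    print("max error:", err.max())    # ~0.004 per value
    print("mean error:", err.mean())  # ~0.002 on average

Tiny per value, but once that kind of noise sits on every texture sample, light value and vertex transform, it is exactly the sort of grain and banding being complained about.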
 
We're actually all hallucinations of DLSS10

It seems games have become more about streaming and graphics, so I have been wondering how long until they just include an AI to play them for you. Or maybe you just train your AI assistant for a while until it can take over in your style.

Then you can relax and watch, along with everyone you are streaming to...
 
Tom Henderson says Ubisoft and Capcom devs were unaware of their games popping up at GTC with DLSS5:

I wouldn't read too much into "devs" not being aware.

Ubisoft and Capcom have something like 20K employees between them. The chance of catching some random employee who knows about this seems low. Also, as new tech, there was probably a small team working under NDA for the demos.

I don't hate the faces as much as most people do. I don't think it makes all the characters look the same either. Almost universally it was an improvement for Bethesda characters, probably because Bethesda seems to have a serious problem making human faces... Maybe I'd turn it on in some Bethesda games.

In motion it appears to have more problems.

The landscape stuff mostly seems to look worse. I'm not getting what improvement I'm supposed to be seeing there. It seems to make things brighter, and I like the default, darker-looking landscapes.

So mostly it's meh: it will apparently require RTX 50+, and it currently requires two 5090s, so it will probably remain a hit to FPS for some time.
 
The landscape stuff mostly seems to look worse. I'm not getting what improvement I'm supposed to be seeing there. It seems to make things brighter, and I like the default, darker-looking landscapes.
It does the same thing with faces, but we have confirmation from them that the effect can be toned down. Once you go down to a 30-40% blend, the faces look subjectively better even with the fake-lighting issues. As long as we accept this is not an evolution of ray tracing, it should be a nice optional feature.

Native > Blend > Official Demo

Starfield Face.jpg
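For anyone curious what a 30-40% blend actually means, it's just a linear mix of the native frame and the AI-filtered frame. A quick Python sketch (the file names are placeholders, not anything from the actual demo):

    import numpy as np
    from PIL import Image

    # assumes both screenshots are the same resolution
    native = np.asarray(Image.open("native_face.png"), dtype=np.float32)
    ai_pass = np.asarray(Image.open("dlss5_face.png"), dtype=np.float32)

    alpha = 0.35  # 35% AI output, 65% native
    blend = (1.0 - alpha) * native + alpha * ai_pass

    Image.fromarray(blend.astype(np.uint8)).save("blend_35.png")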
 
That in-between does look much better. The original DLSS Off kind of looks like crap. The lighting effect is too strong in the official DLSS On version.

I was frame-stepping one of the demo videos and noticed they had a frame between DLSS Off and DLSS On, which was kind of in between, and I thought that looked better as well.
 