
Poll: Do you care about ray tracing / upscaling?


I have a feeling:
  • Show Nvidia's AI prowess
  • Nvidia is totally aware of the controversy DLSS 5 creates. Showing it now has two purposes:
    • Firstly, live through the initial upheaval and move on. By fall 2026, most of these reactions will have died down. If people saw this stuff for the first time only when DLSS 5 gets released, the negative backlash would undermine it. An early showcase actually increases acceptance by gamers and developers, because everyone has time to adjust to it.
    • Secondly, receive feedback from the community (developers, gamers, journalists, ...). These overblown "hero light" faces receive the most criticism, and that feedback is also visible to game developers. This should hopefully help create a better DLSS 5 configuration and parameterization for the initial launch. Good for us gamers.
My wish would be that we as users can configure the DLSS 5 parameterization as well: if, e.g., some faces look "too generated", dial down the respective setting.
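For illustration only, a minimal Python sketch of what such a user-facing knob set could look like; every name here is invented, nothing below is a real Nvidia API:

```python
from dataclasses import dataclass

# Hypothetical per-effect knobs for a neural rendering pass.
# None of these names come from Nvidia -- this is only what a
# user-facing DLSS 5 parameterization *could* look like.
@dataclass
class NeuralRenderSettings:
    face_enhancement: float = 1.0   # 0.0 = off, 1.0 = full "hero light" treatment
    texture_synthesis: float = 1.0  # how aggressively surfaces get re-imagined
    relight_strength: float = 0.5   # blend between engine lighting and model output

# Faces look "too generated"? Dial that one setting down.
settings = NeuralRenderSettings(face_enhancement=0.3)
```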
They also might just be dumb and are huffing their own farts.
 
I find it amusing that Bethesda went in on this tech. I mean, they were all in with AMD three years ago, conspiring against DLSS with Starfield not supporting it natively. Now we have decent character models in-game with Nvidia's new god rays 2.0 part 5.
 
DF had to turn off comments again lol.

"DF is now fully independent!"*

*Independent from user donations/Patreon/etc. Now fully funded by Nvidia.
The comment section is crucifying them, lmao:

#1 "It's radically transformed" Yeah, and my house was radically transformed after it was hit by a tornado.
#2 "looks a lot better with DLSS 5 on" NPC blinks like a reptile
#3 This is Digital Foundry's Doritos and Mountain Dew moment.
#4 Based on their reaction to this thing, I'm gonna guess that the Digital Foundry guys watch their TVs in Vivid Mode.
#5 the description says "Digital Foundry is now fully independent!" but the video makes it seem like both of them were kidnapped.
 
DF certainly ruined their rep. It was already very annoying when they'd take an excellent game, say it's an excellent game, and then nitpick some insignificant, often pixel-related element they'd spent lots of time looking for, but DLSS5? ALL GOOD!!!
DF ruined their rep for me when they sprayed themselves over Horizon Forbidden West's graphics, calling the game a PS5 graphics masterclass. So I bought it at launch, and the performance mode was so oversharpened and shimmery it looked like an Xbox 360 port on a 4K monitor, like 720p. I couldn't play it for more than ten minutes.


It was a widespread complaint about the performance mode, and it wasn't fixed until four months later.

Anyways, Digital Foundry has been for sale for years and I always treat them as marketing more than anything. I definitely trust Hardware Unboxed and Gamers Nexus way more, as they're not afraid to piss off Nvidia, PlayStation, etc.
 
I think it's fair to say that DF is access journalism at this point.
 
I find it amusing that Bethesda went in on this tech. I mean, they were all in with AMD three years ago, conspiring against DLSS with Starfield not supporting it natively. Now we have decent character models in-game with Nvidia's new god rays 2.0 part 5.
There was absolutely zero conspiracy there:

  1. Bethesda was a Microsoft first-party dev
  2. Starfield used a custom engine
  3. Starfield was in a huge, huge mess
  4. Microsoft used their connections at AMD to optimize the game for Xbox
  5. AMD sent a few engineers to optimize Starfield
  6. The optimization method devised by AMD used a custom hardware feature that favored AMD
  7. The game ran like a turd on Nvidia PCs (because no one optimized for that)
So in short, Starfield (and the UE5-based Senua's Saga) were optimized for the Xbox console (which just so happened to run on AMD hardware).

Now Microsoft has acquired Activision, so expect similar dynamics to play out with DirectX 13 games running on Xbox Helix (which is literally the same GPU chip as the RDNA 5 successor to the 9070 XT).
 
It is important to consider things from NV's perspective.
Since AMD has a monopoly over the high-tech consoles, NV has to differentiate and convince PC devs and ports to use their features to avoid being overshadowed, which is why GameWorks was created in the first place. Before then they were largely happy to compete on the same playing field, just with various levels of cheating, bribes and paid shills involved (NV has had technical forum shills for 25+ years).
RTX was created for ProViz stuff like rendering, adding RT acceleration to the Volta SM, which had the tensor cores to accelerate denoising. Real-time RT working out with DLSS was a gamble; NV could afford it because Maxwell/Pascal were so dominant and put NV in a very safe position. Maxwell coincided with a rather concerted movement of 'console gamers' moving to PC through that totally-not-astroturfed PCMR cringefest that continues to this day.

Each generation of consoles being AMD-only means NV has to try to ensure the generic option is not desirable enough to usurp their preferred option for how games should be configured.

Since it has worked so far, console ports are largely NV optimised, and PC native games are overwhelmingly NV optimised.

8th gen was GameWorks
9th gen was RTX/DLSS
10th gen: AMD/MS are basically building everything in by default to the GDK with Project Helix. If RDNA5, all of the advanced RT/neural-optimised techniques, and AMD being all-in on Work Graphs are good and easy enough for devs to get behind as the primary development platform for 10th gen games, NV ends up in their nightmare situation of being the secondary optimisation target and having to pay for first priority.

Naturally MS will try to play both sides, but their studios will be mandated to ensure things are built for Helix first, with NV being a separate optimisation fork to cover the whole PC userbase, so long as they pay for it.

Sony is Sony; they will have their own forks for their own API. The hardware is nearly 100% the same this time, and they should have things rolling nicely for the PS6 to work out just fine. A 120Hz mode should be in every PS6 game, with 60 as the quality mode, though I worry devs will cheese FG.

So back to DLSS5: this is how they hope to stop 10th gen consoles from encroaching on their PC gaming moat, something that is clearly being pushed out to as many major devs as possible, well ahead of viability, to try to control the narrative. This is intended for Rubin, and they will create horrific charts showing a Rubin 6070 or whatever running circles around the 5090.

It will require FG on a 5090 to get to 60 FPS, with horrible input lag guaranteed no matter what, and with FG you naturally get ghosting and smearing. If DLSS5 does solve temporal stability issues, especially in fast motion, that is a true game changer, but that does not seem to be the case (other techniques used to produce the base image could be the culprit). It likely still uses frame accumulation to hit acceptable frametimes instead of generating a whole new final output from just one frame at a time, as sketched below.
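For reference, frame accumulation in its simplest form is just an exponential moving average over frames. A minimal sketch (illustrative only, not Nvidia's actual pipeline):

```python
import numpy as np

def accumulate(history: np.ndarray, new_frame: np.ndarray,
               alpha: float = 0.1) -> np.ndarray:
    """Blend the newly generated frame into a running history buffer.

    This exponential moving average is the classic form of frame
    accumulation: cheap temporal reuse, but stale history bleeding
    into the current image is exactly where ghosting and smearing
    under fast motion come from.
    """
    return (1.0 - alpha) * history + alpha * new_frame
```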

When compared to existing PT implementations, and not cherry-picking unoptimised raster, the difference is not as stark, and there is so much low-hanging fruit still remaining in existing approaches that more targeted hw/sw codesign can address.

DLSS5 is all about approximating photorealistic lighting and textures... from one base model that can only be tuned through sliders and by masking parts of the scene off from the effect.
Thus it comes down to what exactly the model is, what frame data it references and, most importantly, how it is trained and what data they use for training.
They say it is deterministic, but uhh, if it is just a special image-generator model anchored to 3D screenspace as a baseline to generate from, well, it just isn't.
Without enough detail in precise spots it shows signs of hallucinating textures and lighting that shouldn't exist.

Need to know more to say much else, but for now this is the gambit NV is making.
-DLSS5, if it works out, usurps most of their existing tech, gambling on basically obsoleting not just themselves but, most importantly, AMD
-They are betting they can get inference fast enough in the future that this technique will achieve higher framerates with superior IQ than adding best practices to the existing paradigms, which will in time add neural-accelerated modeling across basically everything to achieve the same effect: photorealistic lighting/textures while drawing as few rays as possible, since light tracing maps horribly onto von Neumann architectures
-Basically hope that AAA games continue to buy into the homogeneous UEslop style and that consumers will continue to consume

Here's the thing: this clearly works for cinematic slow-motion AAA eye-candy walkfests, but games are very diverse; many genres will never adopt stuff like this unless you get extremely high FPS, low input lag and, most importantly, motion clarity and completely consistent imagery.
And indie titles or Nintendo stuff are all about the art direction and stylised graphics over maximum fidelity and realism.

The gaming industry at large has to ask: just how much does achieving realism in real-time graphics, or at least realistic effects in fictional settings, truly matter to video games?
After all, one company desperately wants you to accept them, and only them, to the detriment of consumers, competition and technological progress as a whole.

True to the nature of invidia, ever since they were founded NV has envied that which they don't already have; no matter how much they do have, it is never enough.
Cold, harsh business practices to the detriment of everything that isn't NV have led to the world we live in.

Jensen is not a visionary; his cofounders brought the idea and expertise to him, after all. He has largely copied the business acumen of Steve Jobs, who was a visionary focused on one specific group of people: the consumer. Both are highly questionable people for different reasons, but that is beside the point.
NV has always exploited and belittled the consumer, knowing how susceptible the dumb, gullible peasantry is to bread and circuses.

NV are good at GPUs and thus did everything to proliferate GPUs; 3rd parties made the advancements and released the papers that convinced NV to pivot hard in certain directions.
This is the area Jensen is exceptional at: cutting through normal corpo resistance a la Intel and committing to new areas by working hard and aiming to have a monopoly before anyone else showed up.
They are not the only company to have had these ideas when they started, but AMD lacked the resources and Intel was led by absolute fools. So it kinda just worked out for them; now others are fully committed and, like always, NV is shown not to be invincible when the industry at large tries.

I wish more people actually looked at their history, and at what others were up to over the years, to gain proper context on how things got to be as they are today: a mixture of many bouts of luck combined with setbacks and disgraceful business practices. Much could be written about the Intel/AMD wars, and about ATi, to properly understand where NV sits.

And next time, Tae, actually include the many crimes NV committed against PC gaming, okay?
 
aiming to have a monopoly before anyone else showed up.
Remember, they tried to buy ARM too. That would’ve killed ARM on the spot for everyone else if the deal had passed.

100% they would’ve killed ARM’s licensing business model, and the entire product roadmaps of other companies would’ve been disrupted. Nvidia’s likely solution would’ve been “oh, you want an ARM CPU? Buy our CPU that comes with a CUDA GPU attached.”
 
It would’ve been a continuation of their “vertically integrated, horizontally open” approach: we’ll help you develop solutions for your workloads, but they only work on our hardware so that you become reliant on us.
 

The memes are funny as hell, and YT and Twitter are flooded with them. The tech media's DLSS 5 coverage has been mostly negative.

I really can't see how they can turn this steaming pile of shit into something viable, but they don't care since wowing dumb investors is all that matters.
DLSS1 was a laughing stock too, but no one remembers that now and the next iteration(s) of DLSS became something that PC gamers worship. DLSS5 is probably going to follow a similar trajectory. Nvidia will polish it or change their approach at its core (like they did with DLSS2) if necessary and shill the new, at least minimally viable version of this tech at full blast so that everyone forgets this fiasco.
 
Doc is in the same situation as DF, i.e. tap-dancing for anyone that throws money in the hat.

The level of hyperbole in his one tweet is mind-numbing.
DLSS1 was a laughing stock too, but no one remembers that now and the next iteration(s) of DLSS became something that PC gamers worship. DLSS5 is probably going to follow a similar trajectory. Nvidia will polish it or change their approach at its core (like they did with DLSS2) if necessary and shill the new, at least minimally viable version of this tech at full blast so that everyone forgets this fiasco.
Good job citing precedent. I hope that is how this plays out. I have been duly impressed with the maturation of DLSS. DLSS 5 definitely needs to deliver when made public.
 
DLSS1 was a laughing stock too, but no one remembers that now and the next iteration(s) of DLSS became something that PC gamers worship. DLSS5 is probably going to follow a similar trajectory. Nvidia will polish it or change their approach at its core (like they did with DLSS2) if necessary and shill the new, at least minimally viable version of this tech at full blast so that everyone forgets this fiasco.
The current approach is never gonna work. Output pixels and motion vectors are not enough context, and for it to be actually good it has to be integrated directly into the game engine pipeline.
The reason they didn't do this is probably that the model would then be even bigger and more complex. It's already a problem to get it running in real time right now, so achieving this with an even larger model is unlikely.

This problem is best tackled by breaking it up into multiple smaller models, and I'm rooting for neural shaders (deterministic MLP-based neural encoders), the original vision NVIDIA had with the 50 series and what AMD, Sony and Microsoft are working on right now for the 10th console generation. Unlike DLSS 5, a glorified ML post-processing filter, they integrate directly into the rendering pipeline, augmenting and replacing parts of it (see the sketch below). They're also far more controllable and customisable by gamedevs and won't suffer from all the issues plaguing DLSS 5.
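To make the contrast concrete, here is a minimal sketch of the neural-shader idea: a tiny deterministic MLP decoding learned features into a material sample inside the shading loop, instead of post-processing the final image. All shapes and names are illustrative, not any vendor's API:

```python
import numpy as np

def neural_texture_sample(latent, uv, w1, b1, w2, b2):
    """Tiny two-layer MLP decoder: (latent features, uv) -> RGB albedo.

    Deterministic given fixed weights, and evaluated per shading
    sample *inside* the pipeline, rather than filtering the finished
    frame afterwards.
    """
    x = np.concatenate([latent, uv])
    h = np.maximum(w1 @ x + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # RGB in [0, 1]

# Toy usage with random stand-in weights (a real system would ship trained ones):
rng = np.random.default_rng(0)
latent, uv = rng.normal(size=8), np.array([0.25, 0.75])
w1, b1 = rng.normal(size=(16, 10)), np.zeros(16)  # 10 inputs = 8 latent + 2 uv
w2, b2 = rng.normal(size=(3, 16)), np.zeros(3)
print(neural_texture_sample(latent, uv, w1, b1, w2, b2))  # same input -> same RGB
```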
 
DLSS1 was a laughing stock too, but no one remembers that now and the next iteration(s) of DLSS became something that PC gamers worship. DLSS5 is probably going to follow a similar trajectory. Nvidia will polish it or change their approach at its core (like they did with DLSS2) if necessary and shill the new, at least minimally viable version of this tech at full blast so that everyone forgets this fiasco.
This is not the tech's fault, though; this was a sales/marketing fail. The tech works: it can already produce imagery that most people would consider an improvement (at least from a subjective PoV).

The best analogy I can find for the DLSS 5 announcement is a decade of the loudness war condensed into just a few images. Some of the best sound engineers probably worked on those heavily compressed albums too.
 
DLSS1 was a laughing stock too, but no one remembers that now and the next iteration(s) of DLSS became something that PC gamers worship. DLSS5 is probably going to follow a similar trajectory. Nvidia will polish it or change their approach at its core (like they did with DLSS2) if necessary and shill the new, at least minimally viable version of this tech at full blast so that everyone forgets this fiasco.
It's just fundamentally a shit idea. To accurately model lighting and shadows you need information on off-screen objects: where are the lights? What shape and colour are they? Where are the shadow-casting objects? How much light comes from the skylight (diffuse light reflected from clouds/atmosphere) vs direct illumination from the sun? What time of day is it, and at what angle is the sun shining through the atmosphere? How much of an unobstructed view of the sky does this surface have?

A screen space solution like this fundamentally does not have access to that information, which is why all the examples massively flatten the lighting and trash the existing shadows and local lighting in the scenes. It's like how screen-space reflections and screen-space AO were decent approximations on the cheap but had fundamental limitations you couldn't circumvent. This is the same, except it's anything but cheap to compute.
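A toy illustration of that structural limit, assuming a plain G-buffer lookup (the helper is made up, not any real API): any screen-space query can only consult pixels the camera already rendered.

```python
import numpy as np

def sample_screen_space(gbuffer: np.ndarray, x: int, y: int):
    """Fetch shading data at pixel (x, y), or None once the query
    leaves the frame. This is the structural failure mode of SSR and
    SSAO and, by the argument above, of screen-space relighting too:
    an off-screen light or occluder simply yields no information.
    """
    h, w = gbuffer.shape[:2]
    if 0 <= x < w and 0 <= y < h:
        return gbuffer[y, x]
    return None  # off-screen: the data does not exist in this buffer

gbuf = np.zeros((1080, 1920, 4))            # toy G-buffer
print(sample_screen_space(gbuf, 960, 540))  # on-screen: real data
print(sample_screen_space(gbuf, -50, 540))  # light behind the camera: None
```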

Path tracing is the way to actually simulate this stuff: it's expensive, but I'm sure if you throw the power of two 5090s at it you could get some pretty good results. Nvidia already has great R&D into path tracing; they should keep investing there and ditch this stupidity.
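And for contrast, a toy Monte Carlo estimator in the path-tracing spirit (heavily simplified, with no occlusion test or BRDF, purely illustrative): because rays are sampled in world space, off-screen lights and geometry participate naturally.

```python
import numpy as np

rng = np.random.default_rng(0)

def direct_light(point, normal, light_center, light_radius, samples=64):
    """Monte Carlo direct lighting from a spherical area light.

    Sampling happens in world space, so a light that is off-screen or
    behind the camera still contributes -- the thing no screen-space
    method can recover.
    """
    total = 0.0
    for _ in range(samples):
        d = rng.normal(size=3)
        lp = light_center + light_radius * d / np.linalg.norm(d)  # point on light
        wi = lp - point
        dist2 = wi @ wi
        wi = wi / np.sqrt(dist2)
        total += max(normal @ wi, 0.0) / dist2  # cosine term, inverse-square falloff
    return total / samples

# A light placed "behind the camera" still lights the surface point:
print(direct_light(np.zeros(3), np.array([0.0, 1.0, 0.0]),
                   np.array([0.0, 3.0, -5.0]), 0.5))
```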

The only reason I can see for Nvidia to push this approach to lighting is if they know AMD are about to start beating them at ray tracing performance, and they want to move the bottleneck into tensor throughput instead.
 
A screen space solution like this fundamentally does not have access to that information, which is why all the examples massively flatten the lighting and trash the existing shadows and local lighting in the scenes.
Path tracing is the way to actually simulate this stuff: it's expensive, but
Look, you can protest all you want, but at the end of the day this won't change the reality of what's to come.

Raster is the future. /s
 