
Discussion RDNA 5 / UDNA (CDNA Next) speculation

Are we talking about HBM for CDNA 5, which will be out in June this year?
Yeah sir, CDNA 5 this year.
Joe is AMD's vice president.
I'm most curious about AMD's base die process.
The custom HBM teams developed the b-die for Nvidia and will be testing their own c-die;
the process is SF2 or N3P.
 
From what I’ve heard, to stay competitive with Nvidia in data center GPUs, AMD has reallocated many Radeon employees to the data center division to support the rapid update cycle of CDNA and ROCm. Nevertheless, this should not serve as a justification for the delayed progress of RDNA4.
One major upside of next-gen consoles being in preparation is that there is money from Sony and Microsoft. And AMD has much more money overall than during the time the PS5 & Co. were developed. That is a big reason why I assume that RDNA5 will be a decent and quite capable architecture. From what the info, leaks, and rumors suggest, RDNA5's HW capabilities look on par with or even above Blackwell. That is not a bad place to be. With probably two or three especially advanced HW-accelerated things (DGF, work graphs, universal compression). The rest will be software (like e.g. FSR).

Yeah sir, CDNA 5 this year.
Joe is AMD's vice president.
I'm most curious about AMD's base die process.
I would assume TSMC N3P. Or maybe a Samsung node in the case of Samsung HBM? And who knows, if this custom base die from AMD is real, you could add other things besides additional LPDDR5X memory controllers and PHY. You could add some basic math engines, which is not quite Processing-In-Memory (PIM) but Processing-Near-Memory. The AXDIMM-PIM that was also showcased is close to that concept (processing does not happen on the memory die itself but on a buffer die). Samsung showed off its PIM tech together with AMD chips (Alveo FPGA & MI100 prototypes), and Samsung has been researching the PIM topic together with AMD since at least 2020.

Neat:
Those additional math accelerators on the HBM base die would be available for HBM as well as LPDDR5X memory.
Regarding LPDDR5X, I'm thinking of things like the prefill stage of LLM inference, where Nvidia uses Rubin CPX as an additional accelerator.

But in general, the LPDDR5X controllers and PHY could also be located in the GPU base die.
 
But in general, the LPDDR5X controllers and PHY could also be located in the GPU base die.
So what I came up with was an HBM + LPDDR5X combo controller. Since the clocks are similar, I think synchronization would be easy.
I don't know how the base die gets to Samsung and is manufactured into HBM; NDAs are the problem. I suspect Micron is the first vendor. Hynix doesn't have LPDDR5X like the 440x or 430x.
The first product from the official custom team doesn't seem to be for AMD, and they tend to ignore AMD.
 
One major benefit of next-gen console preparation is the funding from Sony and Microsoft. AMD also has much more money overall than during the era when the PS5 and similar were developed. That is a big reason why I think RDNA5 will be a decent and quite capable architecture. Based on info, leaks, and rumors, RDNA5's hardware capabilities look on par with or even above Blackwell. That is not a bad place to be. There are probably two or three especially advanced hardware-accelerated features (DGF, work graphs, universal compression). The rest will be software (such as FSR).
Aside from the lack of support for SER, OMMs, and DMM, RDNA4 is architecturally already very close to Ada/Blackwell. The next step will be closing the gap in ray tracing performance and machine learning capabilities. The next-gen console with large VRAM is also preparation for neural rendering and full PT. As for Rubin (GR20x), we don't yet know the extent of the architectural adjustments NVIDIA will make, though many of Rubin's new features will likely be implemented in the upcoming The Witcher 4.
 
Big RDNA5 might be at least 30% faster than a 5090, and maybe even 50% faster, unless there are architectural issues.


The above is an ML transcript of some YouTuber's video.

My guess is that a cut-down AT0 will be approx. 6x a PS5 Pro (assuming sufficient power supply & cooling).
 
Big RDNA5 might be at least 30% faster than a 5090, and maybe even 50% faster, unless there are architectural issues.


The above is an ML transcript of some YouTuber's video.

My guess is that a cut-down AT0 will be approx. 6x a PS5 Pro (assuming sufficient power supply & cooling).
50% would be like a dream; I hope it comes true.
 
+50% faster than a 5090 would be roughly 2.7x a 9070 XT. With 3x the WGP count (32 vs. 96), why not?

Yes, if there are issues with clock rates and power efficiency (RDNA3 ahoy) or CU-count scaling (5090 ahoy), then it will not happen. But if everything goes well, such a performance target is not that unreasonable.
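For what it's worth, the arithmetic behind that claim holds up as a back-of-envelope sketch. The ~1.8x 5090-vs-9070 XT raster ratio below is an assumption based on typical review aggregates, not a confirmed figure, and the 96-WGP count is from leaks:

```python
# Back-of-envelope check of the "+50% over a 5090 ≈ 2.7x a 9070 XT" claim.
gb202_vs_navi48 = 1.8          # assumed: RTX 5090 raster perf relative to 9070 XT
target_vs_5090 = 1.5           # the "+50% faster than a 5090" scenario

target_vs_navi48 = target_vs_5090 * gb202_vs_navi48
print(f"Target vs 9070 XT: {target_vs_navi48:.2f}x")            # ~2.70x

wgp_ratio = 96 / 32            # rumored big-RDNA5 WGPs vs Navi 48's 32
scaling_eff = target_vs_navi48 / wgp_ratio
print(f"Implied per-WGP scaling efficiency: {scaling_eff:.0%}")  # ~90%
```

So hitting that target would require roughly 90% scaling efficiency per WGP at iso-clocks, which is exactly where the "5090 ahoy" CU-scaling caveat bites.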

From the article:
In tests and simulations not made public, it boasts 10% higher IPC and rasterization performance than RDNA4, which is already fast per CU. It also has more than twice the ray tracing capability per CU.
Never heard of such specific numbers about RDNA5. Leak or just speculation?
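Taking those per-CU numbers at face value, here is what they would naively imply for a rumored 96-WGP part versus Navi 48's 32 WGPs. All inputs are speculation from this thread, and the compounding ignores clocks, bandwidth, and scaling losses, so these are ceilings rather than predictions:

```python
# Naive compounding of the claimed per-CU gains for a hypothetical 96-WGP part
# vs Navi 48 (32 WGPs). Every input here is thread speculation, not a spec.
wgp_ratio = 96 / 32
ipc_gain = 1.10      # claimed: 10% higher IPC/rasterization per CU
rt_gain = 2.0        # claimed: >2x ray tracing capability per CU

raster_ceiling = wgp_ratio * ipc_gain   # ignores clocks and bandwidth limits
rt_ceiling = wgp_ratio * rt_gain
print(f"Naive raster ceiling: {raster_ceiling:.1f}x")   # 3.3x
print(f"Naive RT ceiling: {rt_ceiling:.1f}x")           # 6.0x
```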
 
Big RDNA5 might be at least 30% faster than a 5090, and maybe even 50% faster, unless there are architectural issues.
Sounds like a wet dream; just for the record:

Fiji "overclocker's dream"
Vega "poor Volta"
RDNA3/4 "chiplet GPU king"
etc.

The only real case where AMD got really close to Nvidia performance-wise was RDNA2, with a big 520 mm² chip.
 
There are reports of Sony contemplating delaying the PS6 to 2028 or even 2029.

2028 seems like a no-brainer given memory prices, and in any case they will benefit from having as large a PS5 installed base as possible.

Now, N3 was released in 2023, so how silly would it be to use an N3 design in 2029 for a console meant to last 7+ years? That's Switch 1 territory rather than next gen.

It would be really pathetic if they don't bite the bullet and make a new design for N2.
 
From the article:
Never heard of such specific numbers about RDNA5. Leak or just speculation?
This site does (unauthorised) ML translations of YouTube videos, so you need to see the original video to check if there are more links/sources.
 
I'm more interested in how fast AT2 would be. Is 35%+ (for raster) vs the 9070 XT too much to hope for, given the lacklustre increase in CU count?
 
The only real case where AMD got really close to Nvidia performance-wise was RDNA2, with a big 520 mm² chip.
If the leaks are true, AT0 will be the first graphics die AMD has built that is anywhere near the reticle limit in I don't even know how many generations. That's the entirety of the basis for the performance expectations.

Sadly, the same leaks point to a full AT0 probably not being available as a consumer GPU.
 
I'm more interested in how fast AT2 would be. Is 35%+ (for raster) vs the 9070 XT too much to hope for, given the lacklustre increase in CU count?

... I'd honestly expect raster gains to be fairly muted, and most of the effort to have gone to the RT + AI side.
 