> 1/3/5/7 (every odd tock) is like that.
That honestly doesn't bode too well, because so far, only the even ones (2 and 4) were actually good.
> That honestly doesn't bode too well, because so far, only the even ones (2 and 4) were actually good.
I love superstitions, but they just have a roadmap and crank it.
> Also RDNA4 sucks balls for obvious reasons.
You mean the lack of halo, or is there something wrong with the uArch itself?
> You mean the lack of halo, or is there something wrong with the uArch itself?
A bit of both.
> RDNA3 vs. Ada looked FAR worse.
Only in mobile, and 3.5 fixed that anyway.
> They nearly caught up on a range of issues (V/f, PPW, RT, PT, AI/FSR, mem bw efficiency), and pretty much are slightly ahead on raster IPC for CU vs. SM.
It is indeed a very good shader core; they really haven't shipped any bad ones anyway.


> a significant enhancement to reduce memory bandwidth needs by compressing virtually all graphics data, not just textures, enabling faster performance, lower memory requirements for 4K/higher gaming, and boosting upscaling tech like FSR. It works by applying compression across the entire GPU pipeline, leveraging increased silicon speed to overcome traditional performance costs, thereby increasing effective memory bandwidth and improving efficiency for next-gen gaming experiences.
Double SP Per CU + Universal Compression, HOHO 😎
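Rough back-of-envelope only, with invented numbers: the "effective memory bandwidth" claim is easiest to see as physical bandwidth multiplied by the average compression ratio over all of a frame's traffic, which is why compressing everything rather than just textures matters.

```cpp
// Back-of-envelope sketch with placeholder numbers (640 GB/s, 1.15x, 1.5x are
// assumptions, not leaked specs): effective bandwidth is just physical
// bandwidth times the average compression ratio over all memory traffic.
#include <cstdio>

int main() {
    const double raw_bw_gbs    = 640.0; // hypothetical physical GDDR bandwidth
    const double textures_only = 1.15;  // assumed average ratio if only textures compress
    const double universal     = 1.50;  // assumed average ratio if nearly everything compresses
    std::printf("effective BW, textures only: %.0f GB/s\n", raw_bw_gbs * textures_only);
    std::printf("effective BW, universal:     %.0f GB/s\n", raw_bw_gbs * universal);
    return 0;
}
```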
> the Radiance core will be 3 times faster per core
Doing exactly what - RT?
> Yep, new term for RT cores.
What makes you think they will be 3x faster per core? Is that about the catch-up they need to hit Nvidia levels of RT?
What? Where's that coming from?
X3D parts need massively lower power to stay cool enough even under meh-ish cooling solutions, and iirc L3/VCache runs at the clock of the fastest core, so they limited PT and turbo clocks vs. the vanilla models.
Pretty sure that in terms of low leakage, the 5800X3D, 7800X3D and 9800X3D are better bins than most 5700X, 7700X and 9700X, respectively.
The 9850X3D has to be a better bin of course; it can probably just hit higher clocks at the same voltages, but like adroc said, they had 1.5 years of yield improvements and time to accumulate top-bin chips for this SKU.

> GFX1310
Probably not enough info yet to zero in on the exact product.
On XNACK:
Supported Hardware
Not all GPUs are supported. Most GFX9 GPUs from the GCN series support XNACK, but only APU platforms enable it by default. On dedicated graphics cards, it's disabled by the Linux amdgpu kernel driver, possibly due to stability concerns as it's still an experimental feature.
For users of GFX10/GFX11 GPUs from the RDNA series, unfortunately, XNACK is no longer supported. Only compute cards from the CDNA series have XNACK support, such as the Instinct MI100 and MI200 - and they also belong to the GFX900 series.
Thus, use of Unified Shared Memory, which is the recommended practice and heavily used in SYCL programming, suffers a serious hit. By not supporting XNACK on consumer-grade desktop GPUs, AMD has essentially made a core feature of SYCL almost useless, forcing it to be an exclusive feature for datacenter users running CDNA cards with a price tag of $5000. This is unfortunate, but it is something that developers who want to write cross-platform GPU code need to live with (for the highest performance, you may want to use manual data movement anyway, so it's not all a loss; more on that later).
What is XNACK on AMD GPUs, and How to Enable the Feature
On AMD GPUs, the XNACK feature is essential for running HIP code with Managed Memory. But what XNACK does, or how it can be enabled, is poorly documented. I believe this article is the only comprehensive guide on the entire Web. (niconiconi.neocities.org)
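For context on what the article means by Managed Memory, here is a minimal HIP sketch; it assumes a ROCm setup where the GPU target actually reports xnack+ and the process is launched with HSA_XNACK=1 (the kernel and sizes are made up for illustration).

```cpp
// Minimal HIP managed-memory sketch. Assumes an xnack+ capable GPU and that the
// process was started with HSA_XNACK=1; on consumer RDNA cards the on-demand
// page-migration path described in the article is not available.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int dev = 0, managed = 0;
    hipGetDevice(&dev);
    // Ask the runtime whether managed memory is supported at all on this device.
    hipDeviceGetAttribute(&managed, hipDeviceAttributeManagedMemory, dev);
    std::printf("managed memory supported: %d\n", managed);

    const int n = 1 << 20;
    float* data = nullptr;
    // One allocation visible to both CPU and GPU; with XNACK, pages migrate on
    // demand instead of needing explicit hipMemcpy calls.
    hipMallocManaged(reinterpret_cast<void**>(&data), n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // first touched on the CPU

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // faults pages over to the GPU
    hipDeviceSynchronize();

    std::printf("data[0] = %f\n", data[0]);          // faults back to the CPU
    hipFree(data);
    return 0;
}
```

Without XNACK the allocation can still succeed, but the fine-grained page-fault migration is what goes away, which is the hit to SYCL Unified Shared Memory the article describes.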
> Probably not enough info yet to zero in on the exact product.
It's AT0 or AT2.
> It's AT0 or AT2.
Makes sense, yeah; there has been no consistent pattern, so it could be either.
gfx1350 = CDNA6
gfx1300/1301 = Orion/Canis
> More gfx13 in LLVM: https://github.com/llvm/llvm-project/pulls?q=gfx13+
I understand LLVM releases are once every 6 months, so if AMD misses this deadline then the next one is 6 months away. Is that correct?
No idea what any of it means except it looks like all the ISA stuff for now is just a placeholder.
> Very interesting AMD/Xilinx patent: https://patentscope.wipo.int/search/en/detail.jsf?docId=US471590844
> Is this in RDNA5?
No.
> Very interesting AMD/Xilinx patent: https://patentscope.wipo.int/search/en/detail.jsf?docId=US471590844
> Is this in RDNA5?
Not sure if Xilinx stuff comes to ROCm or Instinct/Radeon stuff.
> RGT on L1 cache pooling/sharing & reduced L2 cache & lack of infinity cache
LoL, every Radeon generation was supposed to be Nvidia's nightmare 😀 🙄
> RGT on L1 cache pooling/sharing & reduced L2 cache & lack of infinity cache
youtube comment
> Not sure if Xilinx stuff comes to ROCm or Instinct/Radeon stuff.
This has nothing to do with ROCm, and the patent has broad applicability. It's the ML data format architecture to rule them all: no more using only FP8 or some other single format for training and inference. Mix and match everything, with metadata-guided, content-adaptive data arrays combined with upcast circuitry that can convert the data into a different or higher-precision format as needed, depending on the workload.
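Not AMD's actual design, just a toy C++ sketch of the general idea: per-block metadata tags record how each chunk of an array is stored, and an upcast step converts whatever a block holds into a common higher-precision type before compute, so formats can be mixed within one tensor. The names, layout, and the FP8 E4M3 choice are all illustrative assumptions.

```cpp
// Toy sketch of metadata-guided, mixed-format storage with upcast-on-read.
// Not AMD's design; formats, names, and layout are invented for illustration.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

enum class Fmt : uint8_t { FP8_E4M3, FP32 };  // hypothetical per-block format tag

struct Block {
    Fmt fmt;                      // metadata: how this block's values are stored
    std::vector<uint8_t> bytes;   // packed payload in that format
};

// Decode one FP8 E4M3 value (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits).
static float fp8_e4m3_to_float(uint8_t v) {
    const float sign = (v & 0x80) ? -1.0f : 1.0f;
    const int e = (v >> 3) & 0xF;
    const int m = v & 0x7;
    if (e == 0xF && m == 0x7) return NAN;                // E4M3 has NaN but no infinity
    if (e == 0) return sign * std::ldexp(m / 8.0f, -6);  // subnormal
    return sign * std::ldexp(1.0f + m / 8.0f, e - 7);    // normal
}

// "Upcast circuitry" in software: consult the block's metadata and return
// element i as FP32 so downstream compute sees one common precision.
static float upcast(const Block& b, size_t i) {
    if (b.fmt == Fmt::FP8_E4M3) return fp8_e4m3_to_float(b.bytes[i]);
    float f;
    std::memcpy(&f, b.bytes.data() + 4 * i, sizeof f);   // FP32 stored as raw bytes
    return f;
}

int main() {
    Block low{Fmt::FP8_E4M3, {0x38, 0x40, 0x42}};        // 1.0, 2.0, 2.5 in E4M3
    Block high{Fmt::FP32, std::vector<uint8_t>(3 * sizeof(float))};
    const float vals[3] = {0.5f, 1.5f, 4.0f};
    std::memcpy(high.bytes.data(), vals, sizeof vals);

    float sum = 0.0f;                                    // mix formats in one reduction
    for (size_t i = 0; i < 3; ++i) sum += upcast(low, i) + upcast(high, i);
    std::printf("sum = %f\n", sum);                      // expected 11.5
    return 0;
}
```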
What AMD has said is that they can do the equivalent of Strix Halo on Xilinx, if needed. This was in response to competition from Qualcomm & the like on edge inference.