
Question Speculation: RDNA3 + CDNA2 Architectures Thread

240 CUs seems extremely unlikely, given AMD probably had a good idea of the fab capacity constraints coming before Zen 3 and RDNA2 were announced last October.

Much more likely is a 10-13% clock and 10-13% IPC (FPS per MHz per CU) boost for each GCD chiplet.

Multiplied across 2x the CUs, that gives more or less a 2.5x performance increase over the RDNA2 flagship GFX card.

Obviously this is not counting overheads, and they may be comparing using a favourable game engine.

I would not be at all surprised to see RT performance gain by more than 25% per CU, though. Given how early we are in the RT HW saga, and how much low-hanging fruit is likely left to be picked with a base µArch to build on, it seems guaranteed that 2.5x would be conservative on that score if they can manage so much with raster gfx.
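The back-of-envelope math above can be sketched as follows (a quick sketch only: the 1.10-1.13x clock/IPC factors and the 2x CU count from two GCDs are the post's speculative assumptions, and perfect scaling is assumed):

```python
# Speculative scaling estimate from the post above (perfect scaling assumed).
# Assumptions: 10-13% clock gain, 10-13% IPC gain, 2x CUs via two GCD chiplets.
for gain in (1.10, 1.13):
    speedup = gain * gain * 2.0   # clock * IPC * CU ratio
    print(f"{gain:.2f}x clock * {gain:.2f}x IPC * 2x CUs = {speedup:.2f}x")
```

The low end gives ~2.42x and the high end ~2.55x, which is where the "more or less 2.5x" figure comes from.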
The problem is you don't get perfect scaling when increasing CUs and clocks. We can see this by comparing the 5700 XT and 6900 XT: the 6900 XT is only about 2x faster despite 2x the CUs and 1.18x higher clocks.

A 1.13x IPC and 1.13x clock gain would land in the 2.1x area for average performance increase, so to me 240 CUs makes more sense.
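A quick sketch of that imperfect-scaling argument, deriving an assumed scaling efficiency from the 5700 XT → 6900 XT numbers in the post (2x CUs, 1.18x clocks, ~2x observed):

```python
# Derive a rough scaling efficiency from the 5700 XT -> 6900 XT comparison
# (2x CUs and 1.18x clocks, but only ~2x observed performance).
theoretical = 2.0 * 1.18              # 2.36x if scaling were perfect
observed = 2.0
efficiency = observed / theoretical   # ~0.85

# Apply the same efficiency to a speculative 2-GCD RDNA3 part
# with the assumed 1.13x IPC and 1.13x clock gains:
speedup = 1.13 * 1.13 * 2.0 * efficiency
print(f"efficiency ~{efficiency:.2f}, estimated speedup ~{speedup:.2f}x")
```

That lands at roughly 2.16x, i.e. the "2.1x area", which is why more CUs would be needed to reach a 2.5x target.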


A 240 CU RDNA3 GPU would be significantly larger than a 240 CU CDNA GPU, and there's no chance AMD would sell a larger die for a lower price to gamers versus selling the insanely high-profit-margin datacenter GPUs.

So far I haven't seen Navi32 anywhere so I don't know what it is.
You need brand to make money, but I still don't see what the MI200 has to do with RDNA3. AMD is selling gaming cards right now despite having higher margins on the MI100.

I get 800-1000mm2 for a 240 CU monolithic die using 0.6-0.7x scaling for 5nm; one chiplet should be around 300mm2.
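As a sanity check on those numbers, here is the arithmetic with an assumed ~520mm2 Navi 21 (80 CU, 7nm) as the baseline; note that part of Navi 21 is Infinity Cache/IO that would not necessarily triple, so this slightly overestimates:

```python
# Rough area estimate for a hypothetical 240 CU monolithic die on 5nm.
# Assumptions: Navi 21 at ~520 mm^2 for 80 CUs, and the 0.6-0.7x area
# scaling factor for 5nm quoted in the post.
navi21_mm2 = 520.0
cu_ratio = 240 / 80
for scale in (0.6, 0.7):
    print(f"{scale}x scaling: ~{navi21_mm2 * cu_ratio * scale:.0f} mm^2")
```

That comes out to roughly 940-1090 mm^2, in the same ballpark as the 800-1000mm2 estimate, and a third of the low end lands near the ~300mm2 per-chiplet figure.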
 
I seriously doubt they would do that. Implementing the same RDNA 3 IP in two incompatible designs seems like a lot of extra effort for questionable gain: time, money, manpower. Especially since, after Rembrandt and Raphael, the low-end GPU market might be completely decimated.
Somehow I missed this.

To explain my reasoning, first we need to be on the same page. So, to try and get my point across, I'd like you to think about why Polaris never really helped AMD gain significant market share on the desktop, despite being much better value in its later years (GTX 1060 vs RX 580).

Your hint is: you can draw parallels to the current desktop market as well.
 
Somehow I missed this.

To explain my reasoning, first we need to be on the same page. So, to try and get my point across, I'd like you to think about why Polaris never really helped AMD gain significant market share on the desktop, despite being much better value in its later years (GTX 1060 vs RX 580).

Your hint is: you can draw parallels to the current desktop market as well.
Are you excluding the terrible initial RX 580 drivers (initial impressions have a big impact)? How about the lame marketing by AMD compared to NV? AMD has made good gains on the former and small gains on the latter, IMHO.
 
You guys are assuming the CUs are getting larger along with the CU count. I am thinking they simplify the design and optimize it for high clocks. If you read through the driver source code for RDNA2, you can see that they were heading in that direction, and there are some other indications that this will continue. Don't be surprised if game clocks and boost clocks exceed 3.5-3.75 GHz next gen.

A part of me also wants to question a 40 CU chiplet design. It seems like it would be more cost-effective to go with 20 CUs. Going with smaller chiplets can help with cooling as well… which also helps with higher clocks.
 
Are you excluding the terrible initial RX 580 drivers (initial impressions have a big impact)? How about the lame marketing by AMD compared to NV? AMD has made good gains on the former and small gains on the latter, IMHO.

I did say a while after launch. Polaris really shone in value shortly before, and for a long time after, Turing. By that point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
 
I did say a while after launch. Polaris really shone in value shortly before, and for a long time after, Turing. By that point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
Well, the mining craze hit again around that time frame. I know I paid 15% above MSRP for my 1070 'on sale' (haha) after waiting a few months to find one. IIRC, the hash rate was higher on the 580.
 
The problem is you don't get perfect scaling when increasing CUs and clocks. We can see this by comparing the 5700 XT and 6900 XT: the 6900 XT is only about 2x faster despite 2x the CUs and 1.18x higher clocks.

A 1.13x IPC and 1.13x clock gain would land in the 2.1x area for average performance increase, so to me 240 CUs makes more sense.
I was talking average performance over RDNA2 when I made those points, not absolute.

As for RDNA2 currently being slower per clock/CU than RDNA1, this is not greatly surprising: RDNA2 is much more recent, and AMD has tended to lag a bit on optimising their drivers properly for new µArchs, at least as far back as the early GCN era.

Give it another six months before making such judgements.
I get 800-1000mm2 for a 240 CU monolithic die using 0.6-0.7x scaling for 5nm; one chiplet should be around 300mm2.
Is each chiplet 80 CUs in this scenario?

If so, chiplets or not, that is still a huge ~900mm2 of silicon at a state-of-the-art fab node, even without the IO/cache die - yield improvements from smaller compute dies only decrease the price so much.

This is beyond even Threadripper 3 at 7nm - which is what, 600mm2 minus the 12nm IOD on the 3990X?

Such a 3 GCD SKU would be insanely expensive.

Likely eclipsing the 3990X launch price by at least $1000.

This is not a viable market strategy considering how much people are complaining about high end GPU costs already.
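The "yield improvements only decrease the price so much" point can be illustrated with a simple Poisson yield model, yield = exp(-D*A); the defect density used here is purely an assumed, illustrative number:

```python
import math

# Poisson yield model: fraction of defect-free dies = exp(-D * A).
D = 0.001           # assumed defect density, defects per mm^2 (~0.1/cm^2)
monolithic = 900.0  # mm^2, the hypothetical 3-GCD total as one die
chiplet = 300.0     # mm^2 per compute die

y_mono = math.exp(-D * monolithic)
y_chip = math.exp(-D * chiplet)
print(f"900 mm^2 monolithic yield: {y_mono:.1%}")  # ~40.7%
print(f"300 mm^2 chiplet yield:    {y_chip:.1%}")  # ~74.1%
```

Chiplets roughly double the fraction of good silicon under these assumptions, but the wafer cost of ~900mm2 of leading-edge silicon per card, plus packaging, remains; hence yield alone only decreases the price so much.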
 
I did say a while after launch. Polaris really shone in value shortly before, and for a long time after, Turing. By that point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order, but with a pretty hefty markup - however, nowhere near as high as Nvidia GPUs have.

Like, I mean, seriously, you only have to pay 50€ more to jump from an RTX 3060 to an RX 6700 XT, and you don't have to wait a few weeks for it!
 
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order, but with a pretty hefty markup - however, nowhere near as high as Nvidia GPUs have.

Like, I mean, seriously, you only have to pay 50€ more to jump from an RTX 3060 to an RX 6700 XT, and you don't have to wait a few weeks for it!
It's the exact same here.
The 6700 XT used to be around 1150€ at release and has gone down to 1000€, widely available. From Nvidia you maybe have a couple of 3060s for 950€ and nothing above that, except a few 3090s for 3000€ every now and then.
 
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order,

Can't say the same. In fact, the pure opposite. The biggest shop will open orders for some GPU models tomorrow. I looked at the prices. Ridiculous. I'm banging my head that I didn't get a 3060 Ti last December. 6700 XTs are listed for >$900, the RTX 3060 for >$600, and the RTX 2060, sic, for $400.
 
Polaris really shone in value shortly before, and for a long time after, Turing. By that point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.

Well, because it wasn't a lot better than, say, a 290(X) in terms of pure performance - for sure not an upgrade route for many previous "high end" owners, where high end meant a $250-$300 price point. Hence the 580 was often worse in terms of performance/$ due to the 290X fire sales in the year(s) before; the only thing going for it was the reduced power use. The issue is the RX 580 looks good now because of the current insane pricing, but when it launched it was mostly "meh" in performance/$. Either you had something a lot worse than a 290(X) and then you went with a 570, or you went with something better than a 580 (= Nvidia).

I did neither, due to what I thought were terrible offers. Well, I look like a fool now, and I'm just hoping my 290X will last at least another year; otherwise I'm looking at paying more for the same performance than I did 6 years ago. Think about that.
 
Well, because it wasn't a lot better than, say, a 290(X) in terms of pure performance - for sure not an upgrade route for many previous "high end" owners, where high end meant a $250-$300 price point. Hence the 580 was often worse in terms of performance/$ due to the 290X fire sales in the year(s) before; the only thing going for it was the reduced power use. The issue is the RX 580 looks good now because of the current insane pricing, but when it launched it was mostly "meh" in performance/$. Either you had something a lot worse than a 290(X) and then you went with a 570, or you went with something better than a 580 (= Nvidia).

I did neither, due to what I thought were terrible offers. Well, I look like a fool now, and I'm just hoping my 290X will last at least another year; otherwise I'm looking at paying more for the same performance than I did 6 years ago. Think about that.

The 290X was $549 at launch. The RX 480 launched at $200 for 4GB and $239 for 8GB. Polaris was never marketed as a high-end card, but it was a rock-solid mid-range card. And when it came to performance per watt, it blew the 290X/390X out of the water.

Yes, the 290X/390X did drop in price after the mining crash, but those cheap used cards should not be considered when comparing prices of a new product.
 
I was talking average performance over RDNA2 when I made those points, not absolute.

As for RDNA2 currently being slower per clock/CU than RDNA1, this is not greatly surprising: RDNA2 is much more recent, and AMD has tended to lag a bit on optimising their drivers properly for new µArchs, at least as far back as the early GCN era.

Give it another six months before making such judgements.

Is each chiplet 80 CUs in this scenario?

If so, chiplets or not, that is still a huge ~900mm2 of silicon at a state-of-the-art fab node, even without the IO/cache die - yield improvements from smaller compute dies only decrease the price so much.

This is beyond even Threadripper 3 at 7nm - which is what, 600mm2 minus the 12nm IOD on the 3990X?

Such a 3 GCD SKU would be insanely expensive.

Likely eclipsing the 3990X launch price by at least $1000.

This is not a viable market strategy considering how much people are complaining about high end GPU costs already.
It might not matter; ek2121 is suggesting we could see very high clocks for RDNA3, and then it's possible to hit that +2.5x performance target.

I'm still at 80 CUs per chiplet, despite there being some speculation about small chiplets like 8x20 or 4x40 CU for Navi 31, because of the macOS leak.

Gamers are willing to pay over $2000 for a GPU; we're seeing it now 🙂
 
As long as we still buy them, we can complain all what we want. 😛
True, but the question is: will the number of people willing to shell out for such an insane SKU justify reducing the number of chiplets available for lower-priced SKUs that will attract far more buyers?
 
The 290X was $549 at launch. The RX 480 launched at $200 for 4GB and $239 for 8GB. Polaris was never marketed as a high-end card, but it was a rock-solid mid-range card. And when it came to performance per watt, it blew the 290X/390X out of the water.

Yes, the 290X/390X did drop in price after the mining crash, but those cheap used cards should not be considered when comparing prices of a new product.

It kind of makes you wonder why AMD didn't keep Polaris alive. Maybe push it to TSMC 10nm (or they could've just left it on 14nm). They could have released it as a low-end Radeon 6300 or something, and added 8GB to it. It might've helped their capacity issues.
 
Can't say the same. In fact, the pure opposite. The biggest shop will open orders for some GPU models tomorrow. I looked at the prices. Ridiculous. I'm banging my head that I didn't get a 3060 Ti last December. 6700 XTs are listed for >$900, the RTX 3060 for >$600, and the RTX 2060, sic, for $400.
I did buy a 3060 Ti when it was available immediately after it launched. I didn't think 559€ was good value (at least this MSI Gaming X Trio is cool and quiet), but considering how much this thing costs now and how bad the availability is, it was the deal of the century.

I have noticed that Radeons have better availability. Or, to be more specific: people do not want to buy them. I have my reasons why I would prefer the green team, but that doesn't necessarily apply to everyone, so I do wonder...
 
I have noticed that Radeons have better availability. Or, to be more specific: people do not want to buy them. I have my reasons why I would prefer the green team, but that doesn't necessarily apply to everyone, so I do wonder...
Not necessarily.

It can just be that far more people than usual are still cash-strapped at this point in a release cycle, and those that do want them and have the cash may simply have given up waiting and looking for supply.

As someone who has watched stock market prices go up and down I can sympathize with the monotony of the waiting game.
 
I have noticed that Radeons have better availability. Or, to be more specific: people do not want to buy them. I have my reasons why I would prefer the green team, but that doesn't necessarily apply to everyone, so I do wonder...
How do you figure? I went to both NewEgg and Amazon to look for available stock of 3080/90 and 6800/6900 cards sold by a first party at or around MSRP, not marketplace sellers... nothing from either vendor. NewEgg did have two Radeon models in stock, a 6800 XT for double MSRP and a 6900 XT Red Devil for $2619. I don't call that better availability.

In any case, pretty much everything is available right now if you're willing to pay enough for it. Better mining performance is going to cause the Nvidia models to be snapped up at a higher price than the Radeons, and no one can deny that fact, but that's not by gamers. People buying for games are pretty much still snapping up whichever card they can get at the price point they are willing to pay.

I know I wasn't brand-particular this go-around; I would have jumped on a 3090, 3080 or 6800, any would do. In fact, my kick-myself moment was back when, just after the 3090 was released, NewEgg had the Gigabyte Aorus 3090 eGPU external gaming box available for MSRP briefly and I passed on it, thinking "I'll just wait for a normal 3090"... big mistake.
 
How do you figure? I went to both NewEgg and Amazon to look for available stock of 3080/90 and 6800/6900 cards sold by a first party at or around MSRP, not marketplace sellers... nothing from either vendor. NewEgg did have two Radeon models in stock, a 6800 XT for double MSRP and a 6900 XT Red Devil for $2619. I don't call that better availability.

In any case, pretty much everything is available right now if you're willing to pay enough for it. Better mining performance is going to cause the Nvidia models to be snapped up at a higher price than the Radeons, and no one can deny that fact, but that's not by gamers. People buying for games are pretty much still snapping up whichever card they can get at the price point they are willing to pay.

I know I wasn't brand-particular this go-around; I would have jumped on a 3090, 3080 or 6800, any would do. In fact, my kick-myself moment was back when, just after the 3090 was released, NewEgg had the Gigabyte Aorus 3090 eGPU external gaming box available for MSRP briefly and I passed on it, thinking "I'll just wait for a normal 3090"... big mistake.
Well, that's the situation here in Finland.
 
With the Computex announcements of AMD using stacked SRAM on their CPUs, I'm wondering what implications this has for Infinity Cache in future AMD GPUs. It seems like they'll be able to scale the size of Infinity Cache considerably, or even utilize stacking to produce chips with a smaller die size without sacrificing cache size.
 
With the Computex announcements of AMD using stacked SRAM on their CPUs, I'm wondering what implications this has for Infinity Cache in future AMD GPUs. It seems like they'll be able to scale the size of Infinity Cache considerably, or even utilize stacking to produce chips with a smaller die size without sacrificing cache size.
The stacked cache was designed to cover only the lowest-power part of the CPU chiplet die, and not by accident. I don't think those copper vias can handle the thermals from logic loads; the total area of the copper vias must be only a few mm^2. How do you dissipate 100+ W through that at a low temperature delta?
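A back-of-envelope version of that concern (all numbers are assumptions: 100 W of logic load through "a few" mm^2 of via area):

```python
# Power density if ~100 W of logic load had to pass through a small
# via/contact area. Both numbers are assumptions taken from the post above.
power_w = 100.0
via_area_mm2 = 4.0   # "a few mm^2"
density = power_w / via_area_mm2
print(f"{density:.0f} W/mm^2")
# For comparison (pure arithmetic): a 100 W die of 100 mm^2 averages
# 1 W/mm^2, so this is ~25x that average power density.
```
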
 