
Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

9850X3D Reviewed:

“3% more performance, 30% more power”

Let’s hope the 9950X3D2 performs better.
Just watched the GN review as well. The new part was not interesting, but I notice that Zen 4 vs Zen 5 (both vanilla) is still highly confusing: Zen 5 outperforms on average, but Zen 4 has better minimum frame rates in some games and is faster in some synthetic tests. I thought we'd have more conclusive answers by now and that Zen 5 would age better. Maybe it will in another 2 or 4 years with more software updates...
 
Maybe it will in another 2 or 4 years with more software updates...
What software change, other than compiling specifically for znver5 (which is never happening on Windows), do you suppose would benefit Zen 5 more than Zen 4?

Whenever Windows schedules on the wrong CCD, it'll hurt both. Zen 5 is going to stay where it has been except for a few initial bugs that were solved by the time the 9800X3D launched.
 
So you think 3% perf improvement for 30% more power and higher price is fine/good? Not sure if serious.

Unless you were born yesterday, you would know that the last 1% of performance is the most costly in terms of dollars and watts. Except in this case the difference in price is trivial: 4%. So not costly at all.

I don't think there has ever been such a great deal that you would only pay 4% for the last 3% of performance. Typically, it is far more costly.

But it is OK if you want to pass on the last percent of performance. In that case, the 9800X3D is there to fill all your needs.
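To put rough numbers on the argument above, here's a quick back-of-the-envelope sketch. Only the relative deltas (3% perf, 30% power, 4% price) come from the thread; the baseline of 100 units for each is an arbitrary placeholder, not a real SKU spec:

```python
# Marginal cost of the last few percent of performance.
# Baselines are arbitrary (100 units each); only the deltas
# (3% perf, 30% power, 4% price) come from the reviews discussed above.
base_price, base_power, base_perf = 100.0, 100.0, 100.0

new_price = base_price * 1.04   # 4% more expensive
new_power = base_power * 1.30   # 30% more power
new_perf  = base_perf  * 1.03   # 3% faster

# Dollars (in baseline units) per percentage point of extra performance
marginal_price = (new_price - base_price) / (new_perf - base_perf)
# Watts (in baseline units) per percentage point of extra performance
marginal_power = (new_power - base_power) / (new_perf - base_perf)

print(f"price per extra perf point: {marginal_price:.2f}")   # ~1.33
print(f"power per extra perf point: {marginal_power:.2f}")   # 10.00
# Perf-per-watt actually drops relative to the baseline part:
print(f"perf/W vs baseline: {new_perf / new_power:.3f}")     # ~0.792
```

So the price side really is cheap (about 1.33 price units per perf point), while the power side is where the last 3% gets expensive: 10 power units per perf point, and perf-per-watt falls to about 79% of the baseline chip.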
 
So you think 3% perf improvement for 30% more power and higher price is fine/good? Not sure if serious.
Yes, but let's remember the context: this is a new bin/SKU, not a new part. Remember back around 2002, when AMD and Intel would release new bins almost every three months? This is just that.

People only pretend to care about power on desktop when they are getting beaten in T1.
 
A straight port of Strix Point to N3P would have netted next to nothing of importance. They would have gotten maybe another 200 MHz of clock speed over Gorgon Point with the exact same logic. The RDNA 3.5 iGPU might have gained 5% peak clock over Gorgon Point. The die would have shrunk by, what, 10%, for a net increase in per-chip cost. Fully loaded power draw would have maybe gone down a few percent. It wouldn't be worth it without substantive changes that would require a new floorplan, and once you start to mess with the floorplan, costs go up dramatically.

If they had REALLY wanted, they could have done an optimized shrink that made the minor logic changes to get the best performance from the new node, as well as changing the floorplan to accommodate a 16 MB MALL cache. Doing a MALL cache like that, gaining 400 MHz of iGPU clock speed, and supporting a couple of bins higher in LPDDR5X RAM would have gone a LONG way toward bringing the iGPU up to Panther Lake performance levels. Maybe not beat it, but the difference wouldn't be compelling in any way. The extra few hundred MHz of ST boost speed would have kept it ahead of PTL in most every benchmark, and the MT performance would see a notable uplift. But, BUT, none of that would have justified the increase in costs and wouldn't have done much for market volume.

In other words, a waste of money. Save the money and sell what you already have while you work on something MUCH better.
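The "10% shrink, net increase in per-chip cost" point can be sketched numerically. Every input below is a made-up illustrative figure (real wafer prices and AMD die sizes are not public), but the arithmetic shows why a modest shrink on a pricier node can still raise cost per die:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross dies per wafer (ignores edge loss and defect yield)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

# Hypothetical inputs: NOT real TSMC prices or AMD die sizes.
n4_wafer_cost, n3_wafer_cost = 17_000, 22_000   # assumed $/wafer
n4_die_area = 225.0                             # assumed mm^2
n3_die_area = n4_die_area * 0.90                # the ~10% shrink from the post

n4_cost_per_die = n4_wafer_cost / dies_per_wafer(n4_die_area)
n3_cost_per_die = n3_wafer_cost / dies_per_wafer(n3_die_area)

print(f"N4 cost/die:  ${n4_cost_per_die:.2f}")
print(f"N3P cost/die: ${n3_cost_per_die:.2f}")
print(f"change: {100 * (n3_cost_per_die / n4_cost_per_die - 1):+.1f}%")
```

With these placeholder numbers, the shrink yields about 11% more dies per wafer, but the ~29% higher wafer price swamps that, so cost per die still goes up double digits. That's the "net increase in per-chip cost" in the post above.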
 
Exactly why I don't think it's worth it. Who will even notice a 3% perf improvement, unless you're gunning for benchmark bragging rights?

With Zen 3, AMD introduced lower-binned, cheaper V-Cache models below the 5800X3D.

It seems that AMD is aiming to add more bins down the road, with the 9850X3D as the flagship and lower-binned, lower-core-count SKUs with V-Cache below it.

BTW, the king of efficiency is still the 7800X3D (which I have in my home system). It will not surprise me if we see 65 W V-Cache SKUs.

In non-V-Cache models, AMD just dropped the "X" suffix to get the 65 W parts, which would not work so well with "X3D". So maybe just different SKU numbers.
 
you can get Halo 392 full laptop 'ASUS TUF' for around 1500 bucks now
I don’t want an ASUS TUF. What now?
A straight port of Strix Point to N3P would have netted next to nothing of importance. [...] In other words, a waste of money. Save the money and sell what you already have while you work on something MUCH better.
why are they porting RDNA3.5 to N3 then?
 