
Discussion RDNA 5 / UDNA (CDNA Next) speculation

How much do you expect AMD to pay for each HBM4 stack?
For reference, NVIDIA pays about ₩1,000,000 (approximately $700).

lower clock ↓ + custom base die ↑ ($30?) + sole vendor ↓ = $730 − (🙏 $200?) ≈ ₩800,000 × 12 😭
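The back-of-envelope math above can be made explicit. A minimal sketch using the thread's own speculative numbers (NVIDIA paying ~$700 per HBM4 stack, a ~$30 custom base die adder, a hoped-for ~$200 discount for lower clocks and single-sourcing, 12 stacks per package); none of these figures are confirmed pricing:

```python
# Speculative HBM4 cost model using the thread's numbers (not confirmed pricing).
KRW_PER_USD = 1430              # rough exchange-rate assumption

nvidia_per_stack_usd = 700      # thread: ~W1,000,000 per stack for NVIDIA
base_die_adder_usd = 30         # custom base die premium (guess from the post)
hoped_discount_usd = 200        # hoped-for lower-clock / sole-vendor discount

amd_per_stack_usd = nvidia_per_stack_usd + base_die_adder_usd - hoped_discount_usd
stacks = 12                     # stacks per package assumed in the post
total_usd = amd_per_stack_usd * stacks

print(amd_per_stack_usd)                 # 530  (~W758,000, near the post's W800,000)
print(total_usd)                         # 6360
print(amd_per_stack_usd * KRW_PER_USD)   # 757900
```

Even with the optimistic discount, the HBM bill alone lands north of $6,000 per package, which is the crux of the 😭.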
 

This AI Startup Demands AMD Build a 96 GB RDNA 5 GPU for a Wild Venture, and Is Already Seeking Investors

Muhammad Zuhair
Mar 8, 2026 at 03:53pm EDT


TinyCorp has a rather interesting demand for AMD, involving a 'chonky' RDNA 5 GPU.

TinyCorp Hopes To Get an RDNA 5 GPU With 96 GB VRAM For $2,500 Each; Probably Ignoring the Memory Shortages​


(I asked Grok and it said an AT0 with 96 GB to 128 GB of RAM would cost $4,000 to $8,000)

Interestingly, TinyCorp claims it will build its own board with RDNA 5 silicon onboard, provided AMD doesn't release a similar model.

 
Give me a $6-$8k 96GB GDDR7 RDNA 5/UDNA card with 4-way interconnect capability and I will buy those over any RTX Pro 6000. That would be a real game changer, and both economically and technically feasible.
 
The RAM alone would be like $1000 or more currently, lol.
I think @coercitiv was making a joke about $2500 being the cost of the 96GB VRAM alone.
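For a rough sense of the memory bill alone: a sketch assuming 3 GB (24 Gbit) GDDR7 modules and a speculative per-GB price range; the $/GB figures are placeholder assumptions for illustration, not quoted market prices:

```python
# Rough GDDR7 bill-of-materials sketch; $/GB values are placeholder
# assumptions, not quoted market prices.
vram_gb = 96
module_gb = 3                     # assuming 24 Gbit (3 GB) GDDR7 modules
modules = vram_gb // module_gb    # modules needed on the board

for usd_per_gb in (8, 12, 15):    # speculative price range
    print(usd_per_gb, vram_gb * usd_per_gb)

print(modules)  # 32
```

At anything around $12/GB the memory alone already clears $1,000, consistent with the comment above; $2500 for the whole card would leave very little for the GPU die, board, and margin.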


There's no way they're selling these at 1/4 the price NV charges for a 96GB GPU, even if they can make them that cheaply.

They will be in the $5-7k range.
Pricing Pro cards for ML loads using the same "Nvidia −10%" method they use for consumer GPUs would result in no one at all buying their cards. The consequences of an inadequate GPU in a professional setting are completely different from a guy not being able to use the best upscaler in the latest video game.



Give me a $6-$8k 96GB GDDR7 RDNA 5/UDNA card with 4-way interconnect capability and I will buy those over any RTX Pro 6000. That would be a real game changer, and both economically and technically feasible.
As someone who's procured Pro GPUs for ML loads throughout the past 5 years, this is unfortunately way too risky.

Unless you know exactly what loads you're going to be running and you're absolutely sure AMD GPUs can run them (not our case), odds are you'll run into a bunch of loads AMD GPUs just can't run properly, if at all. This means tens or hundreds of thousands of euros/dollars sitting idle, or worse, spending that much or more in precious engineering resources to make things work.

ROCm may work fine for a select number of things, but every brand-new state-of-the-art ML app or implementation ships out of the gate with full Nvidia support, and AMD hardware may or may not work some months or years afterwards.


For example, these tiny corp guys are saying they'll use AT0 only to run LLM inference of the latest DeepSeek and Qwen models via llama.cpp, and then sell cloud services using that hardware. That's fine, but if a big new thing comes up that is built from the ground up on CUDA / cuDNN (which is very likely), that hardware is stuck on the sidelines.
 
You're right that Nvidia has better software support and every new ML implementation comes with full Nvidia support... and that's exactly why I said AMD needs to create workstation competitors to Nvidia, like an RTX Pro 6000 competitor. Nvidia is being used by most professionals for their prototyping systems, and it's a really decent deal. But the market is waiting for such products from AMD to support, and they haven't had a proper product line. The CUDA moat is on shaky ground. At my workplace we use mainly Nvidia GPUs for prototyping and for self-hosting part of our services (I would say 90%). We tested the AMD R9700 and it's actually really good, especially with Vulkan (ROCm has good potential, but again, AMD just needs to give us the kind of cards we need and we'll sort out the software). It just doesn't meet our min-spec requirements by some small margins, because AMD didn't make any halo cards this time; an AMD "R9800" would have been a huge game changer if it existed.

With AI-driven development even being used for kernel-level optimizations, and the growing move towards optimizing GPUs and other ASICs for ML workloads, AMD could seriously increase its market share in core workstation and entry-level datacenter cards, which is key to getting the same level of developer support that Nvidia gets with CUDA. The issue now is what kind of product lineup AMD makes with RDNA 5/UDNA. They are doing a good job with rack-scale systems like Helios and working with Meta, but it's when they gain more market share in the workstation area that they will seriously dent the CUDA moat. Make that RTX Pro 6000 competitor, even if it's just 30% more powerful, give me 96 GB of VRAM with 4-way interconnect, and I will buy at least 8 of them over Nvidia. Unless Nvidia puts NVLink back in its workstation cards, AMD will have a serious opportunity.
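On whether 8 such cards would even cover the inference targets mentioned earlier in the thread: a quick capacity sketch, assuming a DeepSeek-V3-class model (~671B parameters) served with FP8 weights (1 byte per parameter); the sizes are rough assumptions, ignoring activation memory and per-card overheads:

```python
# Rough pooled-VRAM capacity check for a hypothetical 8x 96 GB setup.
cards = 8
vram_per_card_gb = 96
total_vram_gb = cards * vram_per_card_gb    # pooled across the interconnect

params_billion = 671                        # DeepSeek-V3-class model size
bytes_per_param = 1                         # FP8 weights assumption
weights_gb = params_billion * bytes_per_param

kv_headroom_gb = total_vram_gb - weights_gb # left over for KV cache etc.
print(total_vram_gb, weights_gb, kv_headroom_gb)  # 768 671 97
```

Under those assumptions the pooled 768 GB fits the weights with roughly 97 GB of headroom, which is why the interconnect capability matters as much as the per-card VRAM.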
 
What about the TDP of the MI455X?
Is the image far from reality? It looks lower than 2000 W.
 

Attachments

  • 1000002598.jpg (69.6 KB)