Discussion: RDNA 5 / UDNA (CDNA Next) speculation

A little OT, but what is stopping them from renting out compute power, similarly to AWS?
With Palantir, and who knows what else, income is guaranteed even if LLMs for the general public stagnate.
Line go up ...
This is a valid Q

What is happening is this:
  1. Hyperscalers (& Meta) are in an arms race
  2. Google, Microsoft, Amazon, & Meta will mostly deploy their own custom silicon (like they already did with ARM CPUs)
  3. This leaves OpenAI as the sole remaining "hyperscaler" whose orders Nvidia, Broadcom, and AMD (plus MS & Google) are competing for
  4. The stakes are very high for Nvidia & AMD, as they need OpenAI to tailor its software to CUDA / ROCm (or it could create a new standard, like DeepSeek is doing with Chinese hardware companies)
 
I dunno, AMD's stock is up 25% premarket. It's like they do this because they know AMD stock will skyrocket: OpenAI buys a billion dollars' worth of AMD GPUs and gets $25 billion of stock essentially for free.

I think it's more like tens of billions of dollars' worth of AMD products (CPUs, GPUs, networking), up to $100 billion.
 
They will know what MI500 will look like.

Good deal for AMD; that should put billions into improving the stupidly named ROCm framework. Perhaps CUDA's moat will finally be bridged...

It is interesting that AMD is attacking the CUDA moat with AI.

In a recent appearance, Lisa Su said AMD is using AI to translate code that was written for CUDA so that it works with the AMD software stack.
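For context, AMD already ships mechanical CUDA-to-HIP translation tools (hipify-perl / hipify-clang), which get far because the HIP runtime API mirrors CUDA's almost name for name; an AI-assisted pass would presumably target whatever the mechanical tools can't handle. A toy sketch of the mechanical part only (my own illustration, not AMD's tooling):

```python
# Toy illustration of the mechanical CUDA -> HIP renaming that hipify-style
# tools automate (my own sketch, not AMD's implementation).
import re

CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def naive_hipify(cuda_source: str) -> str:
    """Rename CUDA runtime identifiers to their HIP equivalents (longest match first)."""
    keys = sorted(CUDA_TO_HIP, key=len, reverse=True)
    pattern = re.compile("|".join(re.escape(k) for k in keys))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], cuda_source)

src = "#include <cuda_runtime.h>\ncudaMalloc(&d_x, n * sizeof(float));\ncudaDeviceSynchronize();"
print(naive_hipify(src))
```

The leftovers that pure renaming can't fix (warp-size assumptions of 32 vs 64 lanes, inline PTX, cuBLAS/cuDNN calls, performance tuning) are presumably where the AI-assisted translation Lisa Su mentioned comes in.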
 
It's the only one.

I think another moat was the perception (true in the past) that Nvidia was the only game in town.

From that POV, the OpenAI deal with AMD is a game changer, elevating AMD from being a bottom feeder with low single-digit market penetration in datacenter GPUs to well into double digits.

Maybe not unlike client GPUs, where AMD is underrepresented but increasing market share is very feasible.

OpenAI validated AMD's MI450, and now others can follow.
 
They validated much more than the MI450. They validated the things that come after the MI450.
 
Yeah but AMD still has to do a lot of work to make it seamless.
I'm sure that AMD will be happy to provide engineers for such an effort. I'm not entirely sure what Triton compiles down to for Nvidia hardware; it could be CUDA, it could be PTX. I'm also not sure what AMD provides in terms of language intrinsics that map similarly to PTX.
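For what it's worth, my understanding is that Triton lowers through LLVM, emitting PTX on Nvidia and AMDGCN ISA via the ROCm LLVM backend on AMD, so there isn't a separate PTX-like layer you write by hand. To make the "Python-based GPU language" point concrete, here is the standard vector-add style kernel (a sketch, names are mine):

```python
# Minimal Triton kernel: elementwise vector add. The same source should work
# on Nvidia or AMD GPUs as long as the corresponding Triton backend is installed.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard against the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                    # one program instance per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# usage (on a GPU build of PyTorch + Triton):
# x = torch.randn(4096, device="cuda"); y = torch.randn(4096, device="cuda")
# print(add(x, y))
```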
 
They don't use PyTorch?? (I believe PyTorch is optimized for CUDA.)
I'm sure they use PyTorch. Just not sure if it's used in production. OpenAI uses a lot of Python in their stack, and Triton is a Python-based language.

While it doesn't talk about their operational software stack in detail, I found this OpenAI account interesting:


edit: from my understanding, PyTorch has multiple backends as first-class citizens nowadays, not just CUDA.
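That matches what I see too. Roughly what backend-agnostic PyTorch code looks like (a sketch; note the quirk that ROCm builds of PyTorch reuse the "cuda" device string, with torch.version.hip set to tell them apart):

```python
import torch

# Pick whatever accelerator the installed PyTorch build supports.
# ROCm builds of PyTorch expose AMD GPUs through the same "cuda" device
# string; torch.version.hip distinguishes them from genuine CUDA builds.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    backend, device = "Apple MPS", torch.device("mps")
else:
    backend, device = "CPU", torch.device("cpu")

print(f"Running on {backend}")
x = torch.randn(1024, 1024, device=device)
y = x @ x.T          # same model/op code regardless of backend
print(y.shape)
```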
 