
[Question] x86 and ARM architectures comparison thread

M2 and M3 are pretty old now, why is there only a single data point for each?
It would have broken the scale of the chart and made it harder to read the stuff they wanted the reader to focus on. M2 and M3 are there to note where the next goals will be after they catch M1.

Data scientist here. It's a good chart.
 
It's not strictly speaking accurate though, and that is where the majority of these comparisons break down.

Dial in SIMD-intensive workloads and the argument for Apple Mx dominance starts to wither.

As great as they are in scalar workloads, their vector processing capacity is still pretty weak compared to even the lower-end Zen5c cores.

This goes for basically all the ARM cores on the market at the moment.
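The "vector capacity" gap described above comes down to per-core SIMD width × pipe count × clock. A back-of-the-envelope sketch, where every pipe count, width, and clock is an assumed illustration rather than a verified spec for any shipping core:

```python
# Back-of-the-envelope peak FP32 vector throughput per core.
# All pipe counts, widths, and clocks below are assumptions for
# illustration, not verified specs for any real CPU.

def peak_fp32_gflops(simd_bits, fma_pipes, ghz):
    """lanes * pipes * 2 ops (FMA counts as mul+add) * clock."""
    lanes = simd_bits // 32          # FP32 lanes per vector register
    return lanes * fma_pipes * 2 * ghz

# Assumed configs: an Apple-style core with 4x 128-bit NEON pipes,
# and a Zen 5-style core with 2x 512-bit FMA pipes.
apple_like = peak_fp32_gflops(simd_bits=128, fma_pipes=4, ghz=4.0)   # 128.0
zen5_like  = peak_fp32_gflops(simd_bits=512, fma_pipes=2, ghz=5.0)   # 320.0
print(apple_like, zen5_like)
```

Even with a clock advantage on neither side, wider vectors multiply straight into peak throughput, which is why SIMD-heavy workloads flip the picture.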
 
So why does the M4 review do the same thing? There's a line for the Qualcomm chip but no Apple chips. Are they trying to draw attention toward the 8 Gen3?

[attachment: chart from the M4 review]

edit: and the iPhone 17 review video. All the Apple chips have a dot and all the Android chips have a line.

[attachment: chart from the iPhone 17 review video]
 
I'm guessing he never wrote the script to get the data points to draw the curve.

I mean, if your focus is on x86/Android, then the x86/Android curves are useful and Apple Silicon is this 'oh yeah, there's also this thing which suggests what's possible'.
 
As great as they are in scalar workloads, their vector processing capacity is still pretty weak compared to even the lower end Zen5c cores.
Could it be because they have no proper SVE2 support?

Thing is, that’s an easy fix if Apple ever wanted to take vector workloads seriously. Their user base is mostly MacBook users, and I bet fewer than 1% need SVE2 support.
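For context on the SVE2 question, on Linux/ARM the kernel exposes supported vector extensions as feature flags. A minimal sketch of checking for them; the /proc/cpuinfo format is Linux-specific and the sample string here is mocked, not real output:

```python
# Minimal sketch: check for SVE/SVE2 support on a Linux/ARM box by
# parsing the 'Features' line of /proc/cpuinfo. Feature names are the
# kernel's hwcap strings ('sve', 'sve2'); on a real machine you would
# read the file instead of using a mocked string.

def has_feature(cpuinfo_text, feature):
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            return feature in line.split(":", 1)[1].split()
    return False

# Mocked cpuinfo snippet (real output varies by kernel and CPU):
sample = "Features\t: fp asimd evtstrm aes sha1 sha2 crc32 atomics sve"
print(has_feature(sample, "sve"))   # True
print(has_feature(sample, "sve2"))  # False: SVE but no SVE2
```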

As for why Neoverse doesn’t have parity with AMD: ARM is led by idiots when it comes to server.
 
These charts are quite damning.
[attachments: two performance-per-watt charts]
Source
Will next gen x86 cores be able to finally catch up to M1 in ST performance per watt?
Another flawed benchmark and result. Apple’s software power readings should be doubled to get real power draw, so those 7W and 9W for the M1 and M5 should be 14W and 18W. Still good, but not that impressive when using real power numbers. Also, are we forgetting TSMC 3nm vs 4nm?
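The 2x correction claimed above halves any performance-per-watt figure built on the software readout. A quick illustration; the score value is a made-up placeholder, and only the watt figures come from the post:

```python
# If the software power readout understates real draw by ~2x (the claim
# above), perf-per-watt computed from it is overstated by the same 2x.
# The score is a hypothetical placeholder; 7W/14W are the post's M1
# reported vs corrected figures.

def perf_per_watt(score, watts):
    return score / watts

score = 10.0                          # hypothetical benchmark score
reported_w, corrected_w = 7.0, 14.0   # reported vs 2x-corrected power
print(perf_per_watt(score, reported_w))   # overstated efficiency
print(perf_per_watt(score, corrected_w))  # exactly half of the above
```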

And many of those ARM vs x86 benchmarks are really apple (fruit) vs orange (fruit) comparisons, where ARM (Apple) has huge advantages over x86. Like:
  1. Optimized macOS vs Windows (currently a mess)
  2. 16k paging on ARM (Apple) vs 4k paging (x86, other ARM CPUs). According to Google, 16k paging is 5-10% faster depending on workload (https://android-developers.googleblog.com/2025/07/transition-to-16-kb-page-sizes-android-apps-games-android-studio.html)
  3. Better node for ARM (Apple, Qualcomm), which translates to lower power draw
  4. Not running the same binary
  5. Also, let's not forget things like: if SoC == benchmark software's favourite brand, then Score = Score + (5-10)%
When you subtract those advantages for ARM (Apple), x86 is not that far behind.

So in conclusion, most ARM (Apple) vs x86 comparisons are flawed: best-case scenario for ARM (Apple), worst-case scenario for x86.
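One concrete mechanism behind the 16k-vs-4k paging point above is TLB reach: with the same number of TLB entries, 16 KB pages cover 4x the address space. A sketch; the entry count is an assumed illustration, not a real core's spec:

```python
# TLB reach = number of entries * page size. Same TLB, bigger pages
# means more memory covered without a page-table walk, which is one
# mechanism behind the 5-10% gains Google cites for 16 KB pages.
# The entry count below is an assumption for illustration.

def tlb_reach_mib(entries, page_kib):
    return entries * page_kib / 1024

entries = 2048  # assumed unified L2 TLB entry count
print(tlb_reach_mib(entries, 4))   # 8.0 MiB of reach with 4 KB pages
print(tlb_reach_mib(entries, 16))  # 32.0 MiB of reach with 16 KB pages
```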
 
[attachment: chart from the iPhone 17 review video]
If nothing else, that shows the Oryon 2 M being well handled by the A725.

Even the A19-E core isn't that much of an improvement over A725 in INT workloads, and A18-E is beaten by the A725 in both INT and FP!! 😮

Quite pleasantly surprised by that info!
 
Are you seriously implying that the SPEC consortium, in 2017, was plotting to give ARM Macs some kind of artificial boost?
Why not? For ideology or money, anyone can do anything, and the SPEC consortium is not free of either.

There is an ideological push that x86 is the closed, old, inefficient, devil-incarnate arch, while ARM (Apple) is the open (??), new, efficient, angel-incarnate arch.

Geekbench has no problem supporting an instruction that only one ARM vendor (Apple) supports, but has a problem with Intel's binary optimization tool. What do you call that?
 

wow.
 