
Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Wildcat Lake (WCL) Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing Raptor Lake-U. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU, and NPU, fabbed on the Intel 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than Intel's D2D interface, a first for Intel. Expect a launch in Q1 2026.

|                 | Intel Raptor Lake-U    | Intel Wildcat Lake 15W   | Intel Lunar Lake         | Intel Panther Lake 4+0+4     |
|-----------------|------------------------|--------------------------|--------------------------|------------------------------|
| Launch Date     | Q1-2024                | Q2-2026                  | Q3-2024                  | Q1-2026                      |
| Model           | Intel 150U             | Intel Core 7 360         | Core Ultra 7 268V        | Core Ultra 7 365             |
| Dies            | 2                      | 2                        | 2                        | 3                            |
| Node            | Intel 7 + ?            | Intel 18A + TSMC N6      | TSMC N3B + N6            | Intel 18A + Intel 3 + TSMC N6 |
| CPU             | 2 P-cores + 8 E-cores  | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores     |
| Threads         | 12                     | 6                        | 8                        | 8                            |
| Max CPU Clock   | 5.4 GHz                | 4.8 GHz                  | 5 GHz                    | 4.8 GHz                      |
| L3 Cache        | 12 MB                  | 6 MB                     | 12 MB                    | 12 MB                        |
| TDP             | 15 - 55 W              | 15 - 35 W                | 17 - 37 W                | 25 - 55 W                    |
| Memory          | 128-bit LPDDR5-5200    | 64-bit LPDDR5x-7467      | 128-bit LPDDR5x-8533     | 128-bit LPDDR5x-7467         |
| Max Memory Size | 96 GB                  | 48 GB                    | 32 GB                    | 128 GB                       |
| Bandwidth       | 83 GB/s                | 60 GB/s                  | 136 GB/s                 | 120 GB/s                     |
| GPU             | Intel Graphics         | Intel Graphics           | Arc 140V                 | Intel Graphics               |
| Ray Tracing     | No                     | No                       | Yes                      | Yes                          |
| EU / Xe         | 96 EU                  | 2 Xe                     | 8 Xe                     | 4 Xe                         |
| Max GPU Clock   | 1.3 GHz                | 2.6 GHz                  | 2 GHz                    | 2.5 GHz                      |
| NPU             | GNA 3.0                | 17 TOPS                  | 48 TOPS                  | 49 TOPS                      |
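As a sanity check, the bandwidth row follows directly from bus width × transfer rate. A quick sketch (bus widths and data rates taken from the table above; the table's 60 and 120 GB/s entries are simply rounded):

```python
def lpddr_bandwidth_gbs(bus_bits: int, mts: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x mega-transfers/s."""
    return bus_bits / 8 * mts / 1000

print(lpddr_bandwidth_gbs(128, 5200))  # Raptor Lake-U:  83.2
print(lpddr_bandwidth_gbs(64, 7467))   # Wildcat Lake:   59.736 (~60)
print(lpddr_bandwidth_gbs(128, 8533))  # Lunar Lake:     136.528 (~136)
print(lpddr_bandwidth_gbs(128, 7467))  # Panther Lake:   119.472 (~120)
```

WCL's halved 64-bit bus is the standout here: it has less bandwidth than the two-year-older Raptor Lake-U part it replaces.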









With Hot Chips 34 starting this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the next-generation platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile built on the Intel 4 process, Intel's first to use EUV lithography. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



Why settle for less when there is a great affordable CPU available that will be good enough for the next 5-6 years? And an upgrade will also be available thanks to AM5 platform longevity.
After 5-6 years I'd want a new platform anyway, because things will have changed: I might get better I/O and memory speed, and even on the same socket one or two things might bottleneck a new CPU. Five years is a decent interval to upgrade.

If I buy a 270K I won't need to upgrade for a decent while; I can't say that about the 9700X, relatively speaking.
 
Why settle for less when there is a great affordable CPU available that will be good enough for the next 5-6 years?
Completely agree with that sentiment... just completely disagree that AMD X3D is the answer. ARL will be just as 'good enough for the next 5-6 years' as an X3D; it'll just cost less up front.

I know full well that the sales are overwhelmingly in favor of the X3D parts. They're selling for the same reason sales used to be overwhelmingly in favor of Intel. Even the majority of 'informed' consumers simply look at CPU benchmarks run with a top-of-the-line GPU and assume those results scale down linearly with lower-end GPUs. But in reality a 265K will match an X3D in almost all games when both are bottlenecked by a 5060 Ti. (The exception being games of the Factorio variety, which see an outsized X3D impact.) I wouldn't be surprised if the majority of X3D sales are going into systems that'd be better off with less money spent on an ARL CPU and more money spent on the GPU.
 
If i buy a 270K i won't have the need to upgrade for a decent while can't say that about 9700X relatively
In most cases, but aren't we rather confident that NVL is going to be hugely faster in games? Like 40%? With a much better L3. It seems like a poor time to lock in a dead-end platform.

Though I did end up buying 265K anyway last year. Normally I "recycle" my previous desktop chip into my home server but it was actually cheaper to sell my 7950X after upgrading to a 9950X3D and buy a 265K for similar performance. I needed another board in either case since I reused my X670E.
 
There is such a wide range of opinions on how much CPU is required for gaming depending on the situation. Do you game now and then? Do you game a lot? Are you competitive? Is your GPU a 5060 or a 5090? Do you game at 1080p low quality settings for max frame rate on a 5090? Then yeah the CPU is the bottleneck. 1440p maxed out settings on a 5060? Then your GPU is most likely the bottleneck.

The CPU, GPU, game, game settings, and gamer are all factors in determining how much CPU.
 
The CPU, GPU, game, game settings, and gamer are all factors in determining how much CPU.
The lack of optimization in most AAA games is the real factor forcing gamers to think hard about the CPU. Heck, consoles are still giving a mostly decent experience for gamers who do not game on PC. It's only the PC gamers who play a game on a console and then complain about performance dips or image quality.

I've played on PS1 all the way through PS5. PS1 had some ugly games, but for the time they looked good, and the controller rumble feature kept things interesting. PS2 had REALLY ugly, blocky graphics for the time (I played MGS3). PS3 had far fewer ugly games. PS4, none I can remember being ugly. PS5, I only played Gran Turismo on it, and that looked almost spectacular (almost, because I expected more). In all these experiences (Alan Wake on Xbox 360 too), I never had a moment where a game suffered a showstopping freeze or hiccup. Smooth as butter. So when PC gamers complain, it's either a lack of game optimization, mismatched hardware (slower GPU/faster CPU or vice versa), or them being game quality inspectors instead of actually playing and enjoying the game.
 
In most cases but aren't we rather confident that NVL is going to be hugely faster in games? Like 40%? With a much better L3. It seems like a poor time to lock in a dead end platform.
bLLC might get close to that, but the standard version is not getting there, even if Intel fixes their cache.
Though I did end up buying 265K anyway last year. Normally I "recycle" my previous desktop chip into my home server but it was actually cheaper to sell my 7950X after upgrading to a 9950X3D and buy a 265K for similar performance. I needed another board in either case since I reused my X670E.
There is such a wide range of opinions on how much CPU is required for gaming depending on the situation. Do you game now and then? Do you game a lot? Are you competitive? Is your GPU a 5060 or a 5090? Do you game at 1080p low quality settings for max frame rate on a 5090? Then yeah the CPU is the bottleneck. 1440p maxed out settings on a 5060? Then your GPU is most likely the bottleneck.

The CPU, GPU, game, game settings, and gamer are all factors in determining how much CPU.
My criterion was not only gaming performance but productivity as well. Are we only judging gaming performance? What about code compilation, something I do? I won't mind the additional cores.
There is the case of AVX-512 as well; for those who need it, the 9700X makes sense.

Gaming performance shouldn't be your only factor for buying a CPU unless you make money off of it, imo.
 
bLLC might get close to that, but the standard version is not getting there, even if Intel fixes their cache.


My criterion was not only gaming performance but productivity as well. Are we only judging gaming performance? What about code compilation, something I do? I won't mind the additional cores.
There is the case of AVX-512 as well; for those who need it, the 9700X makes sense.

Gaming performance shouldn't be your only factor for buying a CPU unless you make money off of it, imo.
I agree. My point was that there is a wide range of valid opinions when it comes to CPUs and gaming.

I bought my 5070 more for CUDA, for apps like PureRaw and UVR, than for gaming.
 
But is that a TECHNICAL requirement, or a marketing one? We see that it is technically possible to have partially enabled E-core clusters in the BIOS and in production Alder Lake-N chips.
I haven't seen an Intel hybrid CPU with only half the cluster enabled, so I guess it's a technical one.
 

Unless a CPU gaming test suite includes CPU-intensive titles like PoE / D4, iRacing / ACC, Stellaris / CK3, Civ 7 turn time, and a bunch of other genuinely CPU-dependent games, this kind of data is misrepresentative beyond being a guide for AAA games, where we already know the GPU is limiting a lot of the time.
 
But in reality a 265K will match an X3D on almost all games when both are bottlenecked by a 5060 Ti.
The same applies to R5 7600X though. If you take a look at Amazon's best selling CPU list you will notice some things:
  • Dominated by AMD
  • Distributed across a wide price spectrum, from under $100 to over $600
  • X3D parts are only a fraction of the sales
  • Even AM4 parts are selling due to the RAM crisis (while Alder Lake has gotten more expensive because Intel is supply-bound on Intel 7)
You argue ARL would be just as good, but do you remember how poorly the ARL launch went? It had major performance consistency issues and stability/compatibility issues on top. Performance consistency was so bad that the 12700K I was looking to upgrade from was faster in browser benchmarks than the 285K.

Every few weeks we got firmware or OS performance oriented updates for ARL that were supposed to fix the consistency issues. They usually helped a little but they were rushed and served piece by piece, so they also served as a periodic reminder that ARL was in trouble. The Windows update was a particularly painful moment for Intel, because it improved performance on AMD CPUs by just as much, if not more.

Nobody could justify ARL; it was either a gaming flop or a big question mark for anyone looking for a productivity uplift. And all of this came after the Raptor Lake fiasco, which had eroded Intel's brand to a new low among the DIY crowd, especially the people who bought its premium SKUs.
 
But in reality a 265K will match an X3D on almost all games when both are bottlenecked by a 5060 Ti.

Yeah, if you create a GPU bottleneck then sure, multiple CPUs can perform the same. You could have said the exact same thing about Bulldozer vs. Sandy Bridge CPUs if you GPU-bottlenecked them both.

Plenty of games are CPU-bound though: ACC, iRacing, MSFS, WoW, PoE, PoE2, D4, Stellaris, Civ 7 turn time, Anno, BG3, LoL, Dota 2, and plenty more titles on top. Most review sites base their comparison on older AAA titles that are far less popular than the titles I have listed, which is just an incomplete picture.
 
The same applies to R5 7600X though.
No disagreement there. The R5 7600X will give comparable gaming performance to a 245K for about $25 less.
You argue ARL would be just as good, but do you remember how poorly the ARL launch went?
That is a fair argument - the majority of available reviews will show consumers the launch performance state. That definitely is another mark as to why a typical 'informed' consumer will be led more toward AMD than is justified by the current state of performance.
Plenty of games are CPU-bound though: ACC, iRacing, MSFS, WoW, PoE, PoE2, D4, Stellaris, Civ 7 turn time, Anno, BG3, LoL, Dota 2, and plenty more titles on top. Most review sites base their comparison on older AAA titles that are far less popular than the titles I have listed, which is just an incomplete picture.
Taking the most recent Anno 117 as an example from that list, it is indeed one of those titles that sees an outsized impact from X3D. According to this review - https://www.pcgameshardware.de/Anno...Test-Benchmarks-Demo-Anforderungen-1484293/3/ - with an RTX 5090 we go from 92.6 average FPS with a 7700X to 155.2 with a 7800X3D - a 1.676x advantage to X3D. ARL is around the 100 average FPS mark. But the problem is again the use of an RTX 5090 in the CPU benchmark. Switching to their GPU benchmark page, the different scene/settings show 120.7 average FPS for an RTX 5090 and 69.9 for an RTX 5070 - a 1.726x advantage for the RTX 5090. So even a 'CPU bound' game like Anno 117, which shows a huge advantage for X3D, is going to be primarily GPU bound with an RTX 5070 or less.
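The argument above can be sketched with a crude bottleneck model in which the frame rate is capped by the slower of the CPU and GPU stages. A toy illustration (numbers from the PCGH review cited above; note their GPU test used a different scene/settings, so this is only directional, and real frame times blend the two limits rather than taking a hard minimum):

```python
def effective_fps(cpu_ceiling: float, gpu_ceiling: float) -> float:
    """Crude bottleneck model: the frame rate is capped by the slower stage."""
    return min(cpu_ceiling, gpu_ceiling)

# CPU-limited ceilings measured with an RTX 5090 (PCGH Anno 117 CPU test)
cpu_ceilings = {"7700X": 92.6, "7800X3D": 155.2, "ARL": 100.0}
gpu_5070 = 69.9  # RTX 5070 ceiling from PCGH's GPU test (different scene)

for name, cpu in cpu_ceilings.items():
    # With a 5070, every one of these CPUs lands on the same GPU ceiling
    print(name, effective_fps(cpu, gpu_5070))
```

Under this model the 1.676x X3D advantage vanishes entirely once the GPU ceiling drops below the slowest CPU's ceiling.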
 
Taking the most recent Anno 117 as an example from that list, it is indeed one of those titles that sees an outsized impact from X3D. According to this review - https://www.pcgameshardware.de/Anno...Test-Benchmarks-Demo-Anforderungen-1484293/3/ - with an RTX 5090 we go from 92.6 average FPS with a 7700X to 155.2 with a 7800X3D - a 1.676x advantage to X3D. ARL is around the 100 average FPS mark. But the problem is again the use of an RTX 5090 in the CPU benchmark. Switching to their GPU benchmark page, the different scene/settings show 120.7 average FPS for an RTX 5090 and 69.9 for an RTX 5070 - a 1.726x advantage for the RTX 5090. So even a 'CPU bound' game like Anno 117, which shows a huge advantage for X3D, is going to be primarily GPU bound with an RTX 5070 or less.

They state they have not assessed the end game with large cities which is where CPU will matter most.

If your CPU cannot manage 60 FPS at end game, no amount of tuning the settings will get you there, whereas on a 5070 you can lower settings to get to a comfy frame rate as long as the CPU can keep up.

That is the other major point. It is far easier to work around GPU induced low frame rates by tuning settings than it is to work around CPU limitations.
 
That is a fair argument - the majority of available reviews will show consumers the launch performance state. That definitely is another mark as to why a typical 'informed' consumer will be led more toward AMD than is justified by the current state of performance.
I hope we get in-depth reviews for the ARL Refresh and not just superficial gaming benchmarks. I'd like to see if the higher die-to-die clocks help performance in various workloads.

So far it's hard to isolate the benefits since the gaming benchmarks also include the difference in L3 cache (+6MB in each case) and higher memory speed (7200 vs. 6400) while the MT benchmarks they showed care little about anything besides throughput.

Another interesting aspect will be benching the 250K vs. 265K.
 
Hey, I own Intel stock! (And I'm up bigly)
Very nice
But I don't use Twitter. There's something on Twitter that encourages echoes.
True. I also think there are way more private investors in Intel online than there are for AMD, or the ones that invest in AMD mainly stay on r/AMD_stock rather than Twitter, since the Intel stock subreddit isn't nearly as active.
The problem though is that those Intel stock owners pretty much actively shill for Intel. Which, I mean, is kind of justified since they own stock lol, but still annoying nonetheless.
I'm still on Twitter just for the tech leaks, mostly lol.
 
Whenever I see links like these from Twitter on the forum I feel like those movie characters that are going about their lives when 2 randos burst through one wall of their apartment, crush some furniture while beating each other, then promptly exit through the balcony after breaking as many windows as possible.
 
They state they have not assessed the end game with large cities which is where CPU will matter most.

If your CPU cannot manage 60 FPS at end game, no amount of tuning the settings will get you there, whereas on a 5070 you can lower settings to get to a comfy frame rate as long as the CPU can keep up.

That is the other major point. It is far easier to work around GPU induced low frame rates by tuning settings than it is to work around CPU limitations.
Definitely a fair point. And definitely a cause for disappointment in 'game benchmark' reviews where they just run the same test suite as they will on future GPU/CPU reviews rather than attempt to provide additional data points. Just doubling the runs to include GPUs on a $200 CPU and CPUs on a $400 GPU would provide far more insight into how the game is likely to behave.

It still doesn't provide the full picture of course, since as you rightly note certain games have 'high stress' scenarios which fall outside the normal benchmark regime. But it's very important to note that behavior in those scenarios cuts both ways. Case in point would be Factorio, where a default benchmark result shows an obscene lead for the 7800X3D of 421 updates per second compared to 245 for the 7700X.
But then you go to the 'high stress' map and suddenly the 7800X3D is only barely ahead at 45 updates per second compared to 40 for the 7700X. This behavior makes perfect sense, as the X3D cache can be large enough to have an overstated effect in 'low stress' scenarios where everything of importance fits in the cache. But as soon as you move up to the 'high stress' scenarios it only offers a minimal benefit, since all the 'most important' code may well fit in a 16MB cache, but the 'less important' data goes from 64MB to 256MB. (Note that these scenarios are still where ARL was quite bad initially, and still not great, due to becoming memory latency dependent.)
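That cache intuition can be made concrete with a toy average-latency model based on how much of the working set fits in L3. All numbers below are illustrative assumptions (hit/miss latencies and working-set sizes are made up; the 96 MB vs 32 MB caches roughly stand in for a 7800X3D vs a 7700X), not measurements:

```python
def avg_latency_ns(working_set_mb: float, cache_mb: float,
                   hit_ns: float = 10.0, miss_ns: float = 80.0) -> float:
    """Toy model: average access latency when a uniform working set
    partially fits in cache (hits stay in L3, misses go to DRAM)."""
    hit_rate = min(1.0, cache_mb / working_set_mb)
    return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns

# 'Low stress' map: ~40 MB working set fits the big cache but not the small one
print(avg_latency_ns(40, 96), avg_latency_ns(40, 32))    # 10.0 vs 24.0 -> big gap
# 'High stress' map: ~256 MB working set spills out of both caches
print(avg_latency_ns(256, 96), avg_latency_ns(256, 32))  # 53.75 vs 71.25 -> gap shrinks
```

Directionally this matches the Factorio numbers: a 2.4x latency gap in the low-stress case collapses to about 1.3x once both caches overflow.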

I'm not familiar with the current state as I stopped playing almost a decade ago, but WoW was definitely another interesting case for being CPU bound in certain scenarios. There the general case was GPU bound, but get into 25M mythic raids and it definitely became CPU limited... but with a note of GPU in that it was the GPU driver thread that was the primary limitation. (AMD GPU performance looked good in the general case, but was quite bad by comparison due to their driver being lower performance.) Reducing certain graphics settings would reduce the work done by the driver and hence increase performance, despite not being GPU bound.
 
Dell seems to have fixed their poor on-battery ST perf with PTL, though Lenovo's PTL chip still throttles ST perf heavily while on battery.
I wonder what impact this will have on battery life.
It's the absolute top in video playback battery life. Shows you how much better FHD IPS is than 1800p OLED.

At least the system isn't a total waste. But the price of the Dell is a deal-breaker.
 
Definitely a fair point. And definitely a cause for disappointment in 'game benchmark' reviews where they just run the same test suite as they will on future GPU/CPU reviews rather than attempt to provide additional data points. Just doubling the runs to include GPUs on a $200 CPU and CPUs on a $400 GPU would provide far more insight into how the game is likely to behave.

It still doesn't provide the full picture of course, since as you rightly note certain games have 'high stress' scenarios which fall outside the normal benchmark regime. But it's very important to note that behavior in those scenarios cuts both ways. Case in point would be Factorio, where a default benchmark result shows an obscene lead for the 7800X3D of 421 updates per second compared to 245 for the 7700X.
But then you go to the 'high stress' map and suddenly the 7800X3D is only barely ahead at 45 updates per second compared to 40 for the 7700X. This behavior makes perfect sense, as the X3D cache can be large enough to have an overstated effect in 'low stress' scenarios where everything of importance fits in the cache. But as soon as you move up to the 'high stress' scenarios it only offers a minimal benefit, since all the 'most important' code may well fit in a 16MB cache, but the 'less important' data goes from 64MB to 256MB. (Note that these scenarios are still where ARL was quite bad initially, and still not great, due to becoming memory latency dependent.)

I'm not familiar with the current state as I stopped playing almost a decade ago, but WoW was definitely another interesting case for being CPU bound in certain scenarios. There the general case was GPU bound, but get into 25M mythic raids and it definitely became CPU limited... but with a note of GPU in that it was the GPU driver thread that was the primary limitation. (AMD GPU performance looked good in the general case, but was quite bad by comparison due to their driver being lower performance.) Reducing certain graphics settings would reduce the work done by the driver and hence increase performance, despite not being GPU bound.

PCGH did a test of WoW Midnight. Very favourable results for the X3D chips, although the 14900KS was good as well, just not 7800X3D and 9800X3D good. The trusty 5800X3D was faster than the 12900K and nearly 50% faster than the 5800X.

Raids and cities are where you want higher FPS and those are the scenarios where the CPU is important.

Same with a lot of ARPGs where end game mapping is CPU limited.

Pointing to a suite of AAA games and then saying the CPU doesn't matter for gaming misses a huge chunk of the games people actually play and tells a very incomplete story.
 
PCGH did a test of WoW Midnight. Very favourable results for the X3D chips, although the 14900KS was good as well, just not 7800X3D and 9800X3D good. The trusty 5800X3D was faster than the 12900K and nearly 50% faster than the 5800X.
Interesting results, thanks for pointing it out. I'm certainly amused that their analysis makes it sound like the CPU bottleneck remains the same as it was a decade ago. It's quite unfortunate that they didn't re-run a selection of CPUs with the reduced environment detail setting.

I'll also note that while X3D is again showing true dominance only in the lower-stress average FPS numbers, the 1% and 0.2% lows, indicative of the higher-stress points, show a much smaller advantage. E.g., the 245K at 34 and 20 is quite comparable to the 7600X3D at 37 and 20. It's only in average FPS that the 7600X3D has a bit of an advantage, at 69.5 vs. 57.8.
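For reference, the '1% lows' and '0.2% lows' quoted here are typically derived from a frame-time capture: average the slowest N% of frames and convert back to FPS. A minimal sketch of one common method (reviewers differ in the exact definition, e.g. a percentile of frame times vs. an average over the worst frames; the trace below is hypothetical):

```python
def percent_low_fps(frame_times_ms: list[float], pct: float = 1.0) -> float:
    """FPS averaged over the slowest pct% of frames in a capture."""
    worst_first = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst_first) * pct / 100))
    avg_ms = sum(worst_first[:n]) / n
    return 1000.0 / avg_ms

# Hypothetical capture: mostly 10 ms frames (100 FPS) plus occasional 40 ms stutters
trace = [10.0] * 990 + [40.0] * 10
print(percent_low_fps(trace, 1.0))   # the 10 worst frames -> 25.0 FPS
print(percent_low_fps(trace, 0.2))   # the 2 worst frames  -> 25.0 FPS
```

This is why the lows compress the gap between CPUs: they isolate exactly the high-stress moments where every chip is struggling.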
 