
Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Wildcat Lake (WCL) Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing Raptor Lake-U. WCL consists of 2 tiles: a compute tile and a PCD tile. The compute tile is a true single die containing CPU, GPU and NPU, fabbed on the 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. They are connected through UCIe rather than D2D, a first from Intel. Expecting a launch in Q1 2026.

| | Intel Raptor Lake-U | Intel Wildcat Lake 15W | Intel Lunar Lake | Intel Panther Lake 4+0+4 |
| --- | --- | --- | --- | --- |
| Launch date | Q1 2024 | Q2 2026 | Q3 2024 | Q1 2026 |
| Model | Intel 150U | Intel Core 7 360 | Core Ultra 7 268V | Core Ultra 7 365 |
| Dies | 2 | 2 | 2 | 3 |
| Node | Intel 7 + ? | Intel 18A + TSMC N6 | TSMC N3B + N6 | Intel 18A + Intel 3 + TSMC N6 |
| CPU | 2 P-cores + 8 E-cores | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores |
| Threads | 12 | 6 | 8 | 8 |
| CPU max clock | 5.4 GHz | 4.8 GHz | 5 GHz | 4.8 GHz |
| L3 cache | 12 MB | 6 MB | 12 MB | 12 MB |
| TDP | 15 - 55 W | 15 - 35 W | 17 - 37 W | 25 - 55 W |
| Memory | 128-bit LPDDR5-5200 | 64-bit LPDDR5X-7467 | 128-bit LPDDR5X-8533 | 128-bit LPDDR5X-7467 |
| Max capacity | 96 GB | 48 GB | 32 GB | 128 GB |
| Bandwidth | 83 GB/s | 60 GB/s | 136 GB/s | 120 GB/s |
| GPU | Intel Graphics | Intel Graphics | Arc 140V | Intel Graphics |
| Ray tracing | No | No | Yes | Yes |
| EU / Xe cores | 96 EU | 2 Xe | 8 Xe | 4 Xe |
| GPU max clock | 1.3 GHz | 2.6 GHz | 2 GHz | 2.5 GHz |
| NPU | GNA 3.0 | 17 TOPS | 48 TOPS | 49 TOPS |
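As a sanity check on the bandwidth row, peak DRAM bandwidth follows directly from bus width and transfer rate. A small sketch (decimal units, figures taken from the table; the helper name is my own):

```python
def peak_bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s: (bus_bits / 8) bytes per transfer * MT/s."""
    return bus_bits / 8 * mt_per_s / 1000

# Figures from the spec table above.
configs = {
    "Raptor Lake-U (128-bit LPDDR5-5200)":  (128, 5200),   # ~83 GB/s
    "Wildcat Lake (64-bit LPDDR5X-7467)":   (64, 7467),    # ~60 GB/s
    "Lunar Lake (128-bit LPDDR5X-8533)":    (128, 8533),   # ~136.5 GB/s
    "Panther Lake (128-bit LPDDR5X-7467)":  (128, 7467),   # ~119.5 GB/s
}

for name, (bits, mts) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(bits, mts):.1f} GB/s")
```

Panther Lake comes out at 119.5 GB/s, so the 120 GB/s in the table is just rounding.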






[Attached slides: PPT1.jpg, PPT2.jpg, PPT3.jpg]



As Hot Chips 34 starts this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new platform generations after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first from Intel. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, a first from Intel to use GAA transistors, which it calls RibbonFET.



[Attached slide: LNL-MX.png]
 

Attachments: PantherLake.png, LNL.png, INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg, Clockspeed.png
though Lenovo's PTL chip still throttles ST perf heavily while on battery.
I wonder what impact this will have on battery life.
To me, that IdeaPad Pro 5 looks like it was tested in the Optimized Cooling mode and not Lenovo's High Performance mode. On my IdeaPad Pro 5 with Krackan Point (same generation as the ARL model), the Optimized Cooling mode sets the Windows power mode to Balanced. This results in a Speedometer 3.1 score of 24 in Balanced versus 35 in Performance mode (which is the same score achieved when plugged in).

I think most of the Intel and AMD CPUs in that list would score very well on battery vs. main power if the power profiles weren't all over the place.

In a way I'm happy that Qualcomm decided to make their X2 Elite marketing about this issue, we might get consistent default power profiles out of this noise, though I still think the bigger problem is how Microsoft approached power modes and power plans (in the most complicated and inconsistent fashion, as ever).
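For scale, the Balanced-vs-Performance gap quoted above works out to roughly a 31% drop. A quick check, using the Speedometer scores from the post:

```python
# Relative performance drop implied by the Speedometer 3.1 scores above:
# 24 on battery in Balanced vs 35 in Performance mode / plugged in.
battery, plugged = 24, 35
drop_pct = (plugged - battery) / plugged * 100
print(f"{drop_pct:.0f}% lower on battery")  # prints "31% lower on battery"
```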
 
As a related aside: does Intel have the ability to partially enable quad E-core clusters? I don't recall ever seeing an SKU from them with a partially enabled cluster.
I believe Intel disables cores at the level of fabric segments, or ring stops. E-cores share one ring stop along with the shared L2, so you can only disable the whole stop, which means all 4 cores and the L2 cache. P-cores have their own ring stops, so they can be disabled individually.

Nova Lake P-cores have clusters, so you would likely only be able to disable 2 at a time.
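The granularity rule proposed here can be put into a toy sketch (purely illustrative; the function and names are invented, and whether the rule holds for all SKUs is exactly what's being debated): P-cores fuse off one at a time, E-cores only as a whole 4-core cluster.

```python
# Toy model: P-cores can be fused off individually, E-cores only as a
# whole 4-core cluster (one ring stop + shared L2) at a time.

def valid_core_configs(p_cores: int, e_clusters: int, cluster_size: int = 4):
    """All (P, E) core counts reachable by fusing under these rules."""
    return sorted(
        (p, c * cluster_size)
        for p in range(p_cores + 1)
        for c in range(e_clusters + 1)
    )

# E.g. a hypothetical 4P + 8E (two-cluster) die:
configs = valid_core_configs(p_cores=4, e_clusters=2)
e_counts = sorted({e for _, e in configs})
print(e_counts)  # [0, 4, 8] -- no 6 E-core count from a partial cluster
```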
 
How is the Alder Lake N50 2(2) implemented?
 
You are right.

I was not aware of this SKU. It looks like, in terms of implementation, they install E-cores cluster-wise, but their ability to laser off parts is precise (even going into the core itself to cut off AVX-512 support, when I think about it further).
 
Whenever I see links like these from Twitter on the forum I feel like those movie characters that are going about their lives when 2 randos burst through one wall of their apartment, crush some furniture while beating each other, then promptly exit through the balcony after breaking as many windows as possible.
There should be a movie about that
 
Yeah, if you create a GPU bottleneck then sure, multiple CPUs can perform the same. You could have said the exact same thing about Bulldozer vs. Sandy Bridge CPUs if you GPU-bottlenecked them both.

Plenty of games are CPU-bound though: ACC, iRacing, MSFS, WoW, PoE, PoE2, D4, Stellaris, Civ 7 turn times, Anno, BG3, LoL, Dota 2, and plenty more titles on top. Most review sites base their comparisons on older AAA titles that are far less popular than the titles I have listed, which is just an incomplete picture.

Exactly. Measuring FPS while walking through a 3D scene, and pretending that's what all gaming is about, is a very limited view. Even good review sites - such as Hardware Unboxed - give quite bad recommendations, based on this assumption.

And don't get me started on Steve's recent recommendation that platform longevity is irrelevant, and he is still digging that hole.

Of the recent games on my Steam list, a game I bought recently - Terra Invicta - is very CPU bound.

Then, I have Civ IV - Colonization + WTP mod, which is a turn based game, and turn times become quite long in late game due to AI and trade routes.

Then, I have Transport Fever 2, which, at the highest speed is purely CPU limited, can go only as fast as the CPU allows.

SimAirport - same, CPU limited. Both of these games model every step of every person (Sim), and as the player builds up, there is more and more to simulate until the CPU becomes the only limiter.

EU4+MoE - long turn times, CPU limited.

Dyson Sphere Program - also CPU limited once the world is built up. The Chinese team developing this game is probably now treating it as a research project in CPU multithreading, and AMD donated some Threadripper CPUs to them.
 

Steve is not saying that platform longevity is irrelevant.

He constantly recommends the opposite.
 

Nope, only retroactively. Retroactively, HWUB praised AM4 platform longevity.

But proactively, they said that platform longevity is meaningless to them. Steve said that hardware longevity will play no role in making recommendations between AM5 CPUs and Arrow Lake CPUs.

Maybe in 5 years, HWUB will make another video, retroactively recommending AM5, but when it mattered to users, they did the opposite. They told their viewers to ignore platform longevity.
 
Even when platform cost means the advice makes no sense, like saying the 7600 is better value because you can get an X3D later, when the 14600K was $150 with a motherboard.

To me, what matters more is the work of replacing a motherboard. I have done it way too many times for it to be great fun. Also, my vision is not what it used to be, and the DIY companies are not exactly going out of their way to make things easier.

Regarding the 7600X vs. 14600K: at the time of the Zen 5 intro, the Alder Lake platform offered a PCIe 5.0 slot only for the GPU and none for NVMe.

Which proved to be poor foresight on Intel's part, since there were no GPUs supporting PCIe 5.0, but there were NVMe drives that did.

The cheap Alder Lake mobos you mention were probably DDR4 - a poor choice for longevity.

So the AM5 platform was hands down far better for future longevity at launch (at the time of Zen 4 vs. Alder/Raptor), and then again at the time of the Zen 5 and Arrow Lake launches, since the Arrow Lake platform was stillborn.
 
ADL was supposed to launch a few years before it actually did; things got delayed quite a lot with 10 nm.
 
When people give advice, they usually mean what they would do. Getting a 7600X for a later upgrade to an X3D or Zen 6 is no worse than getting a 14600K + mobo for cheaper. It depends on what the user wants: if longevity is important, get the 7600X; otherwise get the 14600K.

Unless you want a PCIe 5.0 NVMe.

BTW, I have 2 AM5 systems, one is running a small server, and for that one, I bought 7700x while the mobo has some server features (= $$$). I got an excellent bundle deal on 7700x.

So cheap CPU and expensive mobo - a play on longevity and potential CPU upgrade down the road.
 

Which could have worked out much better for Intel: if ADL had launched on time, the platform, as a result, would have had greater longevity.

But at the time Steve from HWUB advised against taking the platform longevity of AM5 into account, the Alder platform had no PCIe 5.0 for NVMe (with drives widely available) and Arrow Lake was about to launch as a single-gen platform.

All because Steve was Big Mad about Zen 5%.

We will see how Steve's opinion will evolve with NVL and Zen 6 launches.
 
I don't care about his opinion at all, tbh. As for opinions on NVL/Zen 6, we are forgetting the RAM crisis, lol.
 
Nope, only retroactively. Retroactively, HWUB praised AM4 platform longevity.

But proactively, they said that platform longevity is meaningless to them. Steve said that hardware longevity will play no role in making recommendations between AM5 CPUs and Arrow Lake CPUs.

Maybe in 5 years, HWUB will make another video, retroactively recommending AM5, but when it mattered to users, they did the opposite. They told their viewers to ignore platform longevity.
No, he said he wasn't going to mention platform longevity in the ARL launch review because AMD wouldn't confirm to him whether or not Zen 6 was going to be on AM5.

And he made the argument that the performance delta between launch and retirement of a socket was the true metric when discussing the benefit of longevity. Not how many years the socket was on the market.

You're either intentionally misrepresenting the argument or just didn't understand what he was saying.

Once it became obvious Zen6 was on AM5, he brought back up the benefit of longevity in his follow up reviews.

He's been, obnoxiously and almost to a fault, constantly talking about how great platform longevity is for a decade now, even when it may not even be a worthwhile argument.
 
Platform longevity is also about BIOS/firmware updates, improvements, and bug fixes; those should be taken into account as well. It's not just a CPU swap.
 
No, he said he wasn't going to mention platform longevity in the ARL launch review because AMD wouldn't confirm to him whether or not Zen 6 was going to be on AM5.

Which means, he can only recommend a platform, for its longevity, retroactively.

It's like retroactive stock trading:
Steve HWUB: "NVidia was a great stock to own for past 5 years"
Viewer: "Would you recommend NVidia now"
Steve HWUB: "No I can't, because NVidia did not confirm to us what their stock price will be in the future"

That's a great analysis job by Steve.

And he made the argument that the performance delta between launch and retirement of a socket was the true metric when discussing the benefit of longevity. Not how many years the socket was on the market.

Viewer: "Would you recommend a platform that promises exceptional longevity"
Steve HWUB: "No, because the CPU vendor did not confirm to us how fast their future CPUs will be."

You're either intentionally misrepresenting the argument or just didnt understand what he was saying.

I understood perfectly well. And I can also understand the difference between a stenographer and an analyst.

Once it became obvious Zen6 was on AM5, he brought back up the benefit of longevity in his follow up reviews.

So, no independent judgement, waiting to be spoon-fed.

He's been, obnoxiously and almost to a fault, constantly talking about how great platform longevity is for a decade now, even when it may not even be a worthwhile argument.

He certainly dropped the ball when he said he would disregard platform longevity (and recommended that viewers disregard it) in his ARL review and for the ARL launch.
 
I feel like a minor history lesson tangent with all this talk about Intel's poor platform longevity.

Intel has been on the ATX platform since it introduced it three decades ago. Want other examples of how our hobby wouldn't even exist without Intel? How about pretty much every other open hardware protocol used? PCI, AGP, USB, SATA, PCI-E, and other less notable examples all either developed and released directly by Intel or by an industry working group that Intel formed and spearheaded in order to make it happen. Without Intel forcing an open ecosystem you wouldn't be able to build your own computer because computers would be like every other consumer electronic device where you purchase the finished product and parts from one manufacturer are wholly incompatible with others. Even with Intel's push some computer manufacturers tried that approach.

The industry would also be quite a bit further behind on process technology without Intel's contributions. With the clearest example there being that EUV wouldn't exist without Intel kicking off the ASML co-investment program for EUV development with 1.5x the amount invested by Samsung and TSMC combined. There's also all the open source and other software contributions... including the behind the scenes efforts to keep Windows from sucking as much as Microsoft wants it to.

Yes, there's justifiable annoyance at Intel for the shenanigans they pulled to keep AMD from leeching off the industry they created. But what has AMD contributed to the industry again? The half-baked x86-64 extension? No question that Intel let the research project of Itanium go way too far, but they were never going to abandon x86 and had their own plans for an x86 64 bit extension... they just intended to wait a bit longer.

So yeah, what exactly are AMD, Apple, and NVIDIA doing to contribute back to the personal computer and the industry in general? I'd always considered Apple to be the king of closed ecosystems, but NVIDIA's giving them good competition.
 
You can add CXL, UCIe, and MRDIMM, which got standardized through JEDEC and Khronos-style industry groups; there is a very long list. You also forgot DRAM: Intel invented DRAM.

 
Intel attempted to close up the PC ecosystem by running their competition out of business and forcing OEMs/ODMs away from DRAM and into RDRAM. Fortunately, they failed.
Intel went with RDRAM starting in 1999 because moving from parallel to serial interfaces was viewed as the future at the time, and it promised to be faster than the SDR DRAM of the day. The first DDR SDRAM specification was formalized in mid-2000, and I believe AMD's 760 chipset was the first to support it, in Q4 2000. Upon observing the actual performance of RDRAM, Intel was quite happy to drop it in favor of DDR SDRAM starting with the 845 chipset in Q3 2001.

A 2-year foray into a new memory interface standard, at a time when there initially were no alternatives, is hardly an attempt to force the industry - really just a result of Intel constantly trying to improve its offerings and hence trying something new and unproven.
 