
Discussion Intel current and future Lakes & Rapids thread

I dunno if gutting L2$ would net them any speed back, since AMD clocks pretty fast.

It'd be more for space than performance. If Rocket Lake is Willow Cove, the 10-core is going to be so big. Yield-wise it shouldn't be a problem, especially if it's chiplets, but the wafers it eats could be an issue.
 
It's promising but the current state is not impressive, because of the low clock rates squandering the IPC increase. And the 1065G7 is equipped with 3733MHz memory compared to 2133MHz? Odd.

The integrated graphics are more impressive. But a comparison to Vega 11 would be interesting, despite its memory bandwidth disadvantage.
 
I dunno what he was talking about, but IPC varies with clocks.
By definition that statement makes no sense.

IPC is instructions per cycle. Frequency is cycles per second. So clocks, as you put it, do not affect the instructions per cycle; they affect the number of cycles in a given period, here one second, which is the frequency.

So, by definition alone, changing clocks, meaning frequency, has no effect whatever on IPC; it only changes the number of cycles per second.

Does it affect performance? Yes, because it increases the number of cycles per second. But it does NOT affect the number of instructions per cycle.
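To make the definitions concrete, here's a toy Python sketch (the instruction and cycle counts are made-up numbers, not measurements): IPC is instructions divided by cycles, so changing the clock changes the runtime, not the IPC.

```python
# Toy illustration with hypothetical counts: IPC = instructions / cycles,
# so frequency changes wall-clock time, not instructions per cycle.

def ipc(instructions, cycles):
    """Instructions retired per clock cycle."""
    return instructions / cycles

def runtime_seconds(cycles, frequency_hz):
    """Wall-clock time is cycles divided by cycles per second."""
    return cycles / frequency_hz

instructions = 4_000_000_000   # 4 billion instructions retired (assumed)
cycles = 2_000_000_000         # 2 billion cycles to retire them (assumed)

for freq_ghz in (2.5, 4.5):
    t = runtime_seconds(cycles, freq_ghz * 1e9)
    print(f"{freq_ghz} GHz: IPC = {ipc(instructions, cycles):.1f}, "
          f"time = {t:.3f} s")
# IPC is 2.0 at both frequencies; only the runtime shrinks at higher clock.
```

This assumes the cycle count itself doesn't change with frequency, which is exactly the point being debated further down the thread.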
 
What's odd about LPDDR4x being fast?
Nothing odd about it being fast but it's not very interesting for desktop folks. Many are already running memory that fast so the small improvements in application performance would be even smaller in reality.
 
Nothing odd about it being fast but it's not very interesting for desktop folks since many of us are already running memory that fast so the small improvements in application performance would be even smaller in reality.
The main good thing about LPDDR isn't performance, but its low power idle and deep sleep states.
 
The main good thing about LPDDR isn't performance, but its low power idle and deep sleep states.
I'm sure the 75% higher memory bandwidth is why they went with it. They delayed doing it until the power issues with LPDDR4 were sorted out. JEDEC (PDF) claims up to 40% lower pJ/bit with LPDDR4X vs LPDDR4, but with all the added bits per second it ends up about equal in power consumption with LPDDR3.
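The back-of-envelope math works out, since I/O power is roughly energy-per-bit times bits-per-second. A quick sketch (the baseline pJ/bit and bandwidth figures are illustrative placeholders, only the 40% and 75% ratios come from the discussion):

```python
# Back-of-envelope: 40% lower pJ/bit at 75% higher bandwidth nearly
# cancels out in total memory I/O power. Baseline figures are assumed.

baseline_pj_per_bit = 10.0            # hypothetical LPDDR4 figure, pJ/bit
baseline_bandwidth_gbit = 25.6 * 8    # hypothetical bandwidth, Gbit/s

lp4_power_w = baseline_pj_per_bit * 1e-12 * baseline_bandwidth_gbit * 1e9
lp4x_power_w = (baseline_pj_per_bit * 0.60) * 1e-12 \
             * (baseline_bandwidth_gbit * 1.75) * 1e9

print(f"power ratio LPDDR4X / LPDDR4 = {lp4x_power_w / lp4_power_w:.2f}")
# 0.60 * 1.75 = 1.05: about the same power for 75% more bandwidth.
```

Note the ratio is independent of the placeholder baseline values; it's just 0.60 × 1.75.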
 
It's promising but the current state is not impressive, because of the low clock rates squandering the IPC increase. And the 1065G7 is equipped with 3733MHz memory compared to 2133MHz? Odd.

The integrated graphics are more impressive. But a comparison to Vega 11 would be interesting, despite its memory bandwidth disadvantage.

I have to say when I saw the SPEC numbers I was like, wow, looks like it might be a winner. Then came the real world tests and it's much more meh. As a whole package, it's pretty good. But only looking at CPU performance it's rather... meh. I think that 10nm+ is killing them clock speed wise.

Then there's the fact that it looks like there may be no desktop 10nm. Everything I've seen points to Intel sticking with 14nm++ until 7nm drops in 2021, assuming they make the date this time.
 
I'll give you all a shrug out of a shrug.

"
[image: IbXgVw0.png]


Member of the CPU Logic Design team, worked on the uarch and RTL design of a next-generation VISC CPU core. => Part of Intel’s Big Core team, worked on next-generation, high-performance x86-based CPU cores.

From Soft Machines to Intel CPU Core group, new generation Core projects"
----
I fully expect a monstrosity the likes of which the world has never seen before. The patents related to the supposed architecture are pending or granted at Intel. For what it's worth, it's been done since the end of 2018. So, how long does it usually take to see an architecture?

16nm High Performance, High IPC VISC core w/ ARMv8-64 compatibility to Intel x86-64.
-> More IPC than Skylake, within similar clock ranges to the then M-series/Y-series and U-series.
 
I can't be bothered to link Andrei so whatever.

You see, your memory does not go faster if you up the CPU clockrate.
Hence why IPC varies with it.
Are you talking memory frequency or CPU frequency? IPC can be influenced by memory frequency, but not by CPU frequency. That has to do with latency reduction, keeping the caches fed with enough bandwidth, etc.

I will admit, I was skimming earlier. But if you meant that CPU frequency affects IPC, you are incorrect.
 
I don't know what's funny because 1 CYCLE = 1 CLOCK CYCLE. Power budget is not a consideration in an IPC test. Jesus!!

I dunno what he was talking about, but IPC varies with clocks.
I'm happy you said this, at least, and not that IPC is determined by power budget.

CPU freq affects your average IPC because your memory doesn't go faster together with CPU.
This is easily solved by making sure the memory subsystem is not a bottleneck in your test. Or, you could simply stick to manufacturer specs.
 
This is easily solved by making sure the memory subsystem is not a bottleneck in your test.

Depends on whether you want synthetic or "real world" data.

Some workloads can fit in cache. Assuming the cache scales with clockspeed, then the memory subsystem is not going to affect IPC testing.
Some workloads do not fit in cache. Now the memory subsystem is going to affect testing. If existing implementations of a particular uarch feature poor memory controller performance then obviously that will affect IPC testing.

Something like SuperPi 32m is affected by memory. Something like CBR20, less so.
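The cache-fit vs. memory-bound split above can be sketched with a toy model (all parameters are made up for illustration): memory latency is roughly fixed in nanoseconds, so a memory-bound workload stalls for more *cycles* at a higher clock and its measured IPC drops, while a cache-resident workload's IPC stays flat.

```python
# Toy model with assumed parameters: fixed memory latency in ns means
# stall *cycles* grow with frequency, dragging down measured IPC for
# memory-bound code but not for cache-resident code.

def measured_ipc(instructions, core_cycles, mem_accesses,
                 mem_latency_ns, freq_ghz):
    # ns -> cycles: latency in cycles scales with clock frequency
    stall_cycles = mem_accesses * mem_latency_ns * freq_ghz
    return instructions / (core_cycles + stall_cycles)

insts, core_cycles = 1_000_000, 500_000   # hypothetical workload
for freq in (2.5, 4.5):
    cached = measured_ipc(insts, core_cycles, mem_accesses=0,
                          mem_latency_ns=70, freq_ghz=freq)
    bound = measured_ipc(insts, core_cycles, mem_accesses=10_000,
                         mem_latency_ns=70, freq_ghz=freq)
    print(f"{freq} GHz: cache-fit IPC = {cached:.2f}, "
          f"memory-bound IPC = {bound:.2f}")
```

Under these assumptions the cache-fit case shows identical IPC at both clocks, while the memory-bound case shows lower IPC at 4.5 GHz than at 2.5 GHz, which is the effect being argued about in this thread.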
 
CPU freq affects your average IPC because your memory doesn't go faster together with CPU.

I was always curious about this notion, so today I quickly tested with CPU-Z to check if the myth stands (on an 8700K):

[image: IPCscaling.png]

There's at best a 2% difference in scores running from 4.5 to 2.5 GHz fixed in the BIOS, same RAM speed and timings. Myth busted?
Well, for this benchmark, sure; maybe at 5 GHz it decreases noticeably, but I won't test that with my crappy cooling… anyone interested, open another thread and find out with more benches, and more CPUs!
 
There's at best a 2% difference in scores running from 4.5 to 2.5 GHz fixed in the BIOS, same RAM speed and timings. Myth busted?
Well, for this benchmark, sure; maybe at 5 GHz it decreases noticeably, but I won't test that with my crappy cooling… anyone interested, open another thread and find out with more benches, and more CPUs!


Cinebench isn't memory sensitive, there is no surprise in your test.
 
I was always curious about this notion, so today I quickly tested with CPU-Z to check if the myth stands (on an 8700K):

[image attachment 9143]

There's at best a 2% difference in scores running from 4.5 to 2.5 GHz fixed in the BIOS, same RAM speed and timings. Myth busted?
Well, for this benchmark, sure; maybe at 5 GHz it decreases noticeably, but I won't test that with my crappy cooling… anyone interested, open another thread and find out with more benches, and more CPUs!
Yes, CPU-Z and CB, both known to have tons of memory pressure, amirite?

IPC should only be measured at peak performance of a chip, because that's the only data point that matters. Everything below that will artificially inflate IPC, because you're essentially speeding up memory, in core cycles, by the same amount you reduce the clock.
 