
Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]


DisEnchantment

Golden Member
TSMC's N7 EUV is now in its second year of production, and N5 is contributing to TSMC's revenue this quarter. N3 is scheduled for 2022, and I believe they have a good chance of hitting that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung, the nodes are basically the same from 7LPP to 4LPE; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2; hopefully they fix any remaining issues.
They have two more intermediate nodes before going to 3GAE: 5LPE will most likely ship next year, but 4LPE will probably arrive back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP enhancements.



Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistor will be fabricated due to GAA, but in density Samsung will surely be behind N3.
There might be advantages for Samsung in power and performance, though, so it may be better suited for some applications.
For now we don't know how much of this is true; we can only rely on the marketing material.

This year there should be a lot more wafers available, due to the lack of demand from smartphone vendors and increased capacity at TSMC and Samsung.
Lots of SoCs that don't need to be top-end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers around.

Most of the current 7nm designs are far from the densities advertised by TSMC and Samsung, so there is still potential for density increases compared to currently shipping products.
N5 is going to be the leading foundry node for the next couple of years.

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 
[Die shot image: Kurnal on twitter]
The E-core is ~95% of the area; the P-core, however, looks to have actually shrunk a good bit, to ~90% of the area.
Hopefully we get better die shots soon from Kurnal (who claims he will grind down the CPU tile for a better shot) to see which parts of the core this shrink is attributable to: the FPU, SRAM, L1 caches, etc.
 
It seems that the bulk of Intel's "Intel 7" capacity (according to DKR) - the capacity that is going to provide the upside in the red-hot server CPU market - happens to be in a war zone.


Let's be honest, Intel has been working in a war zone for decades now. They're used to it. Also Intel 7 isn't doing anything newer than Emerald Rapids so it becomes less-relevant to the server room by the day.
 
Let's be honest, Intel has been working in a war zone for decades now. They're used to it. Also Intel 7 isn't doing anything newer than Emerald Rapids so it becomes less-relevant to the server room by the day.
Well, EMR is still decent as a CPU, and it's cheap.
 
If you're a fan of hilarious AI slop, I found a good one for you. I was curious whether there have been any updates on TSMC A14's schedule (i.e. will it end up being basically three full years between N2 and A14, as was the case for N3 and N5, as I assume it will), so I did a search (I use DDG) for 'tsmc a14 risk production', and below was the first result.

The first sentence is promising - a bit too specific on yield, but I could believe A14 was yielding that on SRAM test chips when the article was written in December. But it quickly flies off the rails; you have to read it yourself. Just watch out if you want an iPhone with an A20: that's not coming out until late 2027, since it's being made on A14, and oh by the way, it will have 240 billion transistors in a 480 mm^2 SoC, so those iPhones are gonna run HOT lol!

https://techcrawlr.com/tsmc-begins-risk-production-on-1-4-nm-a14-process-node/
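A quick sanity check on the article's headline numbers (a rough sketch; the 240 billion transistor / 480 mm^2 figures come from the slop article itself, and the comparison density is only a public-reporting ballpark, not an official TSMC figure):

```python
# Sanity-check the article's claim: 240 billion transistors in a 480 mm^2 SoC.
transistors = 240e9
area_mm2 = 480.0
density_mtr_per_mm2 = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
print(f"Implied density: {density_mtr_per_mm2:.0f} MTr/mm^2")  # -> 500 MTr/mm^2
# For context, 3 nm-class logic has been reported on the order of ~200 MTr/mm^2
# at peak, and real mixed logic/SRAM products land well below peak marketing
# density - so a 500 MTr/mm^2 phone SoC is not remotely plausible.
```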
 
not sure if anyone posted this before

Tesla open to Intel partnership for ‘Terafab’ AI chip manufacturing project

Tesla CEO Elon Musk said the company may need to construct “a gigantic chip fab” to meet future artificial intelligence computing demand, adding that he is open to “having discussions with Intel” about potential collaboration.


Tesla is developing its fifth-generation AI chip, dubbed AI5, which will power its autonomous driving systems and robotics programs. Musk stated at Tesla’s annual meeting that the chip would be produced initially in small quantities in 2026, ramping to volume production by 2027. “Even when we extrapolate the best-case scenario for chip production from our suppliers, it’s still not enough,” he said, adding that “we may have to do a Tesla terafab. It’s like giga but way bigger”, as reported by Reuters.

According to Musk, the new AI5 processor will be “inexpensive, power-efficient and optimised for Tesla’s own software”, consuming about one-third of the power of Nvidia’s Blackwell chip at roughly 10% of the production cost, according to Tech Republic.

He said the fab would be capable of producing at least 100,000 wafer starts per month, indicating large-scale volume comparable to leading-edge foundries. Tesla is currently sourcing chips from TSMC and Samsung, but Musk indicated that supplier capacity “is still not enough.”

The AI6 chip, expected in 2028, will reuse the same fabrication infrastructure and deliver roughly twice the performance of AI5, according to Musk’s social media statements cited by Tech Report.
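For scale, 100,000 wafer starts per month is an enormous amount of silicon. A rough estimate using the standard dies-per-wafer approximation (the die area and yield below are invented for illustration; Tesla has not disclosed AI5's die size):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Standard approximation: usable wafer area minus edge loss."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical numbers, for illustration only.
die_area = 300.0                       # mm^2, assumed large AI accelerator die
dpw = dies_per_wafer(300, die_area)    # standard 300 mm wafer
wafer_starts = 100_000                 # per month, per Musk's claim
yield_rate = 0.6                       # assumed
chips_per_month = dpw * wafer_starts * yield_rate
print(f"~{dpw:.0f} candidate dies/wafer, ~{chips_per_month/1e6:.1f}M good chips/month")
```

Under those assumptions that is on the order of ten million large chips a month, which is why the claim implies capacity comparable to a leading-edge foundry's biggest sites.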

 
Tesla open to Intel partnership for ‘Terafab’ AI chip manufacturing project [...]

Oh great. Thank goodness this POS can’t influence TSMC.

Anyway, good luck Intel dealing with Musk
 
Tesla open to Intel partnership for ‘Terafab’ AI chip manufacturing project [...]

Guessing this will be built on the moon or in a slowly decaying orbit.
 
Well Musk did say that he wasn't going to build cleanrooms, so the only way he can do his terafab fantasy is to be the bank for others to build and operate them for him.

Intel's biggest obstacle to competing with TSMC is lack of capital, so if Musk wants to provide that capital that's good for Intel. They could get that stalled Ohio fab cluster going, so long as they are smart enough to take the money not as direct loans but loans in lieu - essentially saying "we will pay this off by delivering wafers to you". That way if Musk's ketamine dreams of taking over taxis, AI and building a robot army don't come to fruition Intel isn't on the hook to pay back massive loans on empty fabs.

And yes, I say capital is the biggest obstacle to competing with TSMC, knowing that some will say "no, Intel being behind on process is", but that's not true. Even if Intel is a generation behind or has worse yields, demand is so high right now that Intel would be able to sell everything they can make. They just can't make much right now, because they've been stuck in a chicken-and-egg situation: they don't have big customers signed up, so they can't afford to build fab space for them, and having to wait years for fab space to be built means they can't sign up big customers.
 
Well Musk did say that he wasn't going to build cleanrooms, so the only way he can do his terafab fantasy is to be the bank for others to build and operate them for him. [...]
We don't need no stinking cleanroom in space!

 
TSMC says 7-10% density improvement for A16 vs N2P. BSPDN helps density, especially in wire-dense designs like Nvidia's AI chips.
Fair. It does move power delivery from the front to the back of the wafer, which frees up space for more frontside routing.
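Back-of-the-envelope on what a 7-10% density improvement buys in area (simple arithmetic on the quoted range, nothing more):

```python
# A density improvement of g means the same design occupies 1/(1+g) of its
# former area, i.e. a 7-10% density gain saves roughly 6.5-9% of area.
for gain in (0.07, 0.10):
    area_factor = 1 / (1 + gain)
    print(f"+{gain:.0%} density -> {area_factor:.1%} of original area "
          f"({1 - area_factor:.1%} area saved)")
```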
 
If you're a fan of hilarious AI slop, I found a good one for you. [...]
I actually believe that's the author writing it; he probably got the info from AI, though, like 99% of people nowadays. There are many tech writers who don't know what they're writing about. 480 mm^2 in itself would only be believable for a Max successor, not a phone.

I have yet to see a recent piece that isn't directly sourced from AI.

The reference to carbon nanotube interconnect pilots seems suspicious to me. I can't find any mention of carbon nanotubes in connection with A14.

It also talks about a 5.8 GHz all-core boost on a 120-core ARM processor using 285 W. That doesn't seem likely either.
 
The difference between A16 and A14 is single-digit in perf/watt
Mind you, this is also a potential gain. They can choose to use it entirely to reduce design complexity and relax pitches, and/or take other routes to make the node easier and faster to bring to market.

It can also be used for lower-cost processors, spending the small frequency advantage on not having to care about binning and other techniques that increase frequency.

I hypothesize that Intel may have taken this route as well, as the bigger and more complex P-core got a greater shrink than the E-core. Perhaps the potential ease of routing benefited the "low skill" P-core design more. Moore's Law has always been about what they call "democratizing computing", a general term for bringing things to the masses.

You can think of BSPDN as similar to strained silicon, high-k dielectrics, and copper interconnects: it helps keep Moore's Law going, but it isn't a primary driver of it.
 
Mind you, this is also a potential gain. They can choose to use it entirely to reduce design complexity and relax pitches, and/or take other routes to make the node easier and faster to bring to market.
Well, BSPDN adds complexity and issues of its own
 
I actually believe that's the author writing it; he probably got the info from AI, though, like 99% of people nowadays. There are many tech writers who don't know what they're writing about. 480 mm^2 in itself would only be believable for a Max successor, not a phone.

Sure, 480 mm^2 would be believable if Apple were still doing monolithic dies for the Max, but I still think that's AI slop because of the number of very specific but quite wrong numbers and dates it includes.

A tech writer might be clueless about the typical size of a smartphone SoC and think 480 mm^2 is a reasonable number, but to ALSO be wildly wrong on N2's density, the year the A20/M6 will appear, the number of N2 wafers per month Apple will consume, and so forth? Only AI is confidently wrong about so many things at once.
 