AMD ZEN 6 — Next-gen Chiplets & Packaging
Summary
TL;DR: This video script discusses the evolution of AMD's Ryzen CPU generations, highlighting the technological advancements and performance improvements from Zen 2 to the anticipated Zen 6. It delves into the interconnect and packaging design, comparing the simplicity and cost-effectiveness of AMD's current chiplet architecture with the potential benefits and complexities of future technologies like silicon interposers and organic redistribution layers. The script speculates on the likely direction AMD will take with Zen 6, suggesting that an organic interposer with fan-out interconnects, similar to TSMC's InFO_R, could be the most feasible and balanced solution for enhancing bandwidth, reducing latency, and improving energy efficiency across both consumer and server-grade CPUs.
Takeaways
- 🔍 AMD's Ryzen CPUs have seen three generations of Zen architecture with Zen 2, Zen 3, and Zen 4, but the interconnect and packaging design has remained consistent since Zen 2's introduction in 2019.
- 🌐 Zen 2's chiplet architecture used a simple and cost-effective method of connecting chiplets through traces in the PCB, a method that has been in use for decades.
- 🚀 The simplicity of AMD's interconnect technology comes with drawbacks such as low bandwidth, high latency, and higher energy consumption.
- 🔄 Zen 6 is expected to introduce significant changes to layout, packaging, and interconnect design to meet the demands of future Ryzen and EPYC generations.
- 💡 Silicon interposers offer higher bandwidth, lower latency, and better energy efficiency but are complex and expensive to implement.
- 🌉 Silicon bridges, like Intel's EMIB, aim to achieve similar benefits as silicon interposers but with lower complexity and cost.
- 🔗 Organic Redistribution Layers (RDL) with fan-out interconnects, such as TSMC's InFO_R, offer a balance between performance and cost, using organic compounds instead of silicon.
- 🎯 AMD's Infinity Links technology, used in Navi 31 and 32, provides 10x the bandwidth density of Infinity Fabric On-Package with significant power consumption reduction.
- 💻 For desktop Ryzen CPUs, Zen 6's potential shift to Infinity Links could improve latency, benefiting performance in latency-sensitive applications like gaming.
- 🛠️ Server EPYC CPUs could see substantial improvements in interconnect efficiency with Infinity Links, potentially enabling lower TDPs or allowing for more cores and higher clock speeds.
Q & A
What are the Ryzen generations discussed in the transcript?
-The Ryzen generations discussed are Zen 2 (Ryzen 5 3600), Zen 3 (Ryzen 5 5600), and Zen 4 (Ryzen 7 7800X3D).
What is the main visual difference between the Ryzen 7 7800X3D and its predecessors?
-The main visual difference of the Ryzen 7 7800X3D is the capacitors surrounding it, not the chiplets.
What has remained unchanged in AMD's Ryzen CPUs since the introduction of Zen 2?
-The interconnect and packaging design of AMD's Ryzen CPUs has remained unchanged since the introduction of Zen 2.
What significant change is Zen 6 expected to introduce?
-Zen 6 is expected to introduce sweeping changes to layout, packaging, and interconnect design.
What are the drawbacks of using PCB for connecting chiplets as mentioned in the script?
-The drawbacks include low bandwidth, high latency, and high energy consumption.
What are silicon interposers and their advantages?
-Silicon interposers are pieces of silicon placed between the substrate and chiplets, offering higher interconnect density, more bandwidth, lower latency, and reduced energy use.
What is Intel's EMIB and how does it compare to silicon interposers?
-Intel's EMIB (Embedded Multi-Die Interconnect Bridge) is a technology that achieves similar benefits to silicon interposers at lower complexity and cost by using smaller pieces of silicon where chiplets meet.
What potential technology might AMD use for Zen 6 according to the transcript?
-AMD might use a technology involving organic redistribution layers (RDL) with a fanout interconnect for Zen 6, similar to what is used in Navi 31 & 32.
What are the benefits of using an organic interposer with fan-out interconnects?
-Benefits include higher bandwidth, lower latency, and better energy efficiency compared to current Infinity Fabric On-Package.
How might Zen 6 differ in terms of chiplet placement compared to previous generations?
-For desktop Ryzen CPUs, Zen 6 might have CPU chiplets placed right next to the IO-die, differing from previous generations where they were located away.
Outlines
🔍 AMD Ryzen CPU Generations Comparison
This paragraph discusses the visual comparison of AMD's Ryzen CPUs across three generations: Zen 2 (Ryzen 5 3600), Zen 3 (Ryzen 5 5600), and Zen 4 (Ryzen 7 7800X3D). It highlights the similarities in design, with a focus on the IO-die and CPU chiplet structure, and notes the lack of significant visual changes despite technological advancements. The paragraph also touches on the potential for Zen 6 to introduce significant changes in layout, packaging, and interconnect design.
🛠️ Evolution of AMD's Chiplet Architecture
The paragraph delves into the evolution of AMD's chiplet architecture, starting with Zen 2's introduction in 2019. It explains the simplicity and cost-effectiveness of AMD's interconnect and packaging technologies, which use PCB traces to connect chiplets. Despite the drawbacks of low bandwidth and high latency, the paragraph emphasizes the positive aspects of cost and simplicity. It also discusses the need for a more advanced interconnect technology for future Ryzen generations, considering the trade-offs between performance, efficiency, and cost.
🔧 Exploring Advanced Interconnect Technologies
This section explores the advanced interconnect technologies, focusing on silicon interposers and their benefits, such as higher bandwidth, lower latency, and improved energy efficiency. It discusses the challenges and costs associated with implementing silicon interposers, including size limitations, mask stitching technology, and the fragility of the interposers. The paragraph also introduces silicon bridges as a more cost-effective alternative, explaining their concept and potential advantages over silicon interposers.
🌐 Potential Interconnect Solutions for Zen 6
The paragraph discusses potential interconnect solutions for AMD's Zen 6 architecture, including silicon interposers, silicon bridges, and organic redistribution layers (RDL) with fan-out interconnects. It highlights the benefits and drawbacks of each technology, with a focus on the organic RDL as the most likely contender for Zen 6 due to its balance of performance and cost. The paragraph also speculates on how these technologies might affect the layout and performance of future desktop and server CPUs.
🚀 The Future of AMD's Chiplet Interconnect
The final paragraph reflects on the legacy of Zen 2 and the potential for AMD's next-gen interconnect technology. It suggests that while the new technology will be more expensive, it will offer significant benefits in bandwidth, latency, and energy efficiency. The paragraph also contemplates whether AMD should follow Intel's approach with silicon interposers or maintain a balance between cost and performance, ultimately suggesting that an organic interposer solution like TSMC's InFO_R is the most likely path for AMD's future chiplets.
Keywords
💡Chiplet Architecture
💡Interconnect Technology
💡Silicon Interposer
💡Silicon Bridge
💡Organic Redistribution Layer (RDL)
💡Infinity Fabric
💡Bandwidth
💡Latency
💡Energy Efficiency
💡TSMC
💡Zen 6
Highlights
Visual comparison of AMD's Ryzen generations from Zen 2 to Zen 4.
Zen 2, 3, and 4 CPUs have a consistent design with one large IO-die and a single CPU chiplet.
AMD's Zen 5 is expected to maintain the same design as previous generations.
Zen 6 is anticipated to introduce significant changes in layout, packaging, and interconnect design.
Zen 2's impact was due to its simple and cost-effective interconnect and packaging technologies.
AMD's chiplet architecture uses traces through the PCB, a method dating back decades.
The simplicity of AMD's PCB design comes with drawbacks like low bandwidth and high latency.
Silicon interposers offer higher interconnect density, bandwidth, and lower latency at the cost of increased complexity and expense.
Silicon bridges, like Intel's EMIB, aim to achieve similar benefits as silicon interposers but with lower complexity and cost.
AMD's Navi 31 and 32 use an organic RDL interposer with fan-out interconnects, known as Infinity Links.
Infinity Links offer 10x the bandwidth density of Infinity Fabric On-Package with significant power consumption reduction.
For desktop Ryzen CPUs, Zen 6 could place CPU chiplets next to the IO-die for reduced latency.
EPYC servers could benefit from increased interconnect efficiency and reduced energy consumption with Zen 6.
AMD's next-gen interconnect technology is expected to be more expensive but offer higher bandwidth, lower latency, and better energy efficiency.
- AMD may implement different interconnect solutions for server and client parts, like a silicon bridge for EPYC and an organic RDL for Ryzen.
TSMC's InFO_R is a likely contender for AMD's next-gen chiplet interconnect architecture.
AMD's Infinity Links could be the future of interconnect technology, offering a balance of cost and performance.
Transcripts
What you are looking at is a visual comparison of AMD's last three Ryzen generations. From
left to right, we have Zen 2 in the form of a Ryzen 5 3600, Zen 3 as a Ryzen 5 5600 and
a Zen 4 based Ryzen 7 7800X3D. That's three CPU generations on two different sockets,
with some pretty substantial changes in technology and performance. But looking at these gorgeous
near infrared pictures from Fritzchens Fritz, it's difficult to spot any visual differences.
3600 and 5600 are pretty much identical and the main visual difference of the 7800X3D
is not the chiplets, but rather the capacitors surrounding it. All three CPUs basically look
the same: one large IO-die and a single CPU chiplet, with room for one more. That's because
ever since the introduction of Zen 2 in 2019, the interconnect and packaging design of AMD's
Ryzen CPUs hasn't changed at all. And from what we know, Zen 5 will still use this very
same design. They say don't fix what ain't broken, but
after soon to be four generations on the same chiplet architecture, it's time for something
new: Zen 6 is supposed to introduce sweeping changes to layout, packaging and interconnect
design. But what exactly is supposed to change, which technologies could be used and how will
Zen 6 benefit? Zen 2 wasn't only the first mass market chiplet
architecture, it was arguably the most impactful one, even though its interconnect and packaging
technologies are rather simple. Instead of using complex and expensive technologies,
AMD is connecting the individual chiplets by running traces through the PCB, something
that has been done in semiconductors for many decades and isn't only easy to implement but
also a flexible and very cost effective way to design a multi-chip-module.
This image of a Zen 2 PCB shows all the traces within the 12-layer substrate. And while it
might look complex at first, once you - quite literally - connect the dots, the simplicity
becomes apparent. We can make out the area for the two CPU chiplets in the upper half
of the PCB and the IO-die below that in the center. The CPU chiplets are only connected
to the IO-die, all of the other traces are routed directly from and to the IO-die, which
handles input and output, such as system memory and PCI-Express. Even though the CPU chiplets
are placed in close proximity to each other, they are not directly connected, every communication
has to go through the IO-die first. Yes, there are many traces on the PCB, but
the technology behind them is simple. On face value, it's literally just tiny copper wires
embedded into the PCB, not unlike your motherboard or any other electronic device with a printed
circuit board. Sometimes, PCBs are also called "printed wiring boards", which I feel like
is an even more fitting name, as they are a medium used to connect different components
with tiny wires. That's the technology AMD has been using for
its breakthrough chiplet architecture and is still using today. Even the Infinity Fabric
protocol that is used to transfer the data over these wires is based in large part on
PCI-Express, a proven technology. But this simplicity doesn't come without drawbacks:
transporting data via the PCB results in low bandwidth, high latency and also consumes
a lot of energy. It's so simple to design and cheap to produce, because it is a low-tech
implementation. The positive aspects are all on the cost side; performance and
efficiency take a backseat. Going forward, AMD needs an interconnect technology
that is able to meet the demands of future Ryzen and EPYC generations: more bandwidth,
lower latency and reduced energy cost when transporting data. On the flip side, this
new technology will certainly be more expensive to design, implement and manufacture. Let's
take a look at the available options, weigh their pros & cons and see which one makes
the most sense for Zen 6. The most advanced interconnect technologies
use so called silicon interposers, which are pieces of silicon placed in-between the substrate
and the chiplets. This image from TSMC depicts their CoWoS technology, but it's true for
any other silicon interposer packaging method. Funny enough, the image on the left looks
strikingly similar to AMD's current chiplet design, maybe the one on the right shows its
future? The advantage of a silicon interposer is that
instead of using copper wires on the PCB, the chiplets are sitting on top of the same
piece of silicon, which means data connections between chiplets never have to leave the silicon.
Running data paths through silicon allows for a much higher interconnect density, resulting
in more bandwidth and lower latency, while at the same time also using less energy, because
interposers provide a much larger channel for electrical signals, reducing the amount
of energy needed to drive the data signals. AMD's current low-tech solution uses around
1.5 to 2 picojoules per bit, while a silicon interposer interconnect is in the ballpark
of around 0.2 to 0.5 pJ/b. A huge increase in power efficiency.
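The pJ/bit figures above turn into watts once you pick a bandwidth. A minimal sketch of that arithmetic, where the 100 GB/s link bandwidth is an assumed illustrative value, not an AMD specification:

```python
# Interconnect power = bits per second x energy per bit.
# The 1.5-2.0 pJ/b (PCB traces) and 0.2-0.5 pJ/b (silicon interposer)
# ranges come from the text; the 100 GB/s bandwidth is assumed.

def interconnect_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Convert a bandwidth (GB/s) and energy cost (pJ/bit) into watts."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

bandwidth = 100  # GB/s, assumed for illustration
print(f"PCB traces:         {interconnect_power_watts(bandwidth, 1.5):.2f}-"
      f"{interconnect_power_watts(bandwidth, 2.0):.2f} W")
print(f"Silicon interposer: {interconnect_power_watts(bandwidth, 0.2):.2f}-"
      f"{interconnect_power_watts(bandwidth, 0.5):.2f} W")
```

At the same assumed bandwidth, the interposer range works out to roughly 0.16-0.4 W versus 1.2-1.6 W for PCB traces — the several-fold efficiency gap the script describes.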
So you get higher bandwidth, lower latency and it uses less energy? What's not to like?
Well, there's a trade-off. This time the solution is high-tech, but the costs are high too.
Not only does it take more engineering resources to design and implement a silicon interposer
design, but the interposer itself also doesn't come cheap. Size is a problem too, as the
silicon interposer has to be big enough to fit a large number of chiplets. This requires
special technology, so called mask stitching, to produce interposers above the current EUV
reticle limit of 858 mm², which further increases cost.
The packaging also requires more precision, since the interposer is very fragile and breaks
easily. And then you have to completely rethink your power and data routing, since you can't
just connect straight to the chiplets, because the interposer is in the way. For that you
need so called TSVs, through-silicon vias, another cost factor.
In a nutshell, silicon interposers are great from a pure performance and efficiency perspective,
but they are also very complex and expensive to implement. Especially in today's market,
manufacturing capacities are limited due to the huge demand for AI GPUs like Nvidia's H100
and AMD's MI300, which all use silicon interposers. AMD clearly has the ability to produce complex
interposer designs, but in my opinion it's not a feasible solution for the consumer market
and even HPC CPUs. Zen 6 very likely won't use a silicon interposer.
The next best thing are so called "silicon bridges", with Intel's EMIB being the most
famous technology of this type. The idea behind silicon bridges is pretty simple: achieve
the same benefits of a silicon interposer at lower complexity and lower cost. So, how
do you do that? The name is a pretty good description of the
concept: instead of one large interposer that covers all of the chiplets you want to connect,
you place smaller pieces of silicon right where two chiplets meet. Intel's EMIB, which
is short for Embedded Multi-Die Interconnect Bridge, places the silicon bridge inside the
package substrate, while other solutions like InFO_LSI or CoWoS_L from TSMC raise the silicon
bridge above the substrate, a technology used for AMD's MI200, where it was dubbed "Elevated
Fanout Bridge". In theory, a silicon bridge offers similar
benefits to a silicon interposer, without the high costs a single piece of silicon incurs
and the smaller bridges also don't block access to the individual chiplets from below, reducing
the use of expensive TSVs. But placing the silicon bridges isn't easy, especially when
you need a lot of them, like when connecting multiple CPU chiplets with multiple IO-dies.
Each placement becomes a potential point of failure during packaging: if one interconnect
fails, the whole chip might be wasted. A while back, Tom from Moore's Law Is Dead
leaked early Zen 6 design abstractions. In his video, which I've linked below, he showed
slightly altered versions of internal AMD slides, outlining the layout of Zen 6 based
server CPUs. In Tom's video we can see a combination of closely connected IO-dies and CPU chiplets,
which at first glance look very similar to a silicon bridge implementation. The chiplets
are placed right next to each other and the interconnect areas seem to overlap, resembling
an embedded or elevated bridge. There are a few options for AMD to choose
from, if they want to use silicon bridges. TSMC offers the already mentioned InFO_LSI
and CoWoS_L, which are basically the same technology with a different order of packaging.
Integrated Fanout, or InFO, is a chip-first process, where the chips are placed first
and then the interposer or bridge layer is built up second. CoWoS is a chip-last process,
where the interposer or bridge layers are built up first and then the chips are connected
in a second step. Placing the chips last is easier on the chips, which makes it very
valuable for fragile chips like HBM, that's why basically all HBM based chips are using
CoWoS packaging technology. Since Zen 6 very likely won't use HBM, an InFO variant would
be the more likely choice for AMD. And TSMC isn't the only provider, Outsourced
Semiconductor Assembly and Test companies (OSATs) offer similar technologies, in the case of ASE for
example FOCoS-Bridge. The point is, using a silicon bridge technology wouldn't be limited
by high CoWoS demand and there are other providers aside from TSMC. As such, a silicon bridge
is a much more likely contender for Zen 6 interconnect technology. It offers similar
benefits as an interposer while at the same time reducing some of its greatest drawbacks.
And while I wouldn't be completely surprised to see a bridge interconnect used for Zen 6,
there's one more technology that I personally consider to be the most likely contender for
AMD's next-gen chiplet design: an organic redistribution layer, RDL for short, with a fan-out interconnect.
This technology takes a page out of the silicon interposer playbook, by also building up an
interposer that sits below all the chiplets. But instead of using silicon as the material
of choice, other organic compounds are used, mostly composite materials.
The idea is simple: to achieve benefits like higher bandwidth, lower latency and especially
more energy efficiency, you need to increase interconnect density so you can create larger
channels for electrical signals. Basically, you need to create more space for more interconnects.
Yes, silicon would be the perfect material for such a solution, but as we discussed before,
silicon is expensive and comes with a lot of other packaging related drawbacks. If you
are using a less capable material the benefits won't be as great, but you can still achieve
a more dense connection nonetheless. And AMD already has experience with organic
interposers. Both Navi 31 and 32 use a form of TSMC's InFO_R, short for Integrated Fanout
RDL, sometimes also called InFO_oS for "on-Substrate". AMD calls it "Infinity Links". Here's how
it works: an organic RDL interposer with at least four layers is placed below the GCD
and the MCDs. At the intersection of each chip, the RDL is used to quite literally "fan
out" the interconnects onto a much larger area, which wouldn't be possible without the
extra space the RDL interposer creates. It's called "fan out", because instead of straight
point-to-point connections, these high density interconnects look like an old-fashioned folding
fan. This visual comparison by ASE is pretty good
at explaining how fan-out works. On the left you can see standard fan-in, where the traces
are routed within the area of the chip, which means the chip itself limits how many interconnects
you can create. Fan-out on the right side shows that traces are routed outwards from
the chip, to build connection points beyond the limit of the individual chip. And now
imagine the effect you create when you have more than one layer available, the interconnect
density greatly increases. AMD claims that these Infinity Links offer
about 10x the bandwidth density of Infinity Fabric On-Package, with a staggering 5.3TB/s,
and also reduce power consumption (pJ/bit) by up to 80% at the same time. In their RDNA3
presentation AMD even compared a current gen EPYC CPU using Infinity Fabric to their new
Infinity Links, which is a pretty huge hint in my opinion. On the left we can see 25 wires
on a standard organic package, as used by Infinity Fabric, and the tiny image on the
right shows 50 wires using the new Infinity Link technology with an organic interposer.
The images are to scale, so you can see how much smaller and more dense the Infinity Links
actually are; the difference is staggering. Of course an organic interposer also comes
with some of the same drawbacks as a silicon one, but just like the performance benefits,
the drawbacks are also less pronounced. Like CoWoS, InFO also offers large interposers,
multiple times the EUV reticle limit, which should be enough for even a Zen 6 based EPYC
server CPU. Just like a silicon interposer, an organic RDL is also somewhat blocking access
to the dies from below, as it sits in-between the substrate and the chiplets, but it's also
easier to run connections through organic material.
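To put AMD's quoted numbers in perspective, a hedged back-of-the-envelope: at the claimed 5.3 TB/s aggregate bandwidth, an 80% pJ/bit reduction is worth tens of watts. The 2.0 pJ/b baseline here is my assumption, taken from the upper end of the Infinity Fabric On-Package range quoted earlier:

```python
# Implied power at Navi 31's quoted 5.3 TB/s aggregate bandwidth.
# The 2.0 pJ/b baseline is an assumption (upper end of the PCB range
# given earlier); the 80% reduction is AMD's Infinity Links claim.

def power_watts(tb_per_s: float, pj_per_bit: float) -> float:
    """Aggregate bandwidth (TB/s) times energy per bit (pJ/b) in watts."""
    return tb_per_s * 1e12 * 8 * pj_per_bit * 1e-12

baseline_pj = 2.0                    # assumed on-package Infinity Fabric cost
links_pj = baseline_pj * (1 - 0.80)  # 80% lower -> 0.4 pJ/b

print(f"Infinity Fabric On-Package: {power_watts(5.3, baseline_pj):.1f} W")
print(f"Infinity Links:             {power_watts(5.3, links_pj):.1f} W")
```

Under these assumptions the same 5.3 TB/s would cost roughly 85 W over PCB-style traces but only about 17 W over the organic interposer, which is why the fan-out RDL matters.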
In a nutshell, an organic interposer with fan-out interconnects strikes a perfect balance. It
offers higher bandwidth, lower latency and better energy efficiency compared to the current
Infinity Fabric On-Package, while its downsides are not as pronounced as with a silicon interposer,
meaning costs are manageable. Navi 31 and 32 are proof that AMD can scale such a technology
for standard consumer products without hurting margins. In the end, TSMC's InFO_R is the most
likely contender, but solutions such as FOCoS from ASE are also a possibility. An organic
interposer is definitely my number one contender for AMD's next-gen chiplet interconnect architecture.
Now, let's assume Zen 6 will replace the current Infinity Fabric interconnect with Infinity
Links, similar to Navi 31 & 32, what would the actual benefits be and how different would
the packaging and layout look? For desktop Ryzen CPUs, the most obvious change
would be the placement of the CPU chiplets. Instead of being located away from the IO-die,
they would sit right next to it, either one on each side or both on the same side, depending
on how AMD designs the desktop IO-die. When it comes to benefits like bandwidth,
latency and efficiency, desktop Ryzen parts will see the most benefit from a decrease
in latency. Just like Zen 3 improved latency by switching to a unified 32MB L3 cache, instead
of two separate 16MB blocks on Zen 2, Infinity Links will cut down the latency between IO-die
and CPU chiplets, improving performance in latency dependent applications like gaming.
With only dual-channel DDR5 and not that many cores to feed, bandwidth isn't a problem to
begin with and while a more efficient interconnect is always nice to have, it's also not a major
concern for Ryzen. It's the latency that will elevate Zen 6 on desktop.
Servers on the other hand are a whole different ball game. With EPYC, the most important improvement
will be the increased interconnect efficiency. Connecting sixteen or more chiplets with Infinity
Fabric On-Package consumes a large portion of the total TDP. Infinity Link would drastically
reduce the amount of energy required for the interconnects, either enabling lower TDPs
or leaving headroom for more cores and higher clock speeds. I'm pretty sure AMD will spend
every additional watt on more cores. Zen 5 is already rumored to scale to 128 standard
and 192 compact cores, Zen 6 could go well beyond that with a new interconnect design.
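A rough sketch of the TDP argument: assume sixteen chiplet links, each moving 50 GB/s (both figures are illustrative assumptions, not AMD data), and compare the ~2 pJ/b PCB-era cost with an Infinity-Links-style 80% reduction:

```python
# Back-of-the-envelope interconnect budget for a many-chiplet server CPU.
# Link count, per-link bandwidth and pJ/b values are all assumptions.

def link_power_watts(gb_per_s: float, pj_per_bit: float) -> float:
    """Per-link power from bandwidth (GB/s) and energy per bit (pJ/b)."""
    return gb_per_s * 1e9 * 8 * pj_per_bit * 1e-12

links = 16        # assumed chiplet-to-IO-die links
bw_per_link = 50  # GB/s each, assumed

fabric_total = links * link_power_watts(bw_per_link, 2.0)
links_total = fabric_total * (1 - 0.80)  # apply the 80% reduction claim

print(f"Infinity Fabric total: {fabric_total:.1f} W")
print(f"Infinity Links total:  {links_total:.1f} W")
print(f"Headroom for cores:    {fabric_total - links_total:.1f} W")
```

Even these conservative assumptions free about 10 W of package budget, watts AMD could spend on extra cores or clocks, exactly as the script argues.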
And when it comes to servers, bandwidth is also an important factor. First of all, servers
have a lot more memory channels to feed their many cores and server CPU chiplets also contain
more cores. Zen 4c increased core count per chiplet to 16 and by Zen 6 we might even see
24- or 32-core chiplets; these cores need a lot of bandwidth when communicating with
the IO-die, so Infinity Link would be a huge boon.
Zen 2 was a breakthrough for AMD and its legacy is still felt today, especially the Infinity
Fabric On-Package interconnect AMD has been using ever since. It's actually crazy to
think that Zen 5, which will release later this year, is still going to use the same
technology. AMD's next-gen interconnect will be more expensive,
but deliver important benefits across the board. Higher bandwidth, lower latency and
increased energy efficiency will allow AMD to scale their chiplet architectures even
further while reducing the performance drawbacks introduced with Zen 2.
There's always the slight chance that AMD will implement different solutions for server
and client, for example a more expensive silicon bridge interconnect for EPYC and an
organic RDL for Ryzen, but going by AMD's previous track record, they like to keep it
simple, which is why I think we will see the same technology on all chiplet platforms.
I'm convinced that an organic interposer solution, such as TSMC's InFO_R used by Navi 31 & 32,
is the most likely contender for AMD's next-gen chiplets. It's cheaper than silicon based
interconnects while still offering strong benefits over the current solution. AMD's
Infinity Links will be the future. I would like to know: what kind of interconnect
technology would you like to see? Intel is going straight to silicon interposers with
Meteor and Arrow Lake, should AMD do the same, or is a balance of cost and performance the
better choice? Let me know in the comments below.
I hope you found this video interesting and see you in the next one.