AMD ZEN 6 — Next-gen Chiplets & Packaging

High Yield
11 Feb 2024 · 16:37

Summary

TL;DR: This video script discusses the evolution of AMD's Ryzen CPU generations, highlighting the technological advancements and performance improvements from Zen 2 to the anticipated Zen 6. It delves into the interconnect and packaging design, comparing the simplicity and cost-effectiveness of AMD's current chiplet architecture with the potential benefits and complexities of future technologies like silicon interposers and organic redistribution layers. The script speculates on the likely direction AMD will take with Zen 6, suggesting that an organic interposer with fan-out interconnects, similar to TSMC's InFO_R, could be the most feasible and balanced solution for enhancing bandwidth, reducing latency, and improving energy efficiency across both consumer and server-grade CPUs.

Takeaways

  • 🔍 AMD's Ryzen CPUs have seen three generations of Zen architecture with Zen 2, Zen 3, and Zen 4, but the interconnect and packaging design has remained consistent since Zen 2's introduction in 2019.
  • 🌐 Zen 2's chiplet architecture used a simple and cost-effective method of connecting chiplets through traces in the PCB, a method that has been in use for decades.
  • 🚀 The simplicity of AMD's interconnect technology comes with drawbacks such as low bandwidth, high latency, and higher energy consumption.
  • 🔄 Zen 6 is expected to introduce significant changes to layout, packaging, and interconnect design to meet the demands of future Ryzen and EPYC generations.
  • 💡 Silicon interposers offer higher bandwidth, lower latency, and better energy efficiency but are complex and expensive to implement.
  • 🌉 Silicon bridges, like Intel's EMIB, aim to achieve similar benefits as silicon interposers but with lower complexity and cost.
  • 🔗 Organic Redistribution Layers (RDL) with fan-out interconnects, such as TSMC's InFO_R, offer a balance between performance and cost, using organic compounds instead of silicon.
  • 🎯 AMD's Infinity Links technology, used in Navi 31 and 32, provides 10x the bandwidth density of Infinity Fabric On-Package with significant power consumption reduction.
  • 💻 For desktop Ryzen CPUs, Zen 6's potential shift to Infinity Links could improve latency, benefiting performance in latency-sensitive applications like gaming.
  • 🛠️ Server EPYC CPUs could see substantial improvements in interconnect efficiency with Infinity Links, potentially enabling lower TDPs or allowing for more cores and higher clock speeds.

Q & A

  • What are the Ryzen generations discussed in the transcript?

    -The Ryzen generations discussed are Zen 2 (Ryzen 5 3600), Zen 3 (Ryzen 5 5600), and Zen 4 (Ryzen 7 7800X3D).

  • What is the main visual difference between the Ryzen 7 7800X3D and its predecessors?

    -The main visual difference of the Ryzen 7 7800X3D is the capacitors surrounding it, not the chiplets.

  • What has remained unchanged in AMD's Ryzen CPUs since the introduction of Zen 2?

    -The interconnect and packaging design of AMD's Ryzen CPUs has remained unchanged since the introduction of Zen 2.

  • What significant change is Zen 6 expected to introduce?

    -Zen 6 is expected to introduce sweeping changes to layout, packaging, and interconnect design.

  • What are the drawbacks of using PCB for connecting chiplets as mentioned in the script?

    -The drawbacks include low bandwidth, high latency, and high energy consumption.

  • What are silicon interposers and their advantages?

    -Silicon interposers are pieces of silicon placed between the substrate and chiplets, offering higher interconnect density, more bandwidth, lower latency, and reduced energy use.

  • What is Intel's EMIB and how does it compare to silicon interposers?

    -Intel's EMIB (Embedded Multi-Die Interconnect Bridge) is a technology that achieves similar benefits to silicon interposers at lower complexity and cost by using smaller pieces of silicon where chiplets meet.

  • What potential technology might AMD use for Zen 6 according to the transcript?

    -AMD might use a technology involving organic redistribution layers (RDL) with a fanout interconnect for Zen 6, similar to what is used in Navi 31 & 32.

  • What are the benefits of using an organic interposer with fan-out interconnects?

    -Benefits include higher bandwidth, lower latency, and better energy efficiency compared to current Infinity Fabric On-Package.

  • How might Zen 6 differ in terms of chiplet placement compared to previous generations?

    -For desktop Ryzen CPUs, Zen 6 might have CPU chiplets placed right next to the IO-die, unlike previous generations, where they were placed farther away from it.

Outlines

00:00

🔍 AMD Ryzen CPU Generations Comparison

This paragraph discusses the visual comparison of AMD's Ryzen CPUs across three generations: Zen 2 (Ryzen 5 3600), Zen 3 (Ryzen 5 5600), and Zen 4 (Ryzen 7 7800X3D). It highlights the similarities in design, with a focus on the IO-die and CPU chiplet structure, and notes the lack of significant visual changes despite technological advancements. The paragraph also touches on the potential for Zen 6 to introduce significant changes in layout, packaging, and interconnect design.

05:03

🛠️ Evolution of AMD's Chiplet Architecture

The paragraph delves into the evolution of AMD's chiplet architecture, starting with Zen 2's introduction in 2019. It explains the simplicity and cost-effectiveness of AMD's interconnect and packaging technologies, which use PCB traces to connect chiplets. Despite the drawbacks of low bandwidth and high latency, the paragraph emphasizes the positive aspects of cost and simplicity. It also discusses the need for a more advanced interconnect technology for future Ryzen generations, considering the trade-offs between performance, efficiency, and cost.

10:07

🔧 Exploring Advanced Interconnect Technologies

This section explores the advanced interconnect technologies, focusing on silicon interposers and their benefits, such as higher bandwidth, lower latency, and improved energy efficiency. It discusses the challenges and costs associated with implementing silicon interposers, including size limitations, mask stitching technology, and the fragility of the interposers. The paragraph also introduces silicon bridges as a more cost-effective alternative, explaining their concept and potential advantages over silicon interposers.

15:11

🌐 Potential Interconnect Solutions for Zen 6

The paragraph discusses potential interconnect solutions for AMD's Zen 6 architecture, including silicon interposers, silicon bridges, and organic redistribution layers (RDL) with fan-out interconnects. It highlights the benefits and drawbacks of each technology, with a focus on the organic RDL as the most likely contender for Zen 6 due to its balance of performance and cost. The paragraph also speculates on how these technologies might affect the layout and performance of future desktop and server CPUs.

🚀 The Future of AMD's Chiplet Interconnect

The final paragraph reflects on the legacy of Zen 2 and the potential for AMD's next-gen interconnect technology. It suggests that while the new technology will be more expensive, it will offer significant benefits in bandwidth, latency, and energy efficiency. The paragraph also contemplates whether AMD should follow Intel's approach with silicon interposers or maintain a balance between cost and performance, ultimately suggesting that an organic interposer solution like TSMC's InFO_R is the most likely path for AMD's future chiplets.

Keywords

💡Chiplet Architecture

Chiplet architecture refers to a design approach where a processor is composed of multiple smaller chips, or 'chiplets,' each with specific functions. In the video, AMD's Ryzen CPUs have been using this architecture since Zen 2, with separate chiplets for CPU and IO functions. This approach allows for flexibility in design and cost-effectiveness but may have limitations in terms of performance due to interconnect technology.

💡Interconnect Technology

Interconnect technology is the method used to connect different components within a chiplet architecture, facilitating data transfer between them. The video discusses the evolution of AMD's interconnect technology from using PCB traces to potentially adopting silicon interposers or organic RDLs for improved performance and efficiency.

💡Silicon Interposer

A silicon interposer is a piece of silicon that sits between the substrate and the chiplets, providing a direct connection between them for data transfer. This technology offers higher bandwidth, lower latency, and better energy efficiency compared to traditional PCB interconnects. However, it is more complex and expensive to implement.

💡Silicon Bridge

A silicon bridge is a smaller, less complex version of a silicon interposer, placed between two chiplets to facilitate direct connection. It aims to achieve similar benefits to a full silicon interposer but with lower complexity and cost.

💡Organic Redistribution Layer (RDL)

An organic RDL is an interconnect material made of organic compounds, used to create a fan-out interconnect structure that increases interconnect density without the high cost of silicon. This technology balances performance benefits with manufacturing costs.

💡Infinity Fabric

Infinity Fabric is AMD's proprietary interconnect technology used to link different chiplets within their processors. It has been used since the Zen 2 architecture but may be replaced in future generations for improved performance.

💡Bandwidth

Bandwidth in the context of computing refers to the rate at which data can be transferred between components. Higher bandwidth allows for more data to be processed simultaneously, which is crucial for performance in applications like gaming and server operations.

💡Latency

Latency is the delay before a transfer of data begins following an instruction for its transfer. Lower latency means faster response times and improved performance, especially in real-time applications.

💡Energy Efficiency

Energy efficiency refers to the ability of a system to perform tasks using the least amount of energy. In computing, this is often a balance between performance and power consumption.

💡TSMC

TSMC, or Taiwan Semiconductor Manufacturing Company, is a leading semiconductor foundry that produces chips for various companies, including AMD. They offer advanced packaging technologies, such as InFO and CoWoS, which could be used in AMD's future chiplet designs.

💡Zen 6

Zen 6 is the speculated next generation of AMD's Ryzen processor architecture, expected to introduce significant changes in layout, packaging, and interconnect design to improve performance and efficiency.

Highlights

Visual comparison of AMD's Ryzen generations from Zen 2 to Zen 4.

Zen 2, 3, and 4 CPUs have a consistent design with one large IO-die and a single CPU chiplet.

AMD's Zen 5 is expected to maintain the same design as previous generations.

Zen 6 is anticipated to introduce significant changes in layout, packaging, and interconnect design.

Zen 2's impact was due to its simple and cost-effective interconnect and packaging technologies.

AMD's chiplet architecture uses traces through the PCB, a method dating back decades.

The simplicity of AMD's PCB design comes with drawbacks like low bandwidth and high latency.

Silicon interposers offer higher interconnect density, bandwidth, and lower latency at the cost of increased complexity and expense.

Silicon bridges, like Intel's EMIB, aim to achieve similar benefits as silicon interposers but with lower complexity and cost.

AMD's Navi 31 and 32 use an organic RDL interposer with fan-out interconnects, known as Infinity Links.

Infinity Links offer 10x the bandwidth density of Infinity Fabric On-Package with significant power consumption reduction.

For desktop Ryzen CPUs, Zen 6 could place CPU chiplets next to the IO-die for reduced latency.

EPYC servers could benefit from increased interconnect efficiency and reduced energy consumption with Zen 6.

AMD's next-gen interconnect technology is expected to be more expensive but offer higher bandwidth, lower latency, and better energy efficiency.

AMD may implement different interconnect solutions for server and client, like a silicon bridge for EPYC and an organic RDL for Ryzen.

TSMC's InFO_R is a likely contender for AMD's next-gen chiplet interconnect architecture.

AMD's Infinity Links could be the future of interconnect technology, offering a balance of cost and performance.

Transcripts

play00:00

What you are looking at is a visual comparison of AMD's last three Ryzen generations. From left to right, we have Zen 2 in the form of a Ryzen 5 3600, Zen 3 as a Ryzen 5 5600 and a Zen 4 based Ryzen 7 7800X3D. That's three CPU generations on two different sockets, with some pretty substantial changes in technology and performance. But looking at these gorgeous near-infrared pictures from Fritzchens Fritz, it's difficult to spot any visual differences. The 3600 and 5600 are pretty much identical, and the main visual difference of the 7800X3D is not the chiplets, but rather the capacitors surrounding them. All three CPUs basically look the same: one large IO-die and a single CPU chiplet, with room for one more. That's because ever since the introduction of Zen 2 in 2019, the interconnect and packaging design of AMD's Ryzen CPUs hasn't changed at all. And from what we know, Zen 5 will still use this very same design. They say don't fix what ain't broken, but after what will soon be four generations on the same chiplet architecture, it's time for something new: Zen 6 is supposed to introduce sweeping changes to layout, packaging and interconnect design. But what exactly is supposed to change, which technologies could be used, and how will Zen 6 benefit?

play01:18

Zen 2 wasn't only the first mass-market chiplet architecture, it was arguably the most impactful one, even though its interconnect and packaging technologies are rather simple. Instead of using complex and expensive technologies, AMD connects the individual chiplets by running traces through the PCB, something that has been done in semiconductors for many decades and isn't only easy to implement, but also a flexible and very cost-effective way to design a multi-chip module.

play01:48

This image of a Zen 2 PCB shows all the traces within the 12-layer substrate. And while it might look complex at first, once you, quite literally, connect the dots, the simplicity becomes apparent. We can make out the area for the two CPU chiplets in the upper half of the PCB and the IO-die below that in the center. The CPU chiplets are only connected to the IO-die; all of the other traces are routed directly from and to the IO-die, which handles input and output, such as system memory and PCI-Express. Even though the CPU chiplets are placed in close proximity to each other, they are not directly connected: every communication has to go through the IO-die first.

play02:24

Yes, there are many traces on the PCB, but the technology behind them is simple. At face value, it's literally just tiny copper wires embedded into the PCB, not unlike your motherboard or any other electronic device with a printed circuit board. Sometimes PCBs are also called "printed wiring boards", which I feel is an even more fitting name, as they are a medium used to connect different components with tiny wires.

play02:49

That's the technology AMD has been using for its breakthrough chiplet architecture and is still using today. Even the Infinity Fabric protocol that is used to transfer the data over these wires is based in large parts on PCI-Express, a proven technology. But this simplicity doesn't come without drawbacks: transporting data via the PCB results in low bandwidth, high latency, and it also consumes a lot of energy. It's so simple to design and cheap to produce because it is a low-tech implementation. The positives are all on the cost side; performance and efficiency take a backseat.

play03:21

Going forward, AMD needs an interconnect technology that is able to meet the demands of future Ryzen and EPYC generations: more bandwidth, lower latency and reduced energy cost when transporting data. On the flip side, this new technology will certainly be more expensive to design, implement and manufacture. Let's take a look at the available options, weigh their pros and cons, and see which one makes the most sense for Zen 6.

play03:44

The most advanced interconnect technologies use so-called silicon interposers, which are pieces of silicon placed in between the substrate and the chiplets. This image from TSMC depicts their CoWoS technology, but it's true for any other silicon interposer packaging method. Funny enough, the image on the left looks strikingly similar to AMD's current chiplet design; maybe the one on the right shows its future?

play04:10

The advantage of a silicon interposer is that instead of using copper wires in the PCB, the chiplets are sitting on top of the same piece of silicon, which means data connections between chiplets never have to leave the silicon. Running data paths through silicon allows for a much higher interconnect density, resulting in more bandwidth and lower latency, while at the same time also using less energy, because interposers provide a much larger channel for electrical signals, reducing the amount of energy needed to drive the data signals. AMD's current low-tech solution uses around 1.5 to 2 picojoules per bit, while a silicon interposer interconnect is in the ballpark of around 0.2 to 0.5 pJ/bit. A huge increase in power efficiency.
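As a rough illustration of what those per-bit energies mean in practice, here is a minimal back-of-envelope sketch. The pJ/bit ranges are the ones quoted above; the 100 GB/s of chiplet-to-IO-die traffic is an assumed figure picked for illustration, not an AMD specification.

```python
# Back-of-envelope interconnect power from energy-per-bit figures.
# pJ/bit values are the ranges quoted in the video; the 100 GB/s of
# per-chiplet traffic is an illustrative assumption, not an AMD spec.

GB = 1e9    # bytes
PJ = 1e-12  # joules

def interconnect_watts(bandwidth_gbytes_s: float, pj_per_bit: float) -> float:
    """Power (W) = bits per second * energy per bit."""
    bits_per_s = bandwidth_gbytes_s * GB * 8
    return bits_per_s * pj_per_bit * PJ

bw = 100  # GB/s of chiplet-to-IO-die traffic (assumed)
for label, pj in [("PCB traces (Infinity Fabric On-Package)", 2.0),
                  ("silicon interposer", 0.5)]:
    print(f"{label}: {interconnect_watts(bw, pj):.1f} W at {bw} GB/s")

# PCB traces (Infinity Fabric On-Package): 1.6 W at 100 GB/s
# silicon interposer: 0.4 W at 100 GB/s
```

Per chiplet the difference looks small, but it scales linearly with traffic and with the number of links, which is exactly why it matters most for many-chiplet server parts.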

play04:53

So you get higher bandwidth, lower latency and it uses less energy? What's not to like? Well, there's a trade-off. This time the solution is high-tech, but the costs are high too. Not only does it take more engineering resources to design and implement a silicon interposer design, but the interposer itself also doesn't come cheap. Size is a problem too, as the silicon interposer has to be big enough to fit a large number of chiplets. This requires special technology, so-called mask stitching, to produce interposers above the current EUV reticle limit of 858 mm², which further increases cost. The packaging also requires more precision, since the interposer is very fragile and breaks easily. And then you have to completely rethink your power and data routing, since you can't just connect straight to the chiplets, because the interposer is in the way. For that you need so-called TSVs, through-silicon vias, another cost factor.

play05:44

In a nutshell, silicon interposers are great from a pure performance and efficiency perspective, but they are also very complex and expensive to implement. Especially in today's market, manufacturing capacities are limited due to the huge demand for AI GPUs like Nvidia's H100 and AMD's MI300, which all use silicon interposers. AMD clearly has the ability to produce complex interposer designs, but in my opinion it's not a feasible solution for the consumer market and even HPC CPUs. Zen 6 very likely won't use a silicon interposer.

play06:15

The next best thing are so-called "silicon bridges", with Intel's EMIB being the most famous technology of this type. The idea behind silicon bridges is pretty simple: achieve the same benefits as a silicon interposer at lower complexity and lower cost. So, how do you do that? The name is a pretty good description of the concept: instead of one large interposer that covers all of the chiplets you want to connect, you place smaller pieces of silicon right where two chiplets meet. Intel's EMIB, which is short for Embedded Multi-Die Interconnect Bridge, places the silicon bridge inside the package substrate, while other solutions like InFO_LSI or CoWoS_L from TSMC raise the silicon bridge above the substrate, a technology used for AMD's MI200, where it was dubbed "Elevated Fanout Bridge".

play07:00

In theory, a silicon bridge offers similar benefits to a silicon interposer, without the high costs a single large piece of silicon incurs, and the smaller bridges also don't block access to the individual chiplets from below, reducing the use of expensive TSVs. But placing the silicon bridges isn't easy, especially when you need a lot of them, like when connecting multiple CPU chiplets with multiple IO-dies. Each placement becomes a potential point of failure during packaging; if one interconnect fails, the whole chip might be wasted.

play07:31

A while back, Tom from Moore's Law Is Dead leaked early Zen 6 design abstractions. In his video, which I've linked below, he showed slightly altered versions of internal AMD slides, outlining the layout of Zen 6 based server CPUs. In Tom's video we can see a combination of closely connected IO-dies and CPU chiplets, which at first glance looks very similar to a silicon bridge implementation. The chiplets are placed right next to each other and the interconnect areas seem to overlap, resembling an embedded or elevated bridge.

play07:56

There are a few options for AMD to choose from if they want to use silicon bridges. TSMC offers the already mentioned InFO_LSI and CoWoS_L, which are basically the same technology with a different order of packaging. Integrated Fanout, or InFO, is a chip-first process, where the chips are placed first and the interposer or bridge layer is built up second. CoWoS is a chip-last process, where the interposer or bridge layers are built up first and the chips are connected in a second step. Placing the chips last is easier on the chips, which makes it very valuable for fragile chips like HBM; that's why basically all HBM-based chips are using CoWoS packaging technology. Since Zen 6 very likely won't use HBM, an InFO variant would be the more likely choice for AMD.

play08:41

And TSMC isn't the only provider. Outsourced Semiconductor Assembly and Test companies (OSATs) offer similar technologies, in the case of ASE for example FOCoS-Bridge. The point is, using a silicon bridge technology wouldn't be limited by high CoWoS demand, and there are other providers aside from TSMC. As such, a silicon bridge is a much more likely contender for Zen 6 interconnect technology. It offers similar benefits as an interposer while at the same time reducing some of its greatest drawbacks.

play09:11

And while I wouldn't be completely surprised to see a bridge interconnect used for Zen 6, there's one more technology that I personally consider to be the most likely contender for AMD's next-gen chiplet design: an organic redistribution layer, RDL for short, with a fan-out interconnect.

play09:29

This technology takes a page out of the silicon interposer playbook by also building up an interposer that sits below all the chiplets. But instead of using silicon as the material of choice, other organic compounds are used, mostly composite materials.

play09:43

The idea is simple: to achieve benefits like higher bandwidth, lower latency and especially more energy efficiency, you need to increase interconnect density so you can create larger channels for electrical signals. Basically, you need to create more space for more interconnects. Yes, silicon would be the perfect material for such a solution, but as we discussed before, silicon is expensive and comes with a lot of other packaging-related drawbacks. If you are using a less capable material the benefits won't be as great, but you can still achieve a denser connection nonetheless.

play10:11

And AMD already has experience with organic interposers. Both Navi 31 and 32 use a form of TSMC's InFO_R, short for Integrated Fanout RDL, sometimes also called InFO_oS for "on-Substrate". AMD calls it "Infinity Links". Here's how it works: an organic RDL interposer with at least four layers is placed below the GCD and the MCDs. At the intersection of each chip, the RDL is used to quite literally "fan out" the interconnects onto a much larger area, which wouldn't be possible without the extra space the RDL interposer creates. It's called "fan out" because instead of straight point-to-point connections, these high-density interconnects look like an old-fashioned folding fan.

play10:54

This visual comparison by ASE is pretty good at explaining how fan-out works. On the left you can see standard fan-in, where the traces are routed within the area of the chip, which means the chip itself limits how many interconnects you can create. Fan-out on the right side shows that traces are routed outwards from the chip, to build connection points beyond the limit of the individual chip. And now imagine the effect you create when you have more than one layer available: the interconnect density greatly increases.
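To make the fan-in versus fan-out idea concrete, here is a toy model: connection pads at a fixed pitch are either confined to the die's own footprint (fan-in) or spread over a larger RDL area that extends past the die edge (fan-out). All dimensions and the pad pitch are invented for illustration; they are not TSMC or AMD process figures.

```python
# Toy model of fan-in vs. fan-out interconnect capacity.
# Every dimension and pitch below is an illustrative assumption.

def pads_in_area(width_um: int, height_um: int, pitch_um: int) -> int:
    """How many pads fit on a rectangular grid at a given pitch."""
    return (width_um // pitch_um) * (height_um // pitch_um)

die = (5_000, 5_000)   # chiplet footprint in micrometres (assumed)
rdl = (8_000, 8_000)   # fanned-out RDL pad area in micrometres (assumed)
pitch = 40             # pad pitch in micrometres (assumed)

fan_in = pads_in_area(*die, pitch)    # routing confined to the die
fan_out = pads_in_area(*rdl, pitch)   # RDL routes past the die edge
print(f"fan-in : {fan_in} pads")      # 15625
print(f"fan-out: {fan_out} pads")     # 40000
print(f"{fan_out / fan_in:.2f}x more, before adding extra RDL layers")
```

The same geometry explains the video's point about layers: each additional RDL layer multiplies the routable connections again, on top of the larger area.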

play11:21

AMD claims that these Infinity Links offer about 10x the bandwidth density of Infinity Fabric On-Package, with a staggering 5.3 TB/s, and also reduce power consumption (pJ/bit) by up to 80% at the same time. In their RDNA 3 presentation AMD even compared a current-gen EPYC CPU using Infinity Fabric to their new Infinity Links, which is a pretty huge hint in my opinion. On the left we can see 25 wires on a standard organic package, as used by Infinity Fabric, and the tiny image on the right shows 50 wires using the new Infinity Link technology with an organic interposer. The images are to scale, so you can see how much smaller and denser the Infinity Links actually are; the difference is staggering.
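For a sense of scale, here is a small sketch combining the quoted figures. Using the top of the 1.5-2 pJ/bit range quoted earlier as the baseline is my assumption, and 5.3 TB/s is a peak figure, so treat these as upper-bound illustrations rather than real measured power draw.

```python
# Combining the video's numbers: 5.3 TB/s of Infinity Link bandwidth
# and an "up to 80%" reduction in energy per bit. The 2.0 pJ/bit
# baseline (top of the quoted Infinity Fabric range) is an assumption.

TB, PJ = 1e12, 1e-12

bits_per_s = 5.3 * TB * 8           # 5.3 TB/s expressed in bits/s
fabric_pj = 2.0                     # assumed on-package fabric baseline
links_pj = fabric_pj * (1 - 0.80)   # "up to 80%" lower -> 0.4 pJ/bit

watts = lambda pj: bits_per_s * pj * PJ
print(f"Infinity Fabric at 5.3 TB/s: {watts(fabric_pj):5.1f} W")  # ~84.8 W
print(f"Infinity Links  at 5.3 TB/s: {watts(links_pj):5.1f} W")   # ~17.0 W
```

In other words, at that bandwidth the old fabric simply would not be viable, which is presumably why Navi 31's GCD-to-MCD links needed the denser interconnect in the first place.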

play12:02

Of course an organic interposer also comes with some of the same drawbacks as a silicon one, but just like the performance benefits, the drawbacks are also less pronounced. Like CoWoS, InFO also offers large interposers, multiple times the EUV reticle limit, which should be enough for even a Zen 6 based EPYC server CPU. Just like a silicon interposer, an organic RDL also somewhat blocks access to the dies from below, as it sits in between the substrate and the chiplets, but it's also easier to run connections through organic material.

play12:34

In a nutshell, an organic interposer with fan-out interconnects strikes a perfect balance. It offers higher bandwidth, lower latency and better energy efficiency compared to the current Infinity Fabric On-Package, while its downsides are not as pronounced as with a silicon interposer, meaning costs are manageable. Navi 31 and 32 are proof that AMD can scale such a technology for standard consumer products without hurting margins. In the end, TSMC's InFO_R is the most likely contender, but solutions such as FOCoS from ASE are also a possibility. An organic interposer is definitely my number one contender for AMD's next-gen chiplet interconnect architecture.

play13:11

Now, let's assume Zen 6 will replace the current Infinity Fabric interconnect with Infinity Links, similar to Navi 31 & 32. What would the actual benefits be, and how different would the packaging and layout look? For desktop Ryzen CPUs, the most obvious change would be the placement of the CPU chiplets. Instead of being located away from the IO-die, they would sit right next to it, either one on each side or both on the same side, depending on how AMD designs the desktop IO-die.

play13:35

When it comes to benefits like bandwidth, latency and efficiency, desktop Ryzen parts will see the most benefit from a decrease in latency. Just like Zen 3 improved latency by switching to a unified 32MB L3 cache, instead of two separate 16MB blocks on Zen 2, Infinity Links will cut down the latency between IO-die and CPU chiplets, improving performance in latency-dependent applications like gaming. With only dual-channel DDR5 and not that many cores to feed, bandwidth isn't a problem to begin with, and while a more efficient interconnect is always nice to have, it's also not a major concern for Ryzen. It's the latency that will elevate Zen 6 on desktop.

play14:14

Servers, on the other hand, are a whole different ball game. With EPYC, the most important improvement will be the increased interconnect efficiency. Connecting sixteen or more chiplets with Infinity Fabric On-Package consumes a large portion of the total TDP. Infinity Links would drastically reduce the amount of energy required for the interconnects, either enabling lower TDPs or leaving headroom for more cores and higher clock speeds. I'm pretty sure AMD will spend every additional watt on more cores. Zen 5 is already rumored to scale to 128 standard and 192 compact cores; Zen 6 could go well beyond that with a new interconnect design.

play14:50

And when it comes to servers, bandwidth is also an important factor. First of all, servers have a lot more memory channels to feed their many cores, and server CPU chiplets also contain more cores. Zen 4c increased core count per chiplet to 16, and by Zen 6 we might even see 24- or 32-core chiplets. These cores need a lot of bandwidth when communicating with the IO-die; Infinity Links would be a huge boon.
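The EPYC argument is easy to sketch numerically: interconnect watts saved by a denser link become TDP headroom for cores. The chiplet count is from the video; the sustained bandwidth per chiplet and both pJ/bit values are assumptions chosen to match the ranges quoted earlier.

```python
# Rough sketch of the EPYC headroom argument. Chiplet count is from
# the video; per-link bandwidth and pJ/bit figures are assumptions.

PJ, GB = 1e-12, 1e9

chiplets = 16       # "sixteen or more chiplets" (from the video)
bw_per_link = 60    # GB/s sustained per chiplet link (assumed)
fabric_pj = 1.8     # within the quoted 1.5-2 pJ/bit fabric range
links_pj = 0.36     # assuming the quoted "up to 80%" reduction

def total_watts(pj_per_bit: float) -> float:
    """Total interconnect power across all chiplet links."""
    return chiplets * bw_per_link * GB * 8 * pj_per_bit * PJ

saved = total_watts(fabric_pj) - total_watts(links_pj)
print(f"fabric: {total_watts(fabric_pj):.0f} W, links: {total_watts(links_pj):.0f} W")
print(f"~{saved:.0f} W of TDP freed for cores or clocks")  # ~11 W here
```

Under heavier real-world traffic or higher chiplet counts the saved wattage grows proportionally, which is why the efficiency gain matters far more for EPYC than for desktop Ryzen.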

play15:14

Zen 2 was a breakthrough for AMD, and its legacy is still felt today, especially the Infinity Fabric On-Package interconnect AMD has been using ever since. It's actually crazy to think that Zen 5, which will release later this year, is still going to use the same technology.

play15:28

AMD's next-gen interconnect will be more expensive, but deliver important benefits across the board. Higher bandwidth, lower latency and increased energy efficiency will allow AMD to scale their chiplet architectures even further while reducing the performance drawbacks introduced with Zen 2.

play15:44

There's always the slight chance that AMD will implement different solutions for server and client, for example a more expensive silicon bridge interconnect for EPYC and an organic RDL for Ryzen, but going by AMD's previous track record, they like to keep it simple, which is why I think we will see the same technology on all chiplet platforms.

play16:02

I'm convinced that an organic interposer solution, such as TSMC's InFO_R used by Navi 31 & 32, is the most likely contender for AMD's next-gen chiplets. It's cheaper than silicon-based interconnects while still offering strong benefits over the current solution. AMD's Infinity Links will be the future.

play16:17

I would like to know what kind of interconnect technology you would like to see. Intel is going straight to silicon interposers with Meteor and Arrow Lake; should AMD do the same, or is a balance of cost and performance the better choice? Let me know in the comments below. I hope you found this video interesting, and see you in the next one.


Related Tags
AMD Ryzen, Chiplet Architecture, Interconnect Tech, Zen 6, Silicon Interposers, Silicon Bridges, Organic RDL, Infinity Links, Performance, Cost-Efficiency, Tech Advancement