Integrated Circuits & Moore's Law: Crash Course Computer Science #17
Summary
TLDR: This video traces the development of computer hardware, from the birth of electronic computing in the 1940s through the mid-1960s, when computers were built from individual discrete components; the ENIAC, for example, used more than 17,000 vacuum tubes. With the commercialization of transistors, the IBM 7090 marked the second generation of electronic computing. Discrete transistors, however, did not solve the complexity that came with ever-growing component counts. In 1958, Jack Kilby at Texas Instruments demonstrated the integrated circuit (IC), which combined multiple electronic components into a single unit, and Fairchild Semiconductor then made ICs practical. Integrated circuits, together with printed circuit boards (PCBs), dramatically simplified computer design and manufacturing. As photolithography improved, ICs kept shrinking while their transistor counts kept growing, consistent with Moore's Law, proposed by Gordon Moore in 1965, which observed that the number of transistors that fit on an integrated circuit doubles roughly every two years. Despite the challenges posed by the limits of photolithography and quantum tunneling, scientists and engineers continue to search for ways to push hardware forward.
Takeaways
- 🖥️ In just 50 years, software grew from hand-punched machine code to object-oriented programming languages.
- 🔧 Advances in hardware were the key enabler of this growth in software complexity.
- 📅 From the 1940s to the 1960s, computers were built part by part from discrete components.
- 🔍 The move to transistors made computers dramatically faster and more reliable, marking the second generation of electronic computing.
- 📦 The invention of the integrated circuit solved the design and manufacturing burden of huge component counts, the "Tyranny of Numbers."
- 🌐 Silicon's stability and abundance made it the material of choice for integrated circuits.
- 🔬 Photolithography enabled far more complex IC designs and far more efficient production.
- ⚙️ Microprocessors and large-scale integration ushered in the third and fourth generations of computing.
- 📉 Moore's Law describes the trend of IC transistor counts doubling roughly every two years.
- 🛑 Despite the wavelength limits of photolithography and the problem of quantum tunneling, scientists keep pushing the limits of transistor technology.
Q & A
As software evolved from machine code to object-oriented programming languages, which hardware improvements played a key role?
- The key improvements were the move from vacuum tubes to transistors, and then the invention of integrated circuits (ICs). These advances dramatically boosted computer performance while cutting cost and size, making the growth in software complexity possible.
What were the main components of the ENIAC?
- The ENIAC consisted of more than 17,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, and 7,000 diodes, requiring 5 million hand-soldered connections.
What is the "Tyranny of Numbers"?
- The "Tyranny of Numbers" refers to the problem that boosting performance meant adding more components, which in turn meant more connections, more wires, and more overall complexity, making computers extremely cumbersome to design and manufacture.
How did the IBM 7090 improve on its predecessor?
- The IBM 7090 replaced its predecessor's vacuum tubes with transistors; it was six times faster at half the cost, marking the second generation of electronic computing.
How did integrated circuits (ICs) change computer design?
- ICs package many electronic components inside a single component, reducing the number of individual parts and wires needed to build a computer. This simplified design and manufacturing, improved reliability, and lowered cost.
Why is Robert Noyce widely regarded as the father of modern integrated circuits?
- Fairchild Semiconductor, led by Robert Noyce, built practical ICs from abundant silicon, which is more stable and reliable than the germanium used earlier. For this reason Noyce is viewed as the pioneer of the modern IC, and his work also gave rise to Silicon Valley.
How did printed circuit boards (PCBs) make computer manufacturing more efficient?
- PCBs replace hand soldering and bundles of wires with metal traces etched right into the board to connect components. Because PCBs can be mass manufactured, they made computer production more efficient and reliable.
What role does photolithography play in IC manufacturing?
- Photolithography uses a light source to transfer complex patterns onto semiconductor material. Through repeated exposure, development, and etching steps, it can create tiny transistors and other electronic elements on a silicon wafer, vastly increasing the complexity and integration density of ICs.
What is Moore's Law, and how has it shaped IC development?
- Moore's Law is the observed trend that, roughly every two years, advances in materials and manufacturing allow twice as many transistors to fit in the same amount of space. This trend drove rapid gains in IC performance and steep drops in cost.
Why is the Intel 4004 microprocessor considered an important milestone?
- The Intel 4004 was the first processor shipped as an integrated circuit (IC), called a microprocessor because it was so small. It contained 2,300 transistors and marked a huge leap in CPU integration.
How has the development of ICs affected modern electronic devices?
- ICs drove the miniaturization, performance gains, and falling costs of modern electronics. Today nearly every electronic component benefits from IC technology, from smartphone processors to RAM, graphics cards, solid-state drives, and camera sensors.
Why might Moore's Law be coming to an end, and what challenges do we face?
- Photolithography is hitting physical limits on how small it can make features, and when transistors get very small, quantum tunneling lets current leak across them, undermining their usefulness as switches. Even so, scientists and engineers are still working hard to find ways around these problems.
Outlines
💡 The Birth of Electronic Computing and Hardware's Growth
This section covers the explosive growth of computing hardware. From the 1940s through the mid-1960s, computers were built from individual discrete components; the ENIAC, for example, used more than 17,000 vacuum tubes and tens of thousands of other electronic parts, leading to the so-called "Tyranny of Numbers." By the mid-1950s, transistors were being commercialized and incorporated into computers; compared with vacuum tubes they were smaller, faster, and more reliable. In 1959, IBM upgraded its vacuum-tube-based 709 to the transistorized 7090, a machine six times faster at half the cost. Discrete transistors, however, did not solve the Tyranny of Numbers, and by the 1960s the insides of computers were often huge tangles of wires. The breakthrough came in 1958, when Jack Kilby at Texas Instruments demonstrated an electronic part in which all the circuit's components were integrated. Fairchild Semiconductor, led by Robert Noyce, made ICs practical by using silicon, which is more stable and far more abundant than the germanium Kilby had used. ICs are like Lego for computer engineers: building blocks that can be arranged into an endless variety of designs. Engineers also innovated with printed circuit boards (PCBs), which replace masses of hand-soldered wires with metal traces etched right into the board to connect components.
🔍 The Photolithography Revolution and Building Integrated Circuits
Photolithography is a way of using light to transfer complex patterns onto semiconductor material. An oxide layer and a photoresist are applied to a silicon wafer; shining a strong light through a photomask transfers the mask's pattern. Photoresist shielded from the light, and the oxide beneath it, remain unchanged, while exposed photoresist changes chemically and can be washed away, revealing selected areas of the oxide layer. A special chemical, often an acid, then removes the exposed oxide and etches down to the raw silicon. The process involves multiple rounds of photolithography and different chemical treatments to create complex circuits with distinct electrical properties. Finally, metallization opens channels through the oxide layer and deposits a metal layer to form the desired circuit design. As photolithography improved, transistors shrank and IC density rose. At the start of the 1960s an IC rarely held more than 5 transistors, but by the mid-1960s ICs with over 100 transistors were on the market. Gordon Moore observed that, thanks to advances in materials and manufacturing, transistor counts could double roughly every two years, the trend known as Moore's Law. IC prices fell dramatically as well. Smaller transistors switch states faster and consume less power, and more compact circuits mean less signal delay and faster clock speeds. In 1968, Robert Noyce and Gordon Moore co-founded Intel, today's largest chip maker.
🚀 The Rise of the Microprocessor and the Future of Moore's Law
The rise of the microprocessor marked the third generation of computing. The Intel 4004 was the first processor shipped as an integrated circuit, containing 2,300 transistors. CPU transistor counts then exploded: 30,000 in 1980, over 1 million by 1990, 30 million by 2000, and one billion transistors on a single IC by 2010. To achieve this density, the finest resolution of photolithography improved from roughly 10,000 nanometers (about 1/10th the thickness of a human hair) to around 14 nanometers today, more than 400 times smaller than a red blood cell. Modern processors, like the A10 CPU in the iPhone 7, pack a staggering 3.3 billion transistors into a chip roughly 1 cm on a side. Engineers no longer lay out these designs by hand; instead, VLSI software automatically generates chip designs. Moore's Law may be nearing its end, however, as the wavelength limits of photolithography and quantum tunneling impose limits on how small transistors can get. Even so, scientists and engineers keep searching for solutions, and transistors as small as 1 nanometer have been demonstrated in research labs.
Keywords
💡 Hardware
💡 Software engineering
💡 Integrated circuits (ICs)
💡 Moore's Law
💡 Photolithography
💡 Microprocessor
💡 Transistor
💡 Printed circuit boards (PCBs)
💡 Very-large-scale integration (VLSI)
💡 Quantum tunneling
💡 Nanotechnology
Highlights
In 50 years, software grew from machine code punched by hand to object-oriented programming languages and integrated development environments.
Improvements in hardware were the key to software's growth in complexity.
From the 1940s to the mid-1960s, computers were built from individual discrete components.
The ENIAC consisted of more than 17,000 vacuum tubes and 5 million hand-soldered connections.
By the mid-1950s, transistors were becoming commercially available and being incorporated into computers.
The IBM 7090, a second-generation electronic computer, was six times faster than its predecessor at half the cost.
By the 1960s, complexity was reaching a breaking point; the insides of computers were huge tangles of wires.
Integrated circuits (ICs) solved the complexity problem of discrete components.
Jack Kilby demonstrated the first integrated circuit in 1958, with all the components of the electronic circuit completely integrated.
Robert Noyce made ICs practical using abundant silicon and is regarded as the father of modern ICs.
ICs are like Lego for computer engineers: building blocks for an infinite array of possible designs.
Printed circuit boards (PCBs) have metal wires etched right into them, reducing the complexity of connecting components.
Photolithography, which uses light to transfer complex patterns onto semiconductor material, is the key to manufacturing complex ICs.
Moore's Law predicts that the number of transistors that fit on an IC doubles roughly every two years.
The Intel 4004 was the first processor shipped as an IC, marking the start of the third generation of computing.
CPU transistor counts exploded, from 30,000 in 1980 to one billion in 2010.
Advances in photolithography shrank transistors, allowing ever greater densities.
Average IC prices fell from $50 in 1962 to around $2 in 1968.
Very-large-scale integration (VLSI) software made automatically generated chip designs possible.
Moore's Law may be nearing its end due to the physical limits of photolithography and quantum tunneling.
Despite these challenges, scientists and engineers continue working toward even smaller transistors.
Transcripts
This episode is brought to you by Curiosity Stream.
Hi, I’m Carrie Anne, and welcome to CrashCourse Computer Science!
Over the past six episodes, we delved into software, from early programming efforts to
modern software engineering practices.
Within about 50 years, software grew in complexity from machine code punched by hand onto paper
tape, to object oriented programming languages, compiled in integrated development environments.
But this growth in sophistication would not have been possible without improvements in hardware.
INTRO
To appreciate computing hardware’s explosive growth in power and sophistication, we need
to go back to the birth of electronic computing.
From roughly the 1940’s through the mid-1960s, every computer was built from individual parts,
called discrete components, which were all wired together.
For example, the ENIAC, consisted of more than 17,000 vacuum tubes, 70,000 resistors,
10,000 capacitors, and 7,000 diodes, all of which required 5 million hand-soldered connections.
Adding more components to increase performance meant more connections, more wires, and just
more complexity, what was dubbed the Tyranny of Numbers.
By the mid 1950s, transistors were becoming commercially available and being incorporated
into computers.
These were much smaller, faster and more reliable than vacuum tubes, but each transistor was
still one discrete component.
In 1959, IBM upgraded their vacuum-tube-based “709” computers to transistors by replacing
all the discrete vacuum tubes with discrete transistors.
The new machine, the IBM 7090, was six times faster and half the cost.
These transistorized computers marked the second generation of electronic computing.
However, although faster and smaller, discrete transistors didn’t solve the Tyranny of
Numbers.
It was getting unwieldy to design, let alone physically manufacture computers with hundreds
of thousands of individual components.
By the 1960s, this was reaching a breaking point.
The insides of computers were often just huge tangles of wires.
Just look at what the inside of a PDP-8 from 1965 looked like!
The answer was to bump up a new level of abstraction, and package up underlying complexity!
The breakthrough came in 1958, when Jack Kilby, working at Texas Instruments, demonstrated
such an electronic part, “wherein all the components of the electronic circuit are completely
integrated."
Put simply: instead of building computer parts out of many discrete components and wiring
them all together, you put many components together, inside of a new, single component.
These are called Integrated Circuits, or ICs.
A few months later in 1959, Fairchild Semiconductor, led by Robert Noyce, made ICs practical.
Kilby built his ICs out of germanium, a rare and unstable material.
But, Fairchild used the abundant silicon, which makes up about a quarter of the earth's crust!
It’s also more stable, therefore more reliable.
For this reason, Noyce is widely regarded as the father of modern ICs, ushering in the
electronics era... and also Silicon Valley, where Fairchild was based and where many other
semiconductor companies would soon pop up.
In the early days, an IC might only contain a simple circuit with just a few transistors,
like this early Westinghouse example.
But even this allowed simple circuits, like the logic gates from Episode 3, to be packaged
up into a single component.
ICs are sort of like lego for computer engineers “building blocks” that can be arranged
into an infinite array of possible designs.
However, they still have to be wired together at some point to create even bigger and more
complex circuits, like a whole computer.
For this, engineers had another innovation: printed circuit boards, or PCBs.
Instead of soldering and bundling up bazillions of wires, PCBs, which could be mass manufactured,
have all the metal wires etched right into them to connect components together.
By using PCBs and ICs together, one could achieve exactly the same functional circuit
as that made from discrete components, but with far fewer individual components and tangled
wires.
Plus, it’s smaller, cheaper and more reliable.
Triple win!
Many early ICs were manufactured using teeny tiny discrete components packaged up as a
single unit, like this IBM example from 1964.
However, even when using really really itty-bitty components, it was hard to get much more than
around five transistors onto a single IC.
To achieve more complex designs, a radically different fabrication process was needed that
changed everything: Photolithography!
In short, it’s a way to use light to transfer complex patterns to a material, like a semiconductor.
It only has a few basic operations, but these can be used to create incredibly complex circuits.
Let’s walk through a simple, although extensive example, to make one of these!
We start with a slice of silicon, which, like a thin cookie, is called a wafer.
Delicious!
Silicon, as we discussed briefly in episode 2, is special because it’s a semiconductor,
that is, a material that can sometimes conduct electricity and other times does not.
We can control where and when this happens, making Silicon the perfect raw material for
making transistors.
We can also use a wafer as a base to lay down complex metal circuits, so everything is integrated,
perfect for... integrated circuits!
The next step is to add a thin oxide layer on top of the silicon, which acts as a protective
coating.
Then, we apply a special chemical called a photoresist.
When exposed to light, the chemical changes, and becomes soluble, so it can be washed away
with a different special chemical.
Photoresists aren’t very useful by themselves, but are super powerful when used in conjunction
with a photomask.
This is just like a piece of photographic film, but instead of a photo of a hamster
eating a tiny burrito, it contains a pattern to be transferred onto the wafer.
We do this by putting a photomask over the wafer, and turning on a powerful light.
Where the mask blocks the light, the photoresist is unchanged.
But where the light does hit the photoresist it changes chemically which lets us wash away
only the photoresist that was exposed to light, selectively revealing areas of our oxide layer.
Now, by using another special chemical, often an acid, we can remove any exposed oxide,
and etch a little hole the entire way down to the raw silicon.
Note that the oxide layer under the photoresist is protected.
To clean up, we use yet another special chemical that washes away any remaining photoresist.
Yep, there are a lot of special chemicals in photolithography, each with a very specific
function!
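The mask-exposure-and-wash steps above can be sketched as a toy two-dimensional model. This is purely illustrative: real photolithography is a chemical process, and the grid, symbols, and function name here are invented for the example.

```python
# Toy model of one photolithography exposure step.
# In the mask, 'X' blocks light and '.' lets it through.
# The wafer starts covered in photoresist over an oxide layer.
# Where light hits, the resist becomes soluble and is washed away,
# revealing the oxide underneath.

def expose_and_wash(mask_rows):
    wafer = []
    for row in mask_rows:
        wafer_row = ""
        for cell in row:
            if cell == "X":
                wafer_row += "R"   # mask blocked the light: resist survives
            else:
                wafer_row += "O"   # light hit: resist washed away, oxide revealed
        wafer.append(wafer_row)
    return wafer

mask = [
    "XXXX....XXXX",
    "XXXX....XXXX",
    "XXXXXXXXXXXX",
]
for row in expose_and_wash(mask):
    print(row)
```

The revealed "O" regions are exactly where the next step (the acid etch down to raw silicon) would act, which is why each round of photolithography uses a different mask pattern.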
So now we can see the silicon again, we want to modify only the exposed areas to better
conduct electricity.
To do that, we need to change it chemically through a process called: doping.
I’m not even going to make a joke.
Let’s move on.
Most often this is done with a high temperature gas, something like Phosphorus, which penetrates
into the exposed area of silicon.
This alters its electrical properties.
We’re not going to wade into the physics and chemistry of semiconductors, but if you’re
interested, there’s a link in the description to an excellent video by our friend Derek
Muller from Veritasium.
But, we still need a few more rounds of photolithography to build a transistor.
The process essentially starts again, first by building up a fresh oxide layer ...which
we coat in photoresist.
Now, we use a photomask with a new and different pattern, allowing us to open a small window
above the doped area.
Once again, we wash away remaining photoresist.
Now we dope, and avoid telling a hilarious joke, again, but with a different gas that
converts part of the silicon into yet a different form.
Timing is super important in photolithography in order to control things like doping diffusion
and etch depth.
In this case, we only want to dope a little region nested inside the other.
Now we have all the pieces we need to create our transistor!
The final step is to make channels in the oxide layer so that we can run little metal
wires to different parts of our transistor.
Once more, we apply a photoresist, and use a new photomask to etch little channels.
Now, we use a new process, called metallization, that allows us to deposit a thin layer of
metal, like aluminium or copper.
But we don’t want to cover everything in metal.
We want to etch a very specific circuit design.
So, very similar to before, we apply a photoresist, use a photomask, dissolve the exposed resist,
and use a chemical to remove any exposed metal.
Whew!
Our transistor is finally complete!
It has three little wires that connect to three different parts of the silicon, each
doped a particular way to create, in this example, what’s called a bipolar junction transistor.
Here’s the actual patent from 1962, an invention that changed our world forever!
Using similar steps, photolithography can create other useful electronic elements, like
resistors and capacitors, all on a single piece of silicon (plus all the wires needed
to hook them up into circuits).
Goodbye discrete components!
In our example, we made one transistor, but in the real world, photomasks lay down millions
of little details all at once.
Here is what an IC might look like from above, with wires crisscrossing above and below each
other, interconnecting all the individual elements together into complex circuits.
Although we could create a photomask for an entire wafer, we can take advantage of the
fact that light can be focused and projected to any size we want.
In the same way that a film can be projected to fill an entire movie screen, we can focus
a photomask onto a very small patch of silicon, creating incredibly fine details.
A single silicon wafer is generally used to create dozens of ICs.
Then, once you’ve got a whole wafer full, you cut them up and package them into microchips,
those little black rectangles you see in electronics all the time.
Just remember: at the heart of each of those chips is one of these small pieces of silicon.
As photolithography techniques improved, the size of transistors shrunk, allowing for greater
densities.
At the start of the 1960s, an IC rarely contained more than 5 transistors, they just couldn’t
possibly fit.
But, by the mid 1960s, we were starting to see ICs with over 100 transistors on the market.
In 1965, Gordon Moore could see the trend: that approximately every two years, thanks
to advances in materials and manufacturing, you could fit twice the number of transistors
into the same amount of space.
This is called Moore’s Law.
The term is a bit of a misnomer though.
It’s not really a law at all, more of a trend.
But it’s a good one.
IC prices also fell dramatically, from an average of $50 in 1962 to around $2 in 1968.
Today, you can buy ICs for cents.
Smaller transistors and higher densities had other benefits too.
The smaller the transistor, the less charge you have to move around, allowing it to switch
states faster and consume less power.
Plus, more compact circuits meant less delay in signals resulting in faster clock speeds.
In 1968, Robert Noyce and Gordon Moore teamed up and founded a new company, combining the
words Integrated and Electronics...
Intel... the largest chip maker today.
The Intel 4004 CPU, from Episodes 7 and 8, was a major milestone.
Released in 1971, it was the first processor that shipped as an IC, what’s called a microprocessor,
because it was so beautifully small!
It contained 2,300 transistors.
People marveled at the level of integration, an entire CPU in one chip, which just two
decades earlier would have filled an entire room using discrete components.
This era of integrated circuits, especially microprocessors, ushered in the third generation
of computing.
And the Intel 4004 was just the start.
CPU transistor count exploded!
By 1980, CPUs contained 30 thousand transistors.
By 1990, CPUs breached the 1 million transistor count.
By 2000, 30 million transistors, and by 2010,
ONE. BILLION. TRANSISTORS. IN ONE. IC. OMG!
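Plugging these numbers into Moore's doubling rule gives a quick sanity check. This is a rough sketch, taking the 4004's 2,300 transistors in 1971 as the baseline; real counts varied chip by chip, but the projection lands within roughly a factor of two of each figure quoted above.

```python
# Project transistor counts from the Intel 4004 (2,300 transistors, 1971)
# assuming a Moore's Law doubling every two years, and compare with the
# rough counts quoted in the transcript.
base_year, base_count = 1971, 2300

def moores_law(year):
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

quoted = {1980: 30_000, 1990: 1_000_000, 2000: 30_000_000, 2010: 1_000_000_000}
for year, count in quoted.items():
    print(f"{year}: projected ~{moores_law(year):,.0f}, quoted ~{count:,}")
```

A trend, not a law: the projection consistently overshoots a little, but it tracks four decades of growth across six orders of magnitude.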
To achieve this density, the finest resolution possible with photolithography has improved
from roughly 10 thousand nanometers, that’s about 1/10th the thickness of a human hair,
to around 14 nanometers today.
That’s over 400 times smaller than a red blood cell!
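The scale comparisons above check out with back-of-the-envelope arithmetic. The sizes here are approximations chosen for the example: a human hair taken as roughly 100,000 nm thick and a red blood cell as roughly 7,000 nm across.

```python
# Rough scale check for the resolution figures above (all sizes in nanometers).
hair = 100_000              # a human hair is roughly 100 µm thick
early_resolution = 10_000   # early photolithography, ~10,000 nm features
modern_resolution = 14      # a modern ~14 nm process
red_blood_cell = 7_000      # a red blood cell is roughly 7 µm across

print(early_resolution / hair)              # fraction of a hair's thickness
print(red_blood_cell / modern_resolution)   # times smaller than a red blood cell
```

With these approximations, early features were about 1/10th of a hair, and a 14 nm feature is a few hundred times smaller than a red blood cell, consistent with the "over 400 times" claim.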
And of course, CPU’s weren’t the only components to benefit.
Most electronics advanced essentially exponentially: RAM, graphics cards, solid state hard drives,
camera sensors, you name it.
Today’s processors, like the A10 CPU inside an iPhone 7, contain a mind-melting 3.3 BILLION
transistors in an IC roughly 1cm by 1cm.
That’s smaller than a postage stamp!
And modern engineers aren’t laying out these designs by hand, one transistor at a time
- it’s not humanly possible.
Starting in the 1970’s, very-large-scale integration, or VLSI software, has been used
to automatically generate chip designs instead.
Using techniques like logic synthesis, where whole, high-level components can be laid down,
like a memory cache, the software generates the circuit in the most efficient way possible.
Many consider this to be the start of fourth generation computers.
Unfortunately, experts have been predicting the end of Moore’s Law for decades, and
we might finally be getting close to it.
There are two significant issues holding us back from further miniaturization.
First, we’re bumping into limits on how fine we can make features on a photomask and
it’s resultant wafer due to the wavelengths of light used in photolithography.
In response, scientists have been developing light sources with smaller and smaller wavelengths
that can project smaller and smaller features.
The second issue is that when transistors get really really small, where electrodes
might be separated by only a few dozen atoms, electrons can jump the gap, a phenomenon called
quantum tunneling.
If transistors leak current, they don’t make very good switches.
Nonetheless, scientists and engineers are hard at work figuring out ways around these problems.
Transistors as small as 1 nanometer have been demonstrated in research labs.
Whether this will ever be commercially feasible remains MASKED in mystery.
But maybe we’ll be able to RESOLVE it in the future.
I’m DIEING to know.
See you next week.
Hey guys, this week’s episode was brought to you by CuriosityStream
which is a streaming service full of documentaries and nonfiction titles from
some really great filmmakers, including exclusive originals.
Like a short documentary called “Birth of The Internet”
that tells the story of the first ever Internet message transferred in 1969 between UCLA and Stanford University.
This was a pivotal moment in computing history,
but unlike Samuel Morse’s first telegraph or Neil Armstrong’s famous words on the moon
the first message wasn’t quite so...ambitious.
Anyway, get unlimited access today, and your first two months are free
if you sign up at curiositystream.com/crashcourse
and use the promo code "crashcourse" during the sign-up process.