Electronic Computing: Crash Course Computer Science #2

CrashCourse
1 Mar 2017 · 10:43

Summary

TLDR: This video traces the development of computing from the early to the mid-20th century. Early special-purpose devices such as tabulating machines were a huge boon to governments and businesses, aiding and sometimes replacing rote manual tasks. With population growth, two world wars, increasingly interconnected global trade and transit networks, and ever more sophisticated engineering and scientific endeavors, the need for automation and computation kept rising. Early electromechanical computers grew into room-sized machines that were expensive to maintain and prone to errors. In 1944, IBM completed the Harvard Mark I for the Allies during World War II, one of the largest electromechanical computers, containing 765,000 components and 500 miles of wire. The "brains" of these machines were relays: electrically controlled mechanical switches. Because a relay's mechanical arm has mass, it cannot switch instantly, which limited computing speed. The Harvard Mark I could do 3 additions or subtractions per second; a multiplication took 6 seconds and a division 15. Wear and failure of the mechanical parts were further problems. In 1947, operators of the Harvard Mark II pulled a dead moth from a malfunctioning relay, which is where the term "computer bug" comes from. To push computing forward, a faster, more reliable alternative to the relay was needed, and fortunately one already existed: the vacuum tube. In 1904 the English physicist John Ambrose Fleming developed the first vacuum tube, and in 1906 the American inventor Lee de Forest added a third "control" electrode, creating the triode vacuum tube, which has no moving parts and can switch thousands of times per second. This marked the shift from electromechanical to electronic computing. The video also introduces the Colossus Mk 1, the first computer to use vacuum tubes at large scale, and ENIAC, the world's first general-purpose programmable electronic computer. By the 1950s even vacuum-tube computers were reaching their limits, and a new electronic switch was urgently needed: the transistor. Invented at Bell Labs in 1947, the transistor opened a new era of computing; smaller, faster, and more reliable than the vacuum tube, it made computers smaller and cheaper. Much of the transistor and semiconductor development happened in California's Santa Clara Valley, known today as Silicon Valley. The video closes with a question: how do we get from transistors to actual computing, especially without motors and gears? That is answered over the next few episodes.

Takeaways

  • 📈 In the early 20th century, special-purpose computing devices such as tabulating machines were a huge help to governments and businesses, aiding and sometimes replacing rote manual tasks.
  • 🌐 In the first half of the 20th century the world's population almost doubled, two world wars mobilized tens of millions of people, and global trade and transit networks became interconnected like never before.
  • 🚀 This explosion of complexity, bureaucracy, and data drove an ever-growing need for automation and computation.
  • 💻 Early electromechanical computers grew into room-sized behemoths that were expensive to maintain and prone to errors.
  • 🔌 Relays, electrically controlled mechanical switches, were the "brains" of these electromechanical giants, but they were slow and prone to wear.
  • 🔢 The Harvard Mark I could perform 3 additions or subtractions per second; a multiplication took 6 seconds and a division 15.
  • 🐞 In 1947 a dead moth was pulled from a malfunctioning relay in the Harvard Mark II, giving rise to the term "computer bug."
  • 🌟 John Ambrose Fleming developed the first vacuum tube, a new kind of electrical component, in 1904.
  • 🛠️ Lee de Forest added a third "control" electrode in 1906, creating the triode vacuum tube, which has no moving parts and switches much faster.
  • 📡 The Colossus was the first computer to use vacuum tubes at large scale and is regarded as the first programmable electronic computer.
  • 🔢 ENIAC was the first truly general-purpose programmable electronic computer, performing 5,000 ten-digit additions or subtractions per second.
  • 🚨 By the 1950s even vacuum-tube computers were reaching their physical limits, and a new electronic switch was needed.
  • 📍 In 1947 scientists at Bell Labs invented the transistor, opening a new era of computing.
  • ⚙️ A transistor is a switch that can be opened or closed by applying power via a control wire; transistors are smaller, faster, and more reliable than vacuum tubes.
  • 🏭 California's Santa Clara Valley became the center of semiconductor development and is now known as Silicon Valley.
  • 📉 The progression from relays to vacuum tubes to transistors dramatically increased how fast electricity could be switched, laying the foundation for everything that followed.

Q & A

  • What impact did special-purpose computing devices have on governments and businesses in the early 20th century?

    - In the early 20th century, special-purpose computing devices such as tabulating machines were a huge boon to governments and businesses, aiding and sometimes replacing rote manual tasks and making data processing far more efficient.

  • Why did the need for automation and computation keep growing in the first half of the 20th century?

    - Rapid population growth, the mobilization of two world wars, the interconnection of global trade and transit networks, and the growing sophistication of engineering and scientific endeavors all drove an ever-increasing need for automation and computation.

  • Who built the Harvard Mark I, and for what purpose?

    - The Harvard Mark I was built by IBM for the Allies during World War II. It contained 765,000 components, three million connections, and five hundred miles of wire.

  • What is a relay, and how does it work?

    - A relay is an electrically controlled mechanical switch. A control wire determines whether a circuit is open or closed: it connects to a coil inside the relay, and when current flows through the coil an electromagnetic field is created that attracts a metal arm, snapping it shut and completing the circuit.

  • How fast was the Harvard Mark I?

    - The Harvard Mark I could perform 3 additions or subtractions per second; a multiplication took 6 seconds and a division 15. More complex operations, such as a trigonometric function, could take over a minute.

  • Why couldn't mechanical relays switch fast enough?

    - The mechanical arm inside a relay has mass, so it cannot move instantly between the open and closed states. Even a good relay in the 1940s could only flick back and forth about fifty times per second.

  • Where does the term computer "bug" come from?

    - In September 1947, operators of the Harvard Mark II pulled a dead moth from a malfunctioning relay. Grace Hopper later noted, "From then on, when anything went wrong with a computer, we said it had bugs in it," and that is where the term comes from.

  • What electrical component did John Ambrose Fleming develop in 1904?

    - In 1904 John Ambrose Fleming developed a component called the thermionic valve, which was the first vacuum tube.

  • What advantages does the triode vacuum tube have over a mechanical relay?

    - A triode has no moving parts, so there is less wear, and it can switch thousands of times per second, far faster than a mechanical relay.

  • When was the Colossus Mk 1 designed, and by whom?

    - The Colossus Mk 1 was designed by engineer Tommy Flowers and completed in December 1943. It was installed at Bletchley Park in the UK and helped decrypt Nazi communications.

  • ENIAC was the world's first truly general-purpose programmable electronic computer; when was it completed?

    - ENIAC was completed in 1946 at the University of Pennsylvania, designed by John Mauchly and J. Presper Eckert.

  • How did transistors improve on vacuum tubes?

    - Transistors are solid-state components, far more robust than fragile vacuum tubes and much smaller, and even the very first transistor could switch states 10,000 times per second.

  • Why is Silicon Valley called Silicon Valley?

    - Silicon Valley is the Santa Clara Valley between San Francisco and San Jose, California. Because silicon is the most common material used to make semiconductors, the region became known as Silicon Valley.

  • When was the IBM 608 released, and what was notable about it?

    - The IBM 608, released in 1957, was the first fully transistor-powered, commercially available computer. It contained 3,000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, per second (compared below with the Mark I and ENIAC).
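
The addition rates quoted in this Q&A differ by three orders of magnitude. The small Python sketch below compares how long the same workload would take on each machine; the figures are the video's illustrative numbers rather than measured benchmarks, and the one-million-addition batch size is an arbitrary choice made only for illustration.

```python
# Rough throughput comparison using the addition rates quoted in this Q&A.
# The rates are the video's illustrative numbers, not measured benchmarks,
# and the one-million-addition workload is an arbitrary choice.

machines = {
    "Harvard Mark I (relays)": 3,      # additions per second
    "ENIAC (vacuum tubes)": 5000,      # ten-digit additions per second
    "IBM 608 (transistors)": 4500,     # additions per second
}

workload = 1_000_000  # hypothetical batch of one million additions

for name, adds_per_sec in machines.items():
    seconds = workload / adds_per_sec
    print(f"{name}: {seconds:>9,.0f} s  (~{seconds / 3600:.1f} h)")
```

Running this shows the Mark I needing several days for work that either electronic machine finishes in a few minutes, which is the gap the video is emphasizing.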

Outlines

00:00

😀 Early-20th-century computing devices and the need for automation

In the early 20th century, as populations surged, two world wars broke out, global trade and transit networks became interconnected, and engineering and scientific endeavors grew more sophisticated, the need for automation and computation kept rising. Early special-purpose devices such as tabulating machines were a huge boon to government and business operations. These machines gradually evolved into enormous, expensive electromechanical computers that were costly to maintain and prone to errors. Among them, the Harvard Mark I, completed by IBM for the Allies during World War II, was one of the largest electromechanical computers, with 765,000 components and five hundred miles of wire. At the core of these computers were relays, electrically controlled mechanical switches; because a relay's mechanical arm has mass, it could not switch fast enough to tackle large, complex problems. In addition, mechanical parts wear out and fail, and the more relays a machine has, the more failures it suffers, which made maintenance difficult.
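
To make the relay mechanism above concrete, here is a minimal Python sketch that models a relay as an electrically controlled switch and reproduces the video's back-of-the-envelope maintenance estimate (roughly 3,500 relays, each lasting about 10 years). The `Relay` class and its method names are hypothetical, chosen only for illustration; this is a logical model, not a circuit simulation.

```python
# A minimal sketch of a relay as an electrically controlled switch, plus
# the back-of-the-envelope maintenance estimate mentioned in the video.
# Class and method names are illustrative; no circuit physics is modeled.

class Relay:
    """Electromechanical switch: energizing the coil closes the contact."""

    def __init__(self):
        self.closed = False

    def set_control(self, current_on: bool) -> None:
        # Current through the coil creates a magnetic field that pulls
        # the metal arm shut; removing the current lets it spring open.
        self.closed = current_on

    def conducts(self) -> bool:
        return self.closed


r = Relay()
r.set_control(True)
print("circuit closed:", r.conducts())   # True

# Expected failures: ~3,500 relays, each lasting ~10 years on average.
relays = 3500
lifetime_days = 10 * 365
print("expected failures per day:", relays / lifetime_days)  # ~0.96
```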

05:03

🤖 Vacuum tubes and the first electronic computers

To overcome the limitations of electromechanical computers, scientists looked for a faster, more reliable alternative, which led to the vacuum tube. The first vacuum tube, the thermionic valve, was developed by John Ambrose Fleming in 1904. In 1906 Lee de Forest added a third "control" electrode, creating an electronic switch that could be opened and closed quickly. Vacuum tubes have no moving parts, so they suffer less wear and can switch thousands of times per second, and they became the basis of radio, long-distance telephone, and many other electronic devices. Although tubes were a big improvement over mechanical relays, they remained expensive and fragile. During World War II, the Colossus Mk 1, designed by engineer Tommy Flowers, became the first computer to use vacuum tubes at large scale and helped decrypt Nazi communications. ENIAC, designed by John Mauchly and J. Presper Eckert, was the world's first general-purpose programmable electronic computer, far faster than any machine before it. Because of tube failures, however, ENIAC was typically operational for only about half a day at a time. By the 1950s even vacuum-tube computers were reaching their limits, which pushed scientists toward a new electronic switch: the transistor.
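
As a rough illustration of the triode described above, the sketch below models it as a one-way valve with an on/off control grid and contrasts the switching rates quoted in the video. The function name and the 1,000-per-second lower bound for tubes are assumptions made only for illustration.

```python
# A logical sketch of the triode described above: a one-way valve (diode)
# plus a control grid that turns the electron flow on or off.
# Names and numbers are illustrative; no tube physics is modeled.

def triode_conducts(plate_positive: bool, grid_positive: bool) -> bool:
    """Electrons flow from the heated cathode to the plate only when the
    plate attracts them and the control grid is not repelling them."""
    return plate_positive and grid_positive

# Positive grid: the tube conducts. Negative grid: the flow is blocked.
print(triode_conducts(plate_positive=True, grid_positive=True))   # True
print(triode_conducts(plate_positive=True, grid_positive=False))  # False

# Switching-rate contrast quoted in the video (orders of magnitude only):
relay_hz = 50    # a good 1940s relay
tube_hz = 1000   # "thousands of times per second", taken as a lower bound
print(f"a tube switches at least {tube_hz // relay_hz}x faster than a relay")
```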

10:06

🏆 The invention of the transistor and the rise of Silicon Valley

In 1947, Bell Labs scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, opening a new era of computing. The physics behind transistors relies on quantum mechanics, but in essence a transistor is a switch that can be opened or closed by applying power via a control wire. Transistors use a semiconductor material: by changing the charge on a "gate" electrode, the conductivity of the material can be manipulated, allowing current to flow or be stopped. Transistors are smaller, faster, and more reliable than vacuum tubes; even the very first one could switch states 10,000 times per second. Their invention led to dramatically smaller and cheaper computers, such as the IBM 608, released in 1957 as the first fully transistorized commercial computer. Much of this transistor and semiconductor development happened in California's Santa Clara Valley, known today as Silicon Valley. The valley became the center of semiconductor and computer-chip manufacturing: William Shockley founded Shockley Semiconductor there, whose employees later founded Fairchild Semiconductor, whose employees in turn founded Intel, today the world's largest computer-chip maker.
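
The gate-controlled behavior described above can be sketched as a simple threshold model. The threshold voltage and function name below are arbitrary illustrative choices, not real device parameters; the sketch only shows the on/off idea, not transistor physics.

```python
# A threshold sketch of the transistor described above: charge on the gate
# decides whether the semiconductor channel between the two electrodes
# conducts. The 0.7 V threshold and all names are arbitrary illustrations.

GATE_THRESHOLD_V = 0.7  # illustrative value, not a real device parameter

def transistor_conducts(gate_voltage: float) -> bool:
    """Above the threshold the channel conducts; below it, current is blocked."""
    return gate_voltage >= GATE_THRESHOLD_V

for v in (0.0, 0.5, 1.0):
    print(f"gate = {v:.1f} V -> conducts: {transistor_conducts(v)}")

# The first Bell Labs transistor switched about 10,000 times per second;
# modern transistors under 50 nm switch millions of times per second.
```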

📺 Next-episode preview

The final segment of the video is a short teaser: the next few episodes will look at how we get from transistors to actual computation, especially without motors and gears.

Keywords

💡Punched-card tabulating machine

The tabulating machine was an early-20th-century computing device used to aid or replace manual data processing. In the video it represents the earliest computing technology, showing how such devices helped governments and businesses handle large amounts of data and marking the starting point of the growing need for automation and computation.

💡Harvard Mark I

The Harvard Mark I was a large electromechanical computer completed by IBM in 1944 for the Allies during World War II. It contained 765,000 components, three million connections, and five hundred miles of wire. In the video it is an important milestone in early computing, illustrating the scale and complexity of the technology of the time.

💡Relay

A relay is an electrically controlled mechanical switch used in early computers; current through a control coil opens or closes a circuit. The video compares a relay to a water faucet to explain how it works, and points out its limitations: relays are slow and wear out, which limited how complex a problem these machines could tackle.

💡Computer bug

During a malfunction of the Harvard Mark II, operators pulled a dead moth from a faulty relay, which gave rise to the term "computer bug." The word is now widely used for any error or problem in a computer system.

💡Vacuum tube

The vacuum tube is an electrical component developed by John Ambrose Fleming in 1904, the first electronic component built around a vacuum. In the video, its arrival marks the shift from mechanical relays toward faster, more reliable electronic switches and lays the groundwork for later computers.

💡Triode

The triode is Lee de Forest's 1906 improvement on Fleming's vacuum tube design, adding a third control electrode to switch the flow of electrons on and off. In the video the triode is a key component of electronic computing: it has no moving parts, so it wears less and switches far faster than a mechanical relay.

💡Colossus

The Colossus Mk 1, designed by engineer Tommy Flowers and completed in 1943, was used to help decrypt Nazi communications. In the video it is regarded as the first programmable electronic computer; it used 1,600 vacuum tubes and played an important role in World War II.

💡ENIAC

ENIAC (the Electronic Numerical Integrator and Calculator) was completed in 1946 at the University of Pennsylvania as the world's first truly general-purpose programmable electronic computer. In the video, ENIAC performs 5,000 ten-digit additions or subtractions per second, far faster than any machine before it, representing a huge leap in computing.

💡Transistor

The transistor was invented in 1947 by Bell Labs scientists John Bardeen, Walter Brattain, and William Shockley. It is an electronic switch that is smaller, faster, and more reliable than a vacuum tube. In the video, the transistor marks the start of a new era, making computers smaller and cheaper and eventually bringing them into offices and homes.

💡Silicon Valley

Silicon Valley is the nickname of California's Santa Clara Valley, the center of semiconductor and computer development. In the video, the name comes from silicon, the material most commonly used to make semiconductors there. Silicon Valley is the birthplace of many major technology companies, including Fairchild Semiconductor and Intel, today the world's largest computer-chip maker.

💡Semiconductor

A semiconductor is a material between a conductor and an insulator that can conduct electricity or resist it as needed. In the video, semiconductors are the key material in transistors: the gate electrode changes the charge in the semiconductor to control whether current flows. Advances in semiconductor technology were decisive for the miniaturization and performance of modern electronics and computers.

Highlights

In the early 20th century, special-purpose computing devices such as tabulating machines were a huge boon to governments and businesses.

In the first half of the 20th century the world's population almost doubled, and two world wars mobilized tens of millions of people.

Global trade and transit networks became interconnected like never before.

The Harvard Mark I, built by IBM in the 1940s for the Allies in World War II, was one of the largest electromechanical computers.

The Harvard Mark I used a 50-foot shaft driven by a five-horsepower motor to keep its internal mechanics synchronized.

One of the earliest uses of this technology was running simulations for the Manhattan Project.

Relays, electrically controlled mechanical switches, were the brains of these huge electromechanical beasts.

A good relay in the 1940s could flick back and forth fifty times a second, but that was not fast enough for solving large, complex problems.

The Harvard Mark I could do 3 additions or subtractions per second; a multiplication took 6 seconds and a division 15.

Wear on mechanical parts over time was another factor that limited these machines.

The Harvard Mark I had roughly 3,500 relays; even assuming a relay lasts 10 years, that means replacing, on average, one faulty relay every day.

In September 1947, operators of the Harvard Mark II pulled a dead moth from a malfunctioning relay, which is where the term "computer bug" comes from.

John Ambrose Fleming developed the first vacuum tube, a new electrical component, in 1904.

Lee de Forest added a third "control" electrode in 1906, inventing the triode vacuum tube.

Vacuum tubes have no moving parts, which means less wear, and they can switch thousands of times per second.

The Colossus Mk 1, designed by engineer Tommy Flowers and completed in December 1943, was the first large-scale use of vacuum tubes for computing.

ENIAC (the Electronic Numerical Integrator and Calculator), completed in 1946 at the University of Pennsylvania, was the world's first truly general-purpose programmable electronic computer.

By the 1950s even vacuum-tube computers were reaching their limits.

In 1947 Bell Labs scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and a new era of computing was born.

Like a relay or vacuum tube, a transistor is a switch that can be opened or closed by applying power via a control wire.

The IBM 608, released in 1957, was the first fully transistor-powered, commercially available computer; it contained 3,000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, per second.

Today's computers use transistors smaller than 50 nanometers; they are not only tiny but extremely fast, switching millions of times per second and running for decades.

Transcripts

play00:02

Our last episode brought us to the start of the 20th century, where early, special purpose

play00:07

computing devices, like tabulating machines, were a huge boon to governments and business

play00:10

- aiding, and sometimes replacing, rote manual tasks. But the scale of human systems continued

play00:15

to increase at an unprecedented rate. The first half of the 20th century saw the

play00:19

world’s population almost double. World War 1 mobilized 70 million people, and World

play00:24

War 2 involved more than 100 million. Global trade and transit networks became interconnected

play00:28

like never before, and the sophistication of our engineering and scientific endeavors

play00:32

reached new heights – we even started to seriously consider visiting other planets.

play00:36

And it was this explosion of complexity, bureaucracy, and ultimately data, that drove an increasing

play00:41

need for automation and computation. Soon those cabinet-sized electro-mechanical

play00:46

computers grew into room-sized behemoths that were expensive to maintain and prone to errors.

play00:52

And it was these machines that would set the stage for future innovation.

play00:55

INTRO

play01:04

One of the largest electro-mechanical computers

play01:06

built was the Harvard Mark I, completed in 1944 by IBM for the Allies during World War 2.

play01:12

It contained 765,000 components, three million connections, and five hundred miles of wire.

play01:19

To keep its internal mechanics synchronized,

play01:21

it used a 50-foot shaft running right through the machine driven by a five horsepower motor.

play01:26

One of the earliest uses for this technology was running simulations for the Manhattan Project.

play01:30

The brains of these huge electro-mechanical

play01:32

beasts were relays: electrically-controlled mechanical switches. In a relay, there is

play01:37

a control wire that determines whether a circuit is opened or closed. The control wire connects

play01:42

to a coil of wire inside the relay. When current flows through the coil, an electromagnetic

play01:47

field is created, which in turn, attracts a metal arm inside the relay, snapping it

play01:51

shut and completing the circuit. You can think of a relay like a water faucet. The control

play01:56

wire is like the faucet handle. Open the faucet, and water flows through the pipe. Close the

play02:00

faucet, and the flow of water stops.

play02:02

Relays are doing the same thing, just with

play02:04

electrons instead of water. The controlled circuit can then connect to other circuits,

play02:08

or to something like a motor, which might increment a count on a gear, like in Hollerith's

play02:13

tabulating machine we talked about last episode. Unfortunately, the mechanical arm inside of

play02:17

a relay *has mass*, and therefore can’t move instantly between opened and closed states.

play02:21

A good relay in the 1940’s might be able to flick back and forth fifty times in a second.

play02:26

That might seem pretty fast, but it’s not fast enough to be useful at solving large,

play02:31

complex problems. The Harvard Mark I could do 3 additions or

play02:34

subtractions per second; multiplications took 6 seconds, and divisions took 15.

play02:39

And more complex operations, like a trigonometric function, could take over a minute.

play02:44

In addition to slow switching speed, another limitation was wear and tear. Anything mechanical

play02:48

that moves will wear over time. Some things break entirely, and other things start getting

play02:52

sticky, slow, and just plain unreliable.

play02:54

And as the number of relays increases, the

play02:56

probability of a failure increases too. The Harvard Mark I had roughly 3500 relays. Even

play03:02

if you assume a relay has an operational life of 10 years, this would mean you’d have

play03:06

to replace, on average, one faulty relay every day! That’s a big problem when you are in

play03:11

the middle of running some important, multi-day calculation.

play03:14

And that’s not all engineers had to contend with. These huge, dark, and warm machines

play03:18

also attracted insects. In September 1947, operators on the Harvard Mark II pulled a

play03:23

dead moth from a malfunctioning relay. Grace Hopper who we’ll talk more about in a later episode noted,

play03:28

“From then on, when anything went wrong with a computer,

play03:31

we said it had bugs in it.”

play03:32

And that’s where we get the term computer bug.

play03:34

It was clear that a faster, more reliable alternative to electro-mechanical relays was

play03:38

needed if computing was going to advance further, and fortunately that alternative already existed!

play03:43

In 1904, English physicist John Ambrose Fleming developed a new electrical component called

play03:49

a thermionic valve, which housed two electrodes inside an airtight glass bulb - this was the

play03:54

first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons

play03:59

– a process called thermionic emission. The other electrode could then attract these

play04:04

electrons to create the flow of our electric faucet, but only if it was positively charged

play04:09

- if it had a negative or neutral charge, the electrons would no longer be attracted

play04:13

across the vacuum so no current would flow.

play04:15

An electronic component that permits the one-way

play04:17

flow of current is called a diode, but what was really needed was a switch to help turn

play04:22

this flow on and off. Luckily, shortly after, in 1906, American inventor Lee de Forest added

play04:28

a third “control” electrode that sits between the two electrodes in Fleming’s design.

play04:32

By applying a positive charge to the control electrode, it would permit the flow

play04:36

of electrons as before. But if the control electrode was given a negative charge, it

play04:41

would prevent the flow of electrons. So by manipulating the control wire, one could

play04:45

open or close the circuit. It’s pretty much the same thing as a relay - but importantly,

play04:49

vacuum tubes have no moving parts. This meant there was less wear, and more importantly,

play04:53

they could switch thousands of times per second. These triode vacuum tubes would become the

play04:58

basis of radio, long distance telephone, and many other electronic devices for nearly a

play05:02

half century. I should note here that vacuum tubes weren’t perfect - they’re kind of

play05:07

fragile, and can burn out like light bulbs, but they were a big improvement over mechanical relays.

play05:11

Also, initially vacuum tubes were expensive

play05:14

– a radio set often used just one, but a computer might require hundreds or thousands of electrical switches.

play05:20

But by the 1940s, their cost and reliability had improved to

play05:23

the point where they became feasible for use in computers…. at least by people with deep

play05:28

pockets, like governments. This marked the shift from electro-mechanical

play05:31

computing to electronic computing. Let’s go to the Thought Bubble.

play05:35

The first large-scale use of vacuum tubes for computing was the Colossus Mk 1 designed

play05:40

by engineer Tommy Flowers and completed in December of 1943. The Colossus was installed

play05:46

at Bletchley Park, in the UK, and helped to decrypt Nazi communications.

play05:50

This may sound familiar because two years prior Alan Turing, often called the father

play05:54

of computer science, had created an electromechanical device, also at Bletchley Park, called the

play05:59

Bombe. It was an electromechanical machine designed to break Nazi Enigma codes, but the

play06:04

Bombe wasn’t technically a computer, and we’ll get to Alan Turing’s contributions

play06:08

later. Anyway, the first version of Colossus contained

play06:10

1,600 vacuum tubes, and in total, ten Colossi were built to help with code-breaking.

play06:17

Colossus is regarded as the first programmable, electronic computer.

play06:20

Programming was done by plugging hundreds of wires into plugboards, sort of like old

play06:25

school telephone switchboards, in order to set up the computer to perform the right operations.

play06:29

So while “programmable”, it still had to be configured to perform a specific computation.

play06:35

Enter the Electronic Numerical Integrator and Calculator – or ENIAC – completed

play06:40

a few years later in 1946 at the University of Pennsylvania.

play06:44

Designed by John Mauchly and J. Presper Eckert, this was the world's first truly general purpose,

play06:50

programmable, electronic computer.

play06:52

ENIAC could perform 5000 ten-digit additions or subtractions per second, many, many times

play06:58

faster than any machine that came before it. It was operational for ten years, and is estimated

play07:03

to have done more arithmetic than the entire human race up to that point.

play07:07

But with that many vacuum tubes failures were common, and ENIAC was generally only operational

play07:11

for about half a day at a time before breaking down.

play07:14

Thanks Thought Bubble. By the 1950’s, even vacuum-tube-based computing was reaching its limits.

play07:19

The US Air Force’s AN/FSQ-7 computer, which was completed in 1955, was part of the

play07:25

“SAGE” air defense computer system we’ll talk more about in a later episode.

play07:29

To reduce cost and size, as well as improve reliability and speed, a radical new electronic

play07:34

switch would be needed. In 1947, Bell Laboratory scientists John Bardeen, Walter Brattain,

play07:39

and William Shockley invented the transistor, and with it, a whole new era of computing was born!

play07:45

The physics behind transistors is pretty complex, relying on quantum mechanics,

play07:49

so we’re going to stick to the basics.

play07:51

A transistor is just like a relay or vacuum tube - it’s a switch that can be opened

play07:56

or closed by applying electrical power via a control wire. Typically, transistors have

play08:00

two electrodes separated by a material that sometimes can conduct electricity, and other

play08:05

times resist it – a semiconductor. In this case, the control wire attaches to

play08:10

a “gate” electrode. By changing the electrical charge of the gate, the conductivity of the

play08:15

semiconducting material can be manipulated, allowing current to flow or be stopped – like

play08:20

the water faucet analogy we discussed earlier. Even the very first transistor at Bell Labs

play08:24

showed tremendous promise – it could switch between on and off states 10,000 times per second.

play08:30

Further, unlike vacuum tubes made of glass and with carefully suspended, fragile

play08:34

components, transistors were solid material known as a solid state component.

play08:39

Almost immediately, transistors could be made smaller than the smallest possible relays or vacuum tubes.

play08:43

This led to dramatically smaller and cheaper computers, like the IBM 608, released in 1957

play08:50

– the first fully transistor-powered, commercially-available computer.

play08:53

It contained 3000 transistors and could perform 4,500 additions, or roughly

play08:59

80 multiplications or divisions, every second. IBM soon transitioned all of its computing

play09:04

products to transistors, bringing transistor-based computers into offices, and eventually, homes.

play09:10

Today, computers use transistors that are smaller than 50 nanometers in size – for

play09:14

reference, a sheet of paper is roughly 100,000 nanometers thick. And they’re not only incredibly

play09:20

small, they’re super fast – they can switch states millions of times per second, and can run for decades.

play09:27

A lot of this transistor and semiconductor development happened in the Santa Clara Valley,

play09:31

between San Francisco and San Jose, California.

play09:34

As the most common material used to create semiconductors is silicon, this

play09:38

region soon became known as Silicon Valley. Even William Shockley moved there, founding

play09:43

Shockley Semiconductor, whose employees later founded

play09:46

Fairchild Semiconductors, whose employees later founded

play09:49

Intel - the world’s largest computer chip maker today.

play09:52

Ok, so we’ve gone from relays to vacuum tubes to transistors. We can turn electricity

play09:57

on and off really, really, really fast. But how do we get from transistors to actually

play10:02

computing something, especially if we don’t have motors and gears?

play10:06

That’s what we’re going to cover over the next few episodes.

play10:09

Thanks for watching. See you next week.
