Electronic Computing: Crash Course Computer Science #2
Summary
TLDR: This video surveys the development of computing from the early to mid 20th century. Early special-purpose devices such as tabulating machines were a huge boon to governments and businesses, aiding or replacing rote manual tasks. As the population grew, two world wars mobilized tens of millions of people, global trade and transit networks became interconnected, and engineering and scientific endeavors grew more sophisticated, the need for automation and computation kept rising. Early electro-mechanical computers grew into room-sized machines that were expensive to maintain and prone to errors. In 1944, IBM completed the Harvard Mark I for the Allies during World War II; one of the largest electro-mechanical computers ever built, it contained 765,000 components and 500 miles of wire. The "brains" of these machines were relays: electrically-controlled mechanical switches. But a relay's mechanical arm has mass and cannot switch instantly, which limited computing speed: the Harvard Mark I could perform 3 additions or subtractions per second, while multiplications took 6 seconds and divisions took 15. Wear and failure of the mechanical parts were also problems. In 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay, giving rise to the term "computer bug". Advancing computing required a faster, more reliable alternative to the relay, and fortunately one already existed: the vacuum tube. In 1904, English physicist John Ambrose Fleming developed the first vacuum tube, and in 1906 American inventor Lee de Forest added a third "control" electrode, creating the triode vacuum tube, which had no moving parts and could switch thousands of times per second. This marked the shift from electro-mechanical to electronic computing. The video also covers the Colossus Mk 1, the first computer to use vacuum tubes at scale, and ENIAC, the world's first general-purpose programmable electronic computer. By the 1950s even vacuum-tube computers were reaching their limits, and a new electronic switch was urgently needed: the transistor. Invented in 1947 by scientists at Bell Labs, the transistor opened a new era of computing. Smaller, faster, and more reliable than vacuum tubes, transistors made computers smaller and cheaper. Much of the development of transistors and semiconductors took place in California's Santa Clara Valley, known today as Silicon Valley. The video closes with a question: how do we get from transistors to actual computation, especially without motors and gears? That question is answered over the next few episodes.
Takeaways
- 📈 In the early 20th century, special-purpose computing devices like tabulating machines were a huge help to governments and businesses, aiding and sometimes replacing rote manual tasks.
- 🌐 In the first half of the 20th century, the world's population nearly doubled, two world wars mobilized tens of millions of people, and global trade and transit networks became interconnected like never before.
- 🚀 This explosion of complexity, bureaucracy, and data drove an ever-increasing need for automation and computation.
- 💻 Early electro-mechanical computers grew into room-sized behemoths that were expensive to maintain and prone to errors.
- 🔌 Relays, electrically-controlled mechanical switches, were the "brains" of these giant electro-mechanical computers, but they were slow and wore out.
- 🔢 The Harvard Mark I could perform 3 additions or subtractions per second; multiplications took 6 seconds and divisions took 15.
- 🐞 In 1947, a dead moth was pulled from a malfunctioning relay in the Harvard Mark II, giving rise to the term "computer bug".
- 🌟 John Ambrose Fleming developed the first vacuum tube, a new kind of electrical component, in 1904.
- 🛠️ Lee de Forest added a third "control" electrode in 1906, creating the triode vacuum tube, which had no moving parts and switched much faster.
- 📡 The Colossus was the first computer to use vacuum tubes at scale and is regarded as the first programmable electronic computer.
- 🔢 ENIAC was the first truly general-purpose programmable electronic computer, capable of 5000 ten-digit additions or subtractions per second.
- 🚨 By the 1950s, even vacuum-tube computers were reaching their physical limits; a new electronic switch was needed.
- 📍 In 1947, scientists at Bell Labs invented the transistor, opening a new era of computing.
- ⚙️ A transistor is a switch that can be opened or closed by applying electrical power via a control wire; transistors are smaller, faster, and more reliable than vacuum tubes.
- 🏭 California's Santa Clara Valley became the center of semiconductor development, later known as Silicon Valley.
- 📉 The progression from relays to vacuum tubes to transistors dramatically increased how fast electricity could be switched on and off, laying the foundation for later computer development.
Q & A
What impact did special-purpose computing devices have on governments and businesses in the early 20th century?
-In the early 20th century, special-purpose computing devices such as tabulating machines were a huge boon to governments and businesses, aiding and sometimes replacing rote manual tasks and improving the efficiency of data processing.
Why did the need for automation and computation keep growing in the first half of the 20th century?
-Rapid population growth, the mobilization of two world wars, the interconnection of global trade and transit networks, and the growing sophistication of engineering and scientific endeavors all drove an increasing need for automation and computation.
Which company built the Harvard Mark I, and for what purpose?
-The Harvard Mark I was built by IBM for the Allies during World War II. It contained 765,000 components, three million connections, and five hundred miles of wire.
What is a relay, and how does it work?
-A relay is an electrically-controlled mechanical switch. A control wire determines whether a circuit is opened or closed; it connects to a coil of wire inside the relay. When current flows through the coil, an electromagnetic field is created, which attracts a metal arm inside the relay, snapping it shut and completing the circuit.
How fast was the Harvard Mark I?
-The Harvard Mark I could do 3 additions or subtractions per second; multiplications took 6 seconds and divisions took 15. More complex operations, like a trigonometric function, could take over a minute.
Why weren't mechanical relays fast enough?
-The mechanical arm inside a relay has mass and therefore cannot move instantly between opened and closed states. Even a good relay in the 1940s could only flick back and forth about fifty times per second.
Where does the term computer "bug" come from?
-In September 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay. Grace Hopper later noted, "From then on, when anything went wrong with a computer, we said it had bugs in it," popularizing the term.
What electrical component did John Ambrose Fleming develop in 1904?
-In 1904, John Ambrose Fleming developed a component called the thermionic valve, which was the first vacuum tube.
What advantages did the triode vacuum tube have over mechanical relays?
-Triode vacuum tubes have no moving parts, which means less wear, and they can switch thousands of times per second, far faster than mechanical relays.
When was the Colossus Mk 1 designed, and by whom?
-The Colossus Mk 1 was designed by engineer Tommy Flowers and completed in December 1943. It was installed at Bletchley Park in the UK and helped decrypt Nazi communications.
ENIAC was the world's first truly general-purpose programmable electronic computer. In what year was it completed?
-ENIAC was completed in 1946 at the University of Pennsylvania, designed by John Mauchly and J. Presper Eckert.
How did the transistor improve on the vacuum tube?
-Transistors are solid state components, far more robust than fragile vacuum tubes, and much smaller. Even the very first transistor could switch states 10,000 times per second, far faster than a vacuum tube.
Why is Silicon Valley called Silicon Valley?
-The Santa Clara Valley, between San Francisco and San Jose, California, became known as Silicon Valley because silicon is the most common material used there to create semiconductors.
When was the IBM 608 released, and what was notable about it?
-Released in 1957, the IBM 608 was the first fully transistor-powered, commercially-available computer. It contained 3000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, per second.
Outlines
😀 Early 20th-century computing devices and the demand for automation
In the early 20th century, a booming population, two world wars, the interconnection of global trade and transit networks, and the growing sophistication of engineering and scientific endeavors drove an ever-increasing need for automation and computation. Early special-purpose computing devices, such as tabulating machines, were a huge boon to automating government and business operations. These devices gradually evolved into huge, expensive electro-mechanical computers that were costly to maintain and prone to errors. Among them, the Harvard Mark I, completed by IBM for the Allies during World War II, was one of the largest electro-mechanical computers, containing 765,000 components and five hundred miles of wire. At the core of these machines were relays, electrically-controlled mechanical switches; but because a relay's mechanical arm has mass, it could not switch fast enough to solve large, complex problems effectively. Moreover, mechanical parts wear out, and the probability of failure grows with the number of relays, making maintenance difficult.
🤖 Vacuum tubes and the development of early electronic computers
To overcome the limits of electro-mechanical computers, scientists sought a faster, more reliable alternative, which led to the vacuum tube. The first vacuum tube, also called a thermionic valve, was developed by John Ambrose Fleming in 1904. In 1906, Lee de Forest added a third "control" electrode, creating an electronic switch that could open and close rapidly. With no moving parts, vacuum tubes suffered less wear and could switch thousands of times per second, becoming the basis of radio, long-distance telephone, and other electronic devices. Though an improvement over mechanical relays, tubes were still expensive and fragile. During World War II, the Colossus Mk 1, designed by engineer Tommy Flowers, became the first computer to use vacuum tubes at scale and helped decrypt Nazi communications. ENIAC, designed by John Mauchly and J. Presper Eckert, was the world's first general-purpose programmable electronic computer, far faster than any machine before it. But because of tube failures, ENIAC was generally only operational for about half a day at a time. By the 1950s, even vacuum-tube computers were reaching their limits, prompting the search for a new electronic switch: the transistor.
🏆 The invention of the transistor and the rise of Silicon Valley
In 1947, Bell Labs scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, opening a new era of computing. The transistor's operation relies on quantum mechanics; it is a switch that can be opened or closed by applying electrical power via a control wire. Transistors use semiconductor material: by changing the charge on a "gate" electrode, the conductivity of the material can be manipulated, allowing current to flow or be stopped. Transistors were smaller, faster, and more reliable than vacuum tubes; even the first one could switch states 10,000 times per second. Their invention led to dramatically smaller and cheaper computers, such as the IBM 608, the first fully transistorized commercial computer, released in 1957. Much of the development of transistors and semiconductors happened in California's Santa Clara Valley, today's Silicon Valley. The valley became the center of semiconductor and chip manufacturing; William Shockley founded Shockley Semiconductor there, whose employees later founded Fairchild Semiconductor, whose employees in turn founded Intel, today the world's largest computer chip maker.
📺 Next episode preview
The video closes with a short teaser: the next episodes will explore how we get from transistors to actual computation, especially without motors and gears.
Keywords
💡Punched-card tabulating machine
💡Harvard Mark I
💡Relay
💡Computer bug
💡Vacuum tube
💡Triode
💡Colossus
💡ENIAC
💡Transistor
💡Silicon Valley
💡Semiconductor
Highlights
In the early 20th century, special-purpose computing devices like tabulating machines were a huge boon to governments and businesses.
In the first half of the 20th century, the world's population almost doubled, and two world wars mobilized tens of millions of people.
Global trade and transit networks became interconnected like never before.
The Harvard Mark I, built by IBM in the 1940s for the Allies during World War II, was one of the largest electro-mechanical computers.
The Harvard Mark I used a 50-foot shaft driven by a five-horsepower motor to keep its internal mechanics synchronized.
One of the earliest uses for this technology was running simulations for the Manhattan Project.
Relays, electrically-controlled mechanical switches, were the brains of these huge electro-mechanical beasts.
A good relay in the 1940s could flick back and forth fifty times per second, but that was not fast enough for solving large, complex problems.
The Harvard Mark I could do 3 additions or subtractions per second; multiplications took 6 seconds and divisions took 15.
Wear and tear on mechanical parts over time was another factor limiting these machines.
The Harvard Mark I had roughly 3500 relays; even assuming a 10-year operational life per relay, on average one faulty relay would need replacing every day.
In September 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay, giving rise to the term "computer bug".
John Ambrose Fleming developed the first vacuum tube, a new electrical component, in 1904.
Lee de Forest added a third "control" electrode in 1906, inventing the triode vacuum tube.
Vacuum tubes have no moving parts, meaning less wear, and they can switch thousands of times per second.
The Colossus Mk 1, designed by engineer Tommy Flowers and completed in December 1943, was the first large-scale use of vacuum tubes for computing.
ENIAC (Electronic Numerical Integrator and Computer), completed in 1946 at the University of Pennsylvania, was the world's first truly general-purpose programmable electronic computer.
By the 1950s, even vacuum-tube-based computing was reaching its limits.
In 1947, Bell Labs scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, opening a new era of computing.
Like a relay or vacuum tube, a transistor is a switch that can be opened or closed by applying electrical power via a control wire.
The IBM 608, released in 1957, was the first fully transistorized, commercially-available computer; it contained 3000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, per second.
Today's computers use transistors smaller than 50 nanometers; they are not only incredibly small but super fast, switching millions of times per second, and they can run for decades.
Transcripts
Our last episode brought us to the start of the 20th century, where early, special purpose
computing devices, like tabulating machines, were a huge boon to governments and business
- aiding, and sometimes replacing, rote manual tasks. But the scale of human systems continued
to increase at an unprecedented rate. The first half of the 20th century saw the
world’s population almost double. World War 1 mobilized 70 million people, and World
War 2 involved more than 100 million. Global trade and transit networks became interconnected
like never before, and the sophistication of our engineering and scientific endeavors
reached new heights – we even started to seriously consider visiting other planets.
And it was this explosion of complexity, bureaucracy, and ultimately data, that drove an increasing
need for automation and computation. Soon those cabinet-sized electro-mechanical
computers grew into room-sized behemoths that were expensive to maintain and prone to errors.
And it was these machines that would set the stage for future innovation.
INTRO
One of the largest electro-mechanical computers
built was the Harvard Mark I, completed in 1944 by IBM for the Allies during World War 2.
It contained 765,000 components, three million connections, and five hundred miles of wire.
To keep its internal mechanics synchronized,
it used a 50-foot shaft running right through the machine driven by a five horsepower motor.
One of the earliest uses for this technology was running simulations for the Manhattan Project.
The brains of these huge electro-mechanical
beasts were relays: electrically-controlled mechanical switches. In a relay, there is
a control wire that determines whether a circuit is opened or closed. The control wire connects
to a coil of wire inside the relay. When current flows through the coil, an electromagnetic
field is created, which in turn, attracts a metal arm inside the relay, snapping it
shut and completing the circuit. You can think of a relay like a water faucet. The control
wire is like the faucet handle. Open the faucet, and water flows through the pipe. Close the
faucet, and the flow of water stops.
Relays are doing the same thing, just with
electrons instead of water. The controlled circuit can then connect to other circuits,
or to something like a motor, which might increment a count on a gear, like in Hollerith's
tabulating machine we talked about last episode. Unfortunately, the mechanical arm inside of
a relay *has mass*, and therefore can’t move instantly between opened and closed states.
A good relay in the 1940’s might be able to flick back and forth fifty times in a second.
That might seem pretty fast, but it’s not fast enough to be useful at solving large,
complex problems. The Harvard Mark I could do 3 additions or
subtractions per second; multiplications took 6 seconds, and divisions took 15.
And more complex operations, like a trigonometric function, could take over a minute.
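The relay described in this passage is, logically, just a switch driven by a control wire. As a minimal sketch (the class and method names are illustrative, not from the video), the open/closed behavior can be modeled like this:

```python
class Relay:
    """Toy model of an electromechanical relay: a switch driven by a control wire.

    When current flows through the coil (energized), the electromagnetic field
    pulls the metal arm shut and the controlled circuit conducts.
    """

    def __init__(self):
        self.energized = False  # is current flowing through the control coil?

    def energize(self):
        self.energized = True

    def de_energize(self):
        self.energized = False

    @property
    def closed(self):
        # The arm simply follows the coil: the circuit is complete
        # only while the coil is energized.
        return self.energized


relay = Relay()
assert not relay.closed   # no coil current: circuit open, nothing flows
relay.energize()
assert relay.closed       # coil energized: arm snaps shut, current flows
```

The same control-wire abstraction applies unchanged to the vacuum tubes and transistors introduced later in the episode; only the physical mechanism behind `energize` differs.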
In addition to slow switching speed, another limitation was wear and tear. Anything mechanical
that moves will wear over time. Some things break entirely, and other things start getting
sticky, slow, and just plain unreliable.
And as the number of relays increases, the
probability of a failure increases too. The Harvard Mark I had roughly 3500 relays. Even
if you assume a relay has an operational life of 10 years, this would mean you’d have
to replace, on average, one faulty relay every day! That’s a big problem when you are in
the middle of running some important, multi-day calculation.
And that’s not all engineers had to contend with. These huge, dark, and warm machines
also attracted insects. In September 1947, operators on the Harvard Mark II pulled a
dead moth from a malfunctioning relay. Grace Hopper who we’ll talk more about in a later episode noted,
“From then on, when anything went wrong with a computer,
we said it had bugs in it.”
And that’s where we get the term computer bug.
It was clear that a faster, more reliable alternative to electro-mechanical relays was
needed if computing was going to advance further, and fortunately that alternative already existed!
In 1904, English physicist John Ambrose Fleming developed a new electrical component called
a thermionic valve, which housed two electrodes inside an airtight glass bulb - this was the
first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons
– a process called thermionic emission. The other electrode could then attract these
electrons to create the flow of our electric faucet, but only if it was positively charged
- if it had a negative or neutral charge, the electrons would no longer be attracted
across the vacuum so no current would flow.
An electronic component that permits the one-way
flow of current is called a diode, but what was really needed was a switch to help turn
this flow on and off. Luckily, shortly after, in 1906, American inventor Lee de Forest added
a third “control” electrode that sits between the two electrodes in Fleming’s design.
By applying a positive charge to the control electrode, it would permit the flow
of electrons as before. But if the control electrode was given a negative charge, it
would prevent the flow of electrons. So by manipulating the control wire, one could
open or close the circuit. It’s pretty much the same thing as a relay - but importantly,
vacuum tubes have no moving parts. This meant there was less wear, and more importantly,
they could switch thousands of times per second. These triode vacuum tubes would become the
basis of radio, long distance telephone, and many other electronic devices for nearly a
half century. I should note here that vacuum tubes weren't perfect - they're kind of
fragile, and can burn out like light bulbs. But they were a big improvement over mechanical relays.
Also, initially vacuum tubes were expensive
– a radio set often used just one, but a computer might require hundreds or thousands of electrical switches.
But by the 1940s, their cost and reliability had improved to
the point where they became feasible for use in computers…. at least by people with deep
pockets, like governments. This marked the shift from electro-mechanical
computing to electronic computing. Let’s go to the Thought Bubble.
The first large-scale use of vacuum tubes for computing was the Colossus Mk 1 designed
by engineer Tommy Flowers and completed in December of 1943. The Colossus was installed
at Bletchley Park, in the UK, and helped to decrypt Nazi communications.
This may sound familiar because two years prior Alan Turing, often called the father
of computer science, had created an electromechanical device, also at Bletchley Park, called the
Bombe. It was an electromechanical machine designed to break Nazi Enigma codes, but the
Bombe wasn’t technically a computer, and we’ll get to Alan Turing’s contributions
later. Anyway, the first version of Colossus contained
1,600 vacuum tubes, and in total, ten Colossi were built to help with code-breaking.
Colossus is regarded as the first programmable, electronic computer.
Programming was done by plugging hundreds of wires into plugboards, sort of like old
school telephone switchboards, in order to set up the computer to perform the right operations.
So while “programmable”, it still had to be configured to perform a specific computation.
Enter the Electronic Numerical Integrator and Computer – or ENIAC – completed
a few years later in 1946 at the University of Pennsylvania.
Designed by John Mauchly and J. Presper Eckert, this was the world's first truly general purpose,
programmable, electronic computer.
ENIAC could perform 5000 ten-digit additions or subtractions per second, many, many times
faster than any machine that came before it. It was operational for ten years, and is estimated
to have done more arithmetic than the entire human race up to that point.
But with that many vacuum tubes, failures were common, and ENIAC was generally only operational
for about half a day at a time before breaking down.
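Putting the quoted rates side by side shows just how large the jump to electronic computing was:

```python
mark_i_adds_per_sec = 3      # Harvard Mark I: 3 additions/subtractions per second
eniac_adds_per_sec = 5000    # ENIAC: 5000 ten-digit additions/subtractions per second

speedup = eniac_adds_per_sec / mark_i_adds_per_sec
print(round(speedup))  # ≈ 1667x faster at addition than the Mark I
```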
Thanks Thought Bubble. By the 1950’s, even vacuum-tube-based computing was reaching its limits.
The US Air Force’s AN/FSQ-7 computer, which was completed in 1955, was part of the
“SAGE” air defense computer system we’ll talk more about in a later episode.
To reduce cost and size, as well as improve reliability and speed, a radical new electronic
switch would be needed. In 1947, Bell Laboratory scientists John Bardeen, Walter Brattain,
and William Shockley invented the transistor, and with it, a whole new era of computing was born!
The physics behind transistors is pretty complex, relying on quantum mechanics,
so we’re going to stick to the basics.
A transistor is just like a relay or vacuum tube - it’s a switch that can be opened
or closed by applying electrical power via a control wire. Typically, transistors have
two electrodes separated by a material that sometimes can conduct electricity, and other
times resist it – a semiconductor. In this case, the control wire attaches to
a “gate” electrode. By changing the electrical charge of the gate, the conductivity of the
semiconducting material can be manipulated, allowing current to flow or be stopped – like
the water faucet analogy we discussed earlier. Even the very first transistor at Bell Labs
showed tremendous promise – it could switch between on and off states 10,000 times per second.
Further, unlike vacuum tubes made of glass and with carefully suspended, fragile
components, transistors were solid material known as a solid state component.
Almost immediately, transistors could be made smaller than the smallest possible relays or vacuum tubes.
This led to dramatically smaller and cheaper computers, like the IBM 608, released in 1957
– the first fully transistor-powered, commercially-available computer.
It contained 3000 transistors and could perform 4,500 additions, or roughly
80 multiplications or divisions, every second. IBM soon transitioned all of its computing
products to transistors, bringing transistor-based computers into offices, and eventually, homes.
Today, computers use transistors that are smaller than 50 nanometers in size – for
reference, a sheet of paper is roughly 100,000 nanometers thick. And they’re not only incredibly
small, they’re super fast – they can switch states millions of times per second, and can run for decades.
A lot of this transistor and semiconductor development happened in the Santa Clara Valley,
between San Francisco and San Jose, California.
As the most common material used to create semiconductors is silicon, this
region soon became known as Silicon Valley. Even William Shockley moved there, founding
Shockley Semiconductor, whose employees later founded
Fairchild Semiconductor, whose employees later founded
Intel - the world’s largest computer chip maker today.
Ok, so we’ve gone from relays to vacuum tubes to transistors. We can turn electricity
on and off really, really, really fast. But how do we get from transistors to actually
computing something, especially if we don’t have motors and gears?
That’s what we’re going to cover over the next few episodes.
Thanks for watching. See you next week.