Electronic Computing: Crash Course Computer Science #2

CrashCourse
1 Mar 2017 · 10:43

Summary

TL;DR: This episode delves into the evolution of computing from the early 20th century, highlighting the limitations of electro-mechanical devices like the Harvard Mark I and the subsequent shift to electronic computing with vacuum tubes. It discusses the development of the first programmable electronic computer, Colossus Mk 1, and the ENIAC, which significantly accelerated computation speeds. The script also introduces the transistor, invented in 1947, which revolutionized computing by enabling faster, smaller, and more reliable devices, leading to the rise of Silicon Valley and paving the way for modern computer technology.

Takeaways

  • 📊 The early 20th century saw a significant increase in global complexity and data, necessitating more advanced computing solutions.
  • 🏭 The Harvard Mark I, completed in 1944, was a massive electro-mechanical computer used for military purposes during World War 2.
  • 🔌 Relays, used in early computers, were mechanical switches that limited processing speed due to their slow operation and susceptibility to wear.
  • 🐛 The term 'computer bug' originated from a moth found in the Harvard Mark II, symbolizing the first recorded computer glitch.
  • 🌟 John Ambrose Fleming's thermionic valve, or vacuum tube, was a pivotal development, offering a faster and more reliable alternative to relays.
  • 🚀 The Colossus Mk 1, developed in 1943, was the first programmable electronic computer, significantly aiding in code-breaking during World War 2.
  • 💡 ENIAC, completed in 1946, was the world's first general-purpose electronic computer, capable of performing complex calculations at unprecedented speeds.
  • 🔝 The limitations of vacuum tubes, including their fragility and high failure rates, led to the search for more efficient computing components.
  • 🔬 The invention of the transistor in 1947 by Bell Labs scientists marked a revolutionary step in computing, offering a smaller, faster, and more durable switch.
  • 🌐 The development of transistors and semiconductors in Silicon Valley has had a profound impact on the tech industry, shaping the modern computer landscape.

Q & A

  • What was the significance of the early 20th century for computing devices?

    -The early 20th century was significant because special-purpose computing devices like tabulating machines were already aiding governments and businesses by automating rote manual tasks, while the rapid growth in the scale of human systems and the volume of data drove the need for even more capable computing.

  • What was the Harvard Mark I and what was its role during World War 2?

    -The Harvard Mark I was one of the largest electro-mechanical computers, completed in 1944 by IBM for the Allies during World War 2. It was used for running simulations for the Manhattan Project, highlighting its role in aiding the war effort through complex calculations.

  • How did the relays function in early electro-mechanical computers?

    -Relays functioned as electrically-controlled mechanical switches. A control wire determined whether a circuit was opened or closed, similar to a water faucet. This allowed for the controlled flow of electrons through circuits, which could then connect to other circuits or devices like motors.
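
As an illustration of that water-faucet analogy, here is a minimal Python sketch (not from the episode; the class and method names are made up for illustration) that models a relay as an electrically controlled switch:

```python
class Relay:
    """Toy model of a relay: an electrically-controlled mechanical switch.

    Energizing the control coil pulls the metal arm shut (the 'faucet' opens);
    de-energizing it lets the arm spring back and breaks the circuit.
    """

    def __init__(self):
        self.coil_energized = False  # is current flowing through the control wire?

    def set_control(self, energized: bool) -> None:
        # The electromagnet holds the arm shut only while the coil is energized.
        self.coil_energized = energized

    def conducts(self) -> bool:
        # The controlled circuit is complete only while the arm is held shut.
        return self.coil_energized


relay = Relay()
relay.set_control(True)
print(relay.conducts())   # True  - circuit closed, 'water' flows
relay.set_control(False)
print(relay.conducts())   # False - circuit open, flow stops
```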

  • What limitations did the Harvard Mark I face in terms of speed and reliability?

    -The Harvard Mark I was limited by the speed of its mechanical relays, which could only switch about fifty times per second. It could perform 3 additions or subtractions per second, with more complex operations like trigonometric functions taking over a minute. Reliability was also an issue due to wear and tear, with an estimated need to replace one faulty relay every day.
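
Those figures can be sanity-checked with a little arithmetic. The sketch below assumes failures are spread evenly over each relay's life and uses only the numbers quoted in the episode:

```python
# Figures quoted in the episode for the Harvard Mark I.
relays = 3500            # approximate number of relays
lifetime_years = 10      # assumed operational life of one relay
adds_per_second = 3      # additions or subtractions per second

# Expected relay replacements per day, assuming failures spread evenly.
failures_per_day = relays / (lifetime_years * 365)
print(f"~{failures_per_day:.1f} relay replacements per day")    # ~1.0

# Time to perform one million additions at 3 per second.
seconds = 1_000_000 / adds_per_second
print(f"1,000,000 additions take ~{seconds / 3600:.0f} hours")  # ~93 hours
```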

  • What is the origin of the term 'computer bug'?

    -The term 'computer bug' originated when operators on the Harvard Mark II found a dead moth in a malfunctioning relay in September 1947. Grace Hopper noted this incident, and from then on, any computer malfunction was referred to as having 'bugs'.

  • What was the thermionic valve and how did it contribute to computing?

    -The thermionic valve, developed by John Ambrose Fleming in 1904, was the first vacuum tube. It housed two electrodes inside an airtight glass bulb and allowed for the one-way flow of current through thermionic emission, providing a faster and more reliable alternative to electro-mechanical relays.
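
A minimal sketch of that one-way behavior, assuming we only track whether the cathode is heated and whether the other electrode (the anode) is positively charged; the function name is illustrative:

```python
def diode_current_flows(cathode_heated: bool, anode_charge: str) -> bool:
    """Toy model of Fleming's thermionic valve (a diode).

    Electrons are emitted only when the cathode is heated, and they are
    pulled across the vacuum only when the anode is positively charged.
    Any other combination means no current flows.
    """
    return cathode_heated and anode_charge == "positive"


print(diode_current_flows(True, "positive"))   # True  - current flows one way
print(diode_current_flows(True, "negative"))   # False - electrons not attracted
print(diode_current_flows(False, "positive"))  # False - no thermionic emission
```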

  • How did the addition of a third electrode by Lee de Forest improve the functionality of vacuum tubes?

    -Lee de Forest's addition of a third 'control' electrode allowed for the manipulation of electron flow by applying a positive or negative charge. This enabled the vacuum tube to act as an electronic switch, which was a significant improvement over the diode and laid the foundation for electronic computing.
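
Building on the diode sketch above, a hedged toy model of the triode acting as an electronic switch, with the control electrode's charge gating the flow:

```python
def triode_conducts(cathode_heated: bool, control_charge: str) -> bool:
    """Toy model of a triode vacuum tube used as a switch.

    A positive charge on the control electrode lets electrons pass across
    the tube; a negative charge repels them and blocks the current.
    """
    if not cathode_heated:
        return False              # no electrons emitted at all
    return control_charge == "positive"


# Flipping the control charge opens and closes the circuit, like a relay
# but with no moving parts.
for charge in ("positive", "negative"):
    print(charge, "->", triode_conducts(True, charge))
```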

  • What was the Colossus Mk 1 and why was it significant?

    -The Colossus Mk 1, designed by Tommy Flowers and completed in 1943, was the first programmable electronic computer. It was significant because it used 1,600 vacuum tubes and was instrumental in decrypting Nazi communications at Bletchley Park, marking a major advancement in electronic computing.

  • What was the ENIAC and how did it differ from previous computing devices?

    -The ENIAC, completed in 1946, was the world's first general-purpose, programmable electronic computer. It differed from previous devices by using vacuum tubes to perform calculations at a much faster rate, capable of 5000 ten-digit additions or subtractions per second, and was a significant leap in computing speed and capability.

  • What were the limitations of vacuum tube-based computers?

    -Vacuum tube-based computers had several limitations, including their fragility, high cost, and the fact that they could burn out like light bulbs. Additionally, they were large, consumed a lot of power, and generated heat, which led to frequent failures and the need for regular maintenance.

  • How did the invention of the transistor revolutionize computing?

    -The transistor, invented in 1947, revolutionized computing by offering a smaller, more reliable, and faster switching device compared to vacuum tubes. It was a solid-state component that could switch states much quicker and was less prone to wear and tear, leading to smaller, cheaper, and more efficient computers.
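
In the same spirit as the relay and triode sketches above, here is a toy model of a transistor as a gate-controlled switch; it glosses over the semiconductor physics entirely and the names are illustrative:

```python
class Transistor:
    """Toy model of a transistor: a solid-state, gate-controlled switch.

    Charging the gate makes the semiconductor between the two electrodes
    conductive; removing the charge makes it resist current again.
    """

    def __init__(self):
        self.gate_charged = False

    def set_gate(self, charged: bool) -> None:
        self.gate_charged = charged

    def conducts(self) -> bool:
        return self.gate_charged


t = Transistor()
t.set_gate(True)
print(t.conducts())   # True  - current flows between the electrodes
t.set_gate(False)
print(t.conducts())   # False - the semiconductor blocks the current
```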

Outlines

00:00

đŸ’» The Dawn of 20th Century Computing

The script begins by recounting the early 20th century, a period marked by the advent of specialized computing devices like tabulating machines, which significantly aided governments and businesses by automating manual tasks. The script highlights the rapid increase in human systems' scale, exemplified by the doubling of the world's population and the mobilization of millions for World Wars 1 and 2. The interconnectedness of global trade and the advancements in engineering and science are noted, setting the stage for the growing need for automation and computation. The narrative then shifts to the limitations of early electro-mechanical computers, which were large, expensive, and prone to errors, with the Harvard Mark I being a notable example. The script explains the workings of relays, which were the 'brains' of these early machines, comparing their function to water faucets. Despite their initial utility, the slow speed and mechanical wear of relays necessitated a search for more efficient computing solutions.

05:03

🔬 The Evolution of Computing: From Relays to Vacuum Tubes

This section delves into the development of the thermionic valve, or vacuum tube, by John Ambrose Fleming in 1904, which marked a significant leap from electro-mechanical relays. The script explains the vacuum tube's operation, highlighting its lack of moving parts and its ability to switch much faster than relays. The vacuum tube's introduction led to the creation of the Colossus Mk 1, the first programmable electronic computer, which played a crucial role in decrypting Nazi communications during World War 2. The script then discusses the Electronic Numerical Integrator and Calculator (ENIAC), designed by John Mauchly and J. Presper Eckert, which was the world's first general-purpose programmable electronic computer. Despite its impressive capabilities, ENIAC was prone to frequent breakdowns due to the large number of vacuum tubes it used. The section concludes with the recognition that vacuum tubes, while an improvement, were reaching their limits and a new technology was needed to advance computing further.

10:06

🚀 The Birth of Silicon Valley and the Transistor Revolution

The final paragraph outlines the invention of the transistor by John Bardeen, Walter Brattain, and William Shockley in 1947, which initiated a new era in computing. The transistor's operation is likened to a water faucet, with the ability to control the flow of electricity. Unlike vacuum tubes, transistors are solid-state components, making them more reliable and compact. The script mentions the IBM 608, one of the first commercially available computers to use transistors, which was much smaller and faster than its predecessors. The development of transistors and semiconductors is traced back to Silicon Valley, highlighting the region's significance in the tech industry. The narrative concludes with a teaser for upcoming episodes, which will explore the transition from transistors to actual computing and the evolution of computer technology.

Keywords

💡Electro-mechanical computers

Electro-mechanical computers are early computing devices that combine electrical and mechanical components to perform calculations. They were a significant advancement from purely mechanical devices, like tabulating machines, and laid the groundwork for modern computers. In the video, the Harvard Mark I is mentioned as an example of an electro-mechanical computer, which was used for complex tasks such as simulations for the Manhattan Project.

💡Relay

A relay is an electrically-controlled mechanical switch that can open or close a circuit. It was a key component in early computers, functioning similarly to a water faucet, where the control wire acts like a handle to start or stop the flow. The video explains that while relays were an improvement over manual switches, they were limited by their mechanical nature, which caused slow operation and wear over time.

💡Vacuum tube

A vacuum tube, also known as a thermionic valve, is an electronic component that allows for the control of electric current flow in a sealed glass envelope from which air has been evacuated. It was a significant innovation over relays because it had no moving parts, which meant faster operation and less wear. The video discusses how vacuum tubes became the basis for electronic devices and computing, leading to machines like the Colossus Mk 1 and ENIAC.

💡Transistor

A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is a key building block of modern electronic devices, including computers. The video highlights the invention of the transistor as a pivotal moment in computing history, as it allowed for the creation of smaller, faster, and more reliable computers compared to those using vacuum tubes or relays.
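
To put the switching speeds quoted across the video side by side, here is a quick back-of-the-envelope script; the vacuum-tube and modern-transistor entries are lower bounds taken from the phrases "thousands" and "millions of times per second":

```python
# Switching rates mentioned in the episode, in switches per second.
switch_rates = {
    "1940s relay": 50,
    "triode vacuum tube": 1_000,        # "thousands of times per second"
    "first transistor (1947)": 10_000,
    "modern transistor": 1_000_000,     # "millions of times per second"
}

baseline = switch_rates["1940s relay"]
for device, rate in switch_rates.items():
    speedup = rate / baseline
    print(f"{device:>24}: {rate:>9,} switches/s  (~{speedup:,.0f}x a 1940s relay)")
```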

💡Harvard Mark I

The Harvard Mark I, also known as the IBM Automatic Sequence Controlled Calculator, is an example of an early electro-mechanical computer. It was used during World War II for various calculations, including those for the Manhattan Project. The video describes its massive scale, with hundreds of thousands of components, and how it was limited by the speed and reliability of its relays.

💡Colossus Mk 1

The Colossus Mk 1 was an early electronic computer designed to decrypt encrypted German messages during World War II. It utilized vacuum tubes and is considered the first programmable electronic computer. The video mentions that the Colossus was a significant step forward in computing, as it could perform complex tasks much faster than its electro-mechanical predecessors.

💡ENIAC

The Electronic Numerical Integrator and Calculator (ENIAC) was one of the first general-purpose electronic digital computers. It was capable of performing a large number of calculations much faster than any previous machine. The video discusses ENIAC's impressive speed and its use of vacuum tubes, which, despite their reliability issues, allowed for a significant leap in computational capabilities.

💡Silicon Valley

Silicon Valley is a region in California known for its high-tech innovation and electronics industry. It got its name from the silicon used in making semiconductors, which are essential components in modern electronics. The video explains how the development of transistors and semiconductors in this region led to the establishment of major tech companies, including Intel.

💡Quantum mechanics

Quantum mechanics is a fundamental theory in physics that describes the behavior of matter and energy at the atomic and subatomic level. It is crucial to understanding how transistors work, as their operation is based on the principles of quantum mechanics. The video briefly mentions the complex physics behind transistors, indicating that their functionality is deeply rooted in this scientific theory.

💡Solid-state component

A solid-state component is a type of electronic component that does not have any moving parts and is made of solid materials, unlike vacuum tubes which are fragile and have delicate internal structures. Transistors are an example of solid-state components. The video highlights the advantage of solid-state components in terms of reliability and miniaturization, which has been essential for the development of modern computers.

Highlights

The 20th century saw a significant increase in the scale of human systems, with the world's population nearly doubling and major wars mobilizing millions.

The need for automation and computation grew due to the explosion of complexity, bureaucracy, and data.

Electro-mechanical computers like the Harvard Mark I were large and prone to errors, setting the stage for future innovation.

The Harvard Mark I, completed in 1944, was a massive electro-mechanical computer used for simulations during World War 2.

Relays, the brains of early computers, were electrically-controlled mechanical switches with limitations in speed and reliability.

The Harvard Mark I could perform only 3 additions or subtractions per second, illustrating the slow speed of early computing.

Mechanical wear and tear, such as the need to replace faulty relays daily, highlighted the maintenance challenges of early computers.

The term 'computer bug' originated from a moth found in the Harvard Mark II, symbolizing the challenges of early computing.

The development of the thermionic valve by John Ambrose Fleming in 1904 marked the beginning of electronic components.

Lee de Forest's addition of a control electrode to the thermionic valve created the triode vacuum tube, a key innovation for electronic switches.

Vacuum tubes, despite being fragile and expensive, were a significant improvement over mechanical relays for computing.

The Colossus Mk 1, designed by Tommy Flowers, was the first programmable electronic computer used for code-breaking during World War 2.

ENIAC, completed in 1946, was the world's first general-purpose, programmable electronic computer, capable of performing 5000 operations per second.

The limitations of vacuum-tube-based computing led to the development of the transistor in 1947 by Bell Labs scientists.

Transistors, being solid-state components, allowed for smaller, faster, and more reliable computers compared to vacuum tubes.

The IBM 608, released in 1957, was the first fully transistor-powered, commercially-available computer, marking a shift in the computing industry.

Silicon Valley, named for its focus on silicon semiconductors, became a hub for transistor and computer chip development.

The evolution from relays to vacuum tubes to transistors paved the way for modern computing, with transistors enabling fast and efficient data processing.

Transcripts

00:02
Our last episode brought us to the start of the 20th century, where early, special purpose computing devices, like tabulating machines, were a huge boon to governments and business - aiding, and sometimes replacing, rote manual tasks. But the scale of human systems continued to increase at an unprecedented rate. The first half of the 20th century saw the world’s population almost double. World War 1 mobilized 70 million people, and World War 2 involved more than 100 million. Global trade and transit networks became interconnected like never before, and the sophistication of our engineering and scientific endeavors reached new heights – we even started to seriously consider visiting other planets. And it was this explosion of complexity, bureaucracy, and ultimately data, that drove an increasing need for automation and computation. Soon those cabinet-sized electro-mechanical computers grew into room-sized behemoths that were expensive to maintain and prone to errors. And it was these machines that would set the stage for future innovation.

00:55
INTRO

01:04
One of the largest electro-mechanical computers built was the Harvard Mark I, completed in 1944 by IBM for the Allies during World War 2. It contained 765,000 components, three million connections, and five hundred miles of wire. To keep its internal mechanics synchronized, it used a 50-foot shaft running right through the machine driven by a five horsepower motor. One of the earliest uses for this technology was running simulations for the Manhattan Project. The brains of these huge electro-mechanical beasts were relays: electrically-controlled mechanical switches. In a relay, there is a control wire that determines whether a circuit is opened or closed. The control wire connects to a coil of wire inside the relay. When current flows through the coil, an electromagnetic field is created, which in turn, attracts a metal arm inside the relay, snapping it shut and completing the circuit. You can think of a relay like a water faucet. The control wire is like the faucet handle. Open the faucet, and water flows through the pipe. Close the faucet, and the flow of water stops.

02:02
Relays are doing the same thing, just with electrons instead of water. The controlled circuit can then connect to other circuits, or to something like a motor, which might increment a count on a gear, like in Hollerith's tabulating machine we talked about last episode. Unfortunately, the mechanical arm inside of a relay *has mass*, and therefore can’t move instantly between opened and closed states. A good relay in the 1940’s might be able to flick back and forth fifty times in a second. That might seem pretty fast, but it’s not fast enough to be useful at solving large, complex problems. The Harvard Mark I could do 3 additions or subtractions per second; multiplications took 6 seconds, and divisions took 15. And more complex operations, like a trigonometric function, could take over a minute.

02:44
In addition to slow switching speed, another limitation was wear and tear. Anything mechanical that moves will wear over time. Some things break entirely, and other things start getting sticky, slow, and just plain unreliable. And as the number of relays increases, the probability of a failure increases too. The Harvard Mark I had roughly 3500 relays. Even if you assume a relay has an operational life of 10 years, this would mean you’d have to replace, on average, one faulty relay every day! That’s a big problem when you are in the middle of running some important, multi-day calculation.

03:14
And that’s not all engineers had to contend with. These huge, dark, and warm machines also attracted insects. In September 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay. Grace Hopper, who we’ll talk more about in a later episode, noted, “From then on, when anything went wrong with a computer, we said it had bugs in it.” And that’s where we get the term computer bug. It was clear that a faster, more reliable alternative to electro-mechanical relays was needed if computing was going to advance further, and fortunately that alternative already existed!

03:43
In 1904, English physicist John Ambrose Fleming developed a new electrical component called a thermionic valve, which housed two electrodes inside an airtight glass bulb - this was the first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons – a process called thermionic emission. The other electrode could then attract these electrons to create the flow of our electric faucet, but only if it was positively charged - if it had a negative or neutral charge, the electrons would no longer be attracted across the vacuum, so no current would flow.

04:15
An electronic component that permits the one-way flow of current is called a diode, but what was really needed was a switch to help turn this flow on and off. Luckily, shortly after, in 1906, American inventor Lee de Forest added a third “control” electrode that sits between the two electrodes in Fleming’s design. By applying a positive charge to the control electrode, it would permit the flow of electrons as before. But if the control electrode was given a negative charge, it would prevent the flow of electrons. So by manipulating the control wire, one could open or close the circuit. It’s pretty much the same thing as a relay - but importantly, vacuum tubes have no moving parts. This meant there was less wear, and more importantly, they could switch thousands of times per second.

04:53
These triode vacuum tubes would become the basis of radio, long distance telephone, and many other electronic devices for nearly a half century. I should note here that vacuum tubes weren’t perfect - they’re kind of fragile, and can burn out like light bulbs - but they were a big improvement over mechanical relays. Also, initially vacuum tubes were expensive – a radio set often used just one, but a computer might require hundreds or thousands of electrical switches. But by the 1940s, their cost and reliability had improved to the point where they became feasible for use in computers... at least by people with deep pockets, like governments. This marked the shift from electro-mechanical computing to electronic computing. Let’s go to the Thought Bubble.

05:35
The first large-scale use of vacuum tubes for computing was the Colossus Mk 1, designed by engineer Tommy Flowers and completed in December of 1943. The Colossus was installed at Bletchley Park, in the UK, and helped to decrypt Nazi communications. This may sound familiar because two years prior Alan Turing, often called the father of computer science, had created an electromechanical device, also at Bletchley Park, called the Bombe. It was an electromechanical machine designed to break Nazi Enigma codes, but the Bombe wasn’t technically a computer, and we’ll get to Alan Turing’s contributions later. Anyway, the first version of Colossus contained 1,600 vacuum tubes, and in total, ten Colossi were built to help with code-breaking. Colossus is regarded as the first programmable, electronic computer. Programming was done by plugging hundreds of wires into plugboards, sort of like old school telephone switchboards, in order to set up the computer to perform the right operations. So while “programmable”, it still had to be configured to perform a specific computation.

06:35
Enter the Electronic Numerical Integrator and Calculator – or ENIAC – completed a few years later in 1946 at the University of Pennsylvania. Designed by John Mauchly and J. Presper Eckert, this was the world's first truly general purpose, programmable, electronic computer. ENIAC could perform 5000 ten-digit additions or subtractions per second, many, many times faster than any machine that came before it. It was operational for ten years, and is estimated to have done more arithmetic than the entire human race up to that point. But with that many vacuum tubes, failures were common, and ENIAC was generally only operational for about half a day at a time before breaking down.

07:14
Thanks Thought Bubble. By the 1950’s, even vacuum-tube-based computing was reaching its limits. The US Air Force’s AN/FSQ-7 computer, which was completed in 1955, was part of the “SAGE” air defense computer system we’ll talk more about in a later episode. To reduce cost and size, as well as improve reliability and speed, a radical new electronic switch would be needed. In 1947, Bell Laboratory scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and with it, a whole new era of computing was born!

07:45
The physics behind transistors is pretty complex, relying on quantum mechanics, so we’re going to stick to the basics. A transistor is just like a relay or vacuum tube - it’s a switch that can be opened or closed by applying electrical power via a control wire. Typically, transistors have two electrodes separated by a material that sometimes can conduct electricity, and other times resist it – a semiconductor. In this case, the control wire attaches to a “gate” electrode. By changing the electrical charge of the gate, the conductivity of the semiconducting material can be manipulated, allowing current to flow or be stopped – like the water faucet analogy we discussed earlier. Even the very first transistor at Bell Labs showed tremendous promise – it could switch between on and off states 10,000 times per second. Further, unlike vacuum tubes made of glass and with carefully suspended, fragile components, transistors were made of solid material, known as solid state components. Almost immediately, transistors could be made smaller than the smallest possible relays or vacuum tubes.

08:43
This led to dramatically smaller and cheaper computers, like the IBM 608, released in 1957 – the first fully transistor-powered, commercially-available computer. It contained 3000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, every second. IBM soon transitioned all of its computing products to transistors, bringing transistor-based computers into offices, and eventually, homes. Today, computers use transistors that are smaller than 50 nanometers in size – for reference, a sheet of paper is roughly 100,000 nanometers thick. And they’re not only incredibly small, they’re super fast – they can switch states millions of times per second, and can run for decades.

09:27
A lot of this transistor and semiconductor development happened in the Santa Clara Valley, between San Francisco and San Jose, California. As the most common material used to create semiconductors is silicon, this region soon became known as Silicon Valley. Even William Shockley moved there, founding Shockley Semiconductor, whose employees later founded Fairchild Semiconductors, whose employees later founded Intel - the world’s largest computer chip maker today.

09:52
Ok, so we’ve gone from relays to vacuum tubes to transistors. We can turn electricity on and off really, really, really fast. But how do we get from transistors to actually computing something, especially if we don’t have motors and gears? That’s what we’re going to cover over the next few episodes. Thanks for watching. See you next week.


Related Tags
Computing History, Electro-Mechanical, Electronic Computing, Vacuum Tubes, Transistors, Harvard Mark I, ENIAC, Colossus Mk 1, Alan Turing, Silicon Valley, Tech Innovation