Operating Systems: Crash Course Computer Science #18

CrashCourse
28 Jun 2017 · 13:35

Summary

TLDR: This video covers the history of operating systems (OS). In the 1940s and early 1950s, computers ran one program at a time, and programmers had to feed programs into the machine by hand. As computer speed grew exponentially, manually loading programs became slow and inefficient, and operating systems were born. Operating systems are a special class of programs with privileged access to the hardware, able to run and manage other programs. They emerged in the 1950s as computers became more widespread and more powerful, and early OSes cut down on manual program loading through batch processing. Operating systems also provided a software abstraction layer, simplifying programmers' interaction with hardware devices through APIs and device drivers. By the end of the 1950s, computers had become so fast that processors often sat idle waiting for slow I/O devices such as printers. To make full use of this expensive resource, operating systems introduced multitasking, allowing several programs to run on a single CPU at the same time. In addition, by virtualizing memory locations and providing memory protection, operating systems gave each program its own block of memory, improving stability and security. In the 1970s, as computers got faster and cheaper, time-sharing systems appeared that let multiple users interact with one machine simultaneously. Multics and Unix were representative early time-sharing operating systems, and the latter became widely popular for its simplicity and portability. In the 1980s, the arrival of personal computers brought simpler operating systems such as MS-DOS and early Windows, even though they lacked multitasking and memory protection. Modern operating systems such as Mac OS X, Windows 10, Linux, iOS, and Android all offer multitasking, virtual memory, and memory protection, letting users run many programs at once.

Takeaways

  • 📚 Computers in the 1940s and early 1950s ran one program at a time, and programmers had to feed programs into the machine by hand.
  • 🚀 As computer speed grew exponentially, manual program loading became inefficient, so operating systems were created to let computers operate themselves.
  • 💾 Operating systems (OSes) are a special class of programs with privileged access to the hardware, able to run and manage other programs.
  • 🔁 In the 1950s, computers adopted batch processing, automatically running one program after another and reducing waiting time.
  • 📈 As computers became faster and cheaper, operating systems simplified programmers' interaction with hardware peripherals through a software abstraction called device drivers.
  • 🔄 By the end of the 1950s, faster computers often sat idle waiting for slow I/O devices, which created the need for multitasking.
  • 🤖 The University of Manchester's Atlas supercomputer was one of the first systems able to run several programs at once through clever scheduling.
  • 🧠 To support multitasking, operating systems introduced memory allocation and management; each program gets its own block of memory, enabling dynamic memory allocation.
  • 🔗 Virtual memory lets programs use seemingly contiguous memory addresses, while the OS and CPU handle the virtual-to-physical remapping automatically.
  • 🛡️ Memory protection ensures that a faulty program cannot corrupt other programs' memory, improving system stability and security.
  • 📊 By the 1970s, computers were fast and cheap enough for many users to interact with a single machine at once, which required operating systems to implement time-sharing.
  • 🌟 Unix, prized for its simplicity and portability, became one of the most popular operating systems of the 1970s and 80s and deeply influenced OS development.
  • 🏠 The rise of personal computers demanded simpler operating systems, such as Microsoft's MS-DOS, even though it lacked multitasking and memory protection.
  • 🔄 Modern operating systems such as Mac OS X, Windows 10, Linux, iOS, and Android all provide multitasking, virtual memory, and memory protection.

Q & A

  • How did computers in the 1940s and early 1950s run programs?

    - They ran one program at a time. A programmer would write a program at their desk, for example on punch cards, carry it to the room housing a room-sized computer, and hand it to a dedicated computer operator. The operator would feed the program into the computer when it was next available; the computer would run it, produce some output, and halt.

  • Why were operating systems needed?

    - As computer speed grew exponentially, having humans insert programs into readers started taking longer than running the programs themselves. We needed a way for computers to operate themselves, and that is why operating systems were born.

  • How do operating systems simplify programmers' interfacing with hardware peripherals?

    - Operating systems act as intermediaries between software programs and hardware peripherals, providing a software abstraction through APIs (application programming interfaces) called device drivers. These let programmers talk to common input and output hardware, or I/O for short, using standardized mechanisms.
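
A minimal Python sketch of the idea (the printer classes and function names here are invented for illustration, not from the video): programs call one standardized function, and each driver hides its device's low-level details.

```python
# Hypothetical device-driver abstraction. A program only ever calls the
# standardized os_print() mechanism; the driver knows each printer's quirks.

class DeviceDriver:
    """Standardized interface every driver must implement."""
    def write(self, data: str) -> None:
        raise NotImplementedError

class EpsonDriver(DeviceDriver):
    def write(self, data: str) -> None:
        # A real driver would emit model-specific control codes here.
        print(f"[Epson protocol] {data}")

class IBMDriver(DeviceDriver):
    def write(self, data: str) -> None:
        print(f"[IBM protocol] {data}")

def os_print(driver: DeviceDriver, text: str) -> None:
    # The OS-provided call: same API regardless of which printer is attached.
    driver.write(text)

os_print(EpsonDriver(), "highscore: 9000")
os_print(IBMDriver(), "highscore: 9000")
```

Swapping printers changes only which driver object is passed in, never the program's code, which is exactly the point of the abstraction.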

  • What is batch processing?

    - Batch processing lets a computer accept a whole batch of programs at once; when it finishes one program, it automatically starts the next without human intervention, reducing downtime.
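
The idea can be sketched in a few lines of Python (the job names and return values are hypothetical): jobs wait in a queue, and the next one starts the moment the previous one finishes.

```python
from collections import deque

def run_batch(jobs):
    """Run queued jobs back to back, with no operator in the loop."""
    queue = deque(jobs)
    results = []
    while queue:
        job_name, job_fn = queue.popleft()
        # As soon as one job returns, the next starts automatically.
        results.append((job_name, job_fn()))
    return results

batch = [
    ("payroll", lambda: 42),
    ("census",  lambda: 7),
]
print(run_batch(batch))  # [('payroll', 42), ('census', 7)]
```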

  • How is multitasking achieved?

    - Multitasking is the operating system's ability to run several programs on a single CPU at the same time. Through clever scheduling, when one program is waiting for an I/O operation to complete, the CPU can switch to another program that is ready and waiting to run.
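
Here is a toy Python sketch of that scheduling idea (program names and step counts are illustrative): each "program" is a generator that yields when it blocks on I/O, and the scheduler hands the CPU to the next ready program.

```python
from collections import deque

def program(name, steps):
    """A fake program that 'blocks on I/O' after each step of work."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # pretend we just issued a slow I/O request; give up the CPU

def scheduler(programs):
    ready = deque(programs)
    while ready:
        prog = ready.popleft()
        try:
            next(prog)          # run until the program blocks
            ready.append(prog)  # it will be ready to run again later
        except StopIteration:
            pass                # program finished; drop it

scheduler([program("game", 2), program("printer", 3)])
```

Running this interleaves the two programs' steps, the same way Atlas could keep one program computing while another printed.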

  • How does virtual memory work?

    - Virtual memory is how the operating system virtualizes memory locations: programs can assume their memory always starts at address 0, keeping things simple and consistent. The actual physical memory location is hidden and abstracted by the operating system. When a program requests a memory location, the OS and CPU handle the virtual-to-physical remapping automatically.
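
The episode's own example, Program B seeing addresses 0 through 999 while really occupying 1000 through 1999, can be sketched in Python. A real OS and CPU use page tables; this single-base-address scheme is a deliberate simplification.

```python
# Simplified virtual-to-physical remapping, mirroring the video's example.

physical_memory = [0] * 10000

# Each program's block starts at a base physical address.
base = {"A": 0, "B": 1000}

def translate(program, virtual_address):
    """Map a program's virtual address to its physical location."""
    return base[program] + virtual_address

def read(program, virtual_address):
    return physical_memory[translate(program, virtual_address)]

physical_memory[1042] = 99
# Program B asks for address 42 but really ends up reading address 1042.
print(read("B", 42))  # 99
```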

  • What is Memory Protection?

    - Memory protection gives each program its own block of memory, so if a program goes awry and starts writing gobbledygook, it can only trash its own memory, not that of other programs. This is also very useful for guarding against malicious software, such as viruses.
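
A rough Python sketch of the bounds check involved (the block boundaries are the episode's example values): every access is checked against the program's allocated block, and anything outside it is refused.

```python
physical_memory = [0] * 10000
blocks = {"A": (0, 999), "B": (1000, 1999)}  # each program's allocated range

class MemoryFault(Exception):
    pass

def write(program, virtual_address, value):
    lo, hi = blocks[program]
    physical = lo + virtual_address
    if not lo <= physical <= hi:
        # Outside the program's own block: refuse the access.
        raise MemoryFault(f"{program} tried to access {physical}")
    physical_memory[physical] = value

write("A", 5, 123)       # fine: lands inside A's block
try:
    write("A", 1500, 0)  # would land inside B's block: blocked
except MemoryFault as e:
    print("Protected:", e)
```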

  • How was the Unix operating system born?

    - Unix was created by Dennis Ritchie and Ken Thompson, who had worked on the Multics project and wanted to build a leaner, more efficient operating system. Unix splits the OS into two parts: a kernel containing the core functionality (such as memory management, multitasking, and I/O handling), plus useful tools that come bundled with, but are not part of, the kernel, such as programs and libraries.

  • How did personal computer operating systems differ from those of the mainframes used by universities, corporations, and governments?

    - Personal computer operating systems were much simpler than those of the mainframes used by universities, corporations, and governments. Microsoft's Disk Operating System (MS-DOS), for example, was just 160 kilobytes, suited to the simple hardware of personal computers, though it lacked multitasking and protected memory.

  • What are the modern operating systems?

    - Modern operating systems include Mac OS X, Windows 10, Linux, iOS, and Android. Even though most of us use our computers alone, these OSes all provide multitasking along with virtual and protected memory, so they can run many programs at once.

  • Why did early operating systems such as MS-DOS lack multitasking and protected memory?

    - Early operating systems lacked multitasking and protected memory because personal computer hardware was relatively simple, and the OS was simplified to match. This allowed the OS to fit on cheaper hardware; although programs could and would regularly crash the system, simply rebooting the computer was an acceptable tradeoff for users at the time.

  • Why did early versions of Windows show the "blue screen of death"?

    - Early versions of Windows lacked strong memory protection. When a program misbehaved, it could take down the entire operating system, shown as the "blue screen of death", a sign that a program had crashed badly enough to stop the whole OS.

Outlines

00:00

🖥️ The Birth of Operating Systems and Batch Processing

In the 1940s and early 1950s, computers ran one program at a time. Programmers wrote programs on cards and handed them to a computer operator, who fed the program in when the machine was next free. As computer speed grew exponentially, manual program loading became the bottleneck, and operating systems were born to let computers operate themselves. Operating systems are a special class of programs with privileged access to the hardware, able to run and manage other programs. They are typically the first thing to run when a computer starts, and they launch all subsequent programs. Operating systems got their start in the 1950s as computers became more widespread and more powerful. The earliest OSes automated the mundane task of loading programs by hand: through batch processing, computers could be handed batches of programs and run them back to back, reducing waiting time.

05:04

🔩 Hardware Abstraction and Multitasking

As computers spread, their configurations diverged, a real challenge for programmers who had to write specific code for every kind of hardware device. Operating systems stepped in as intermediaries between software and hardware, simplifying this through device driver APIs. By the end of the 1950s, faster computers often sat idle waiting for I/O operations such as printing. To make the most of this expensive resource, operating systems like the Atlas Supervisor were developed that not only loaded programs automatically but also ran several programs at once on a single CPU through clever scheduling. This OS-enabled ability to run programs simultaneously is called multitasking. To share a single CPU's time among multiple programs, each program also needs its own block of memory, which led to the concept of dynamic memory allocation. Operating systems further virtualized memory locations, so programs could assume their memory always starts at address 0, hiding the actual physical locations.

10:08

🛡️ Memory Protection and Time-Sharing

The operating system's memory virtualization not only simplified memory management for programs but also enabled memory protection, isolating programs' memory spaces so that one program's faults could not affect others. Atlas was the first computer and OS to support virtual and protected memory. By the 1970s, computers were fast and cheap enough for institutions such as universities to buy them for students to use. Operating systems now had to handle not just multiple programs but multiple users. Time-sharing gave each user only a small fraction of the computer's resources, but because computers are so fast, that was enough to get plenty done. Multics was one of the most influential early time-sharing operating systems, designed from the outset to be secure. It was so complex, however, that it never became a commercial success. That led Dennis Ritchie and Ken Thompson to develop Unix, which splits the OS into a kernel and a set of tools, deliberately leaving out features such as error recovery to build a compact, lean kernel. Unix's simplicity let it run on many kinds of hardware, and it quickly became popular inside Bell Labs.

📚 Unix's Rise and the Personal Computer

Soon after its release, Unix gained compilers for several programming languages and a word processor, becoming one of the most popular operating systems of the 1970s and 1980s. Meanwhile, by the early 1980s, the cost of a personal or home computer had fallen to a level ordinary people could afford. These machines were far simpler than the mainframes used by universities, corporations, and governments, so their operating systems had to be equally simple. Microsoft's MS-DOS, for example, was the most popular OS on early personal computers despite lacking multitasking and protected memory. Early versions of Windows also lacked strong memory protection; a misbehaving program could crash the system, producing the "blue screen of death". Fortunately, newer versions of Windows have better protections and usually don't crash that often. The operating systems modern computers run, such as Mac OS X, Windows 10, Linux, iOS, and Android, all provide multitasking plus virtual and protected memory, even when the computer is used by just one person, so they can run many programs at once. All of this rests on decades of research and development on operating systems, and of course on the memory needed to store those programs.

Keywords

💡Operating Systems

An operating system is software that manages a computer's hardware resources and provides the environment through which users interact with the machine. It lets users run programs while controlling memory, the processor, and other hardware in the background. In the video, operating systems arose to solve the inefficiency of manually loading programs on early computers; they let computers run multiple programs automatically and manage their execution, a major milestone in computing history.

💡Batch Processing

Batch processing is a way of handling jobs in which the computer accepts several programs at once and automatically starts the next as soon as one finishes, with no human intervention. This reduces the idle time spent waiting for the next program to be loaded and improves utilization. In the video, batch processing is one of the main functions of early operating systems, marking the shift from manual to automated operation.

💡Device Drivers

Device drivers are the part of the operating system that provides the interface between software and hardware. Through device drivers, programmers can talk to input and output hardware using standardized mechanisms without knowing the hardware's details. In the video, device drivers greatly simplified programmers' work: instead of writing low-level interface code for every device, they could call functions provided by the operating system.

💡Multitasking

Multitasking is the operating system's ability to run several programs at the same time. It lets the CPU be used more efficiently: while one program waits for an I/O operation to finish, the CPU can switch to another. In the video, the Atlas operating system achieved multitasking through clever scheduling, letting the computer compute, print, and read in data simultaneously.

💡Virtual Memory

Virtual memory is a memory-management technique provided by the operating system that lets programs use an address space larger than physical memory. With virtual memory, every program believes its memory starts at address 0, while the operating system maps those virtual addresses to actual physical locations. In the video, virtual memory simplified memory management, enabled dynamic memory allocation, and let programs use what appear to be contiguous blocks of memory.

💡Memory Protection

Memory protection is a safety feature that ensures each program runs within its own memory space without interfering with others. It works by allocating each program a separate block of memory, so even if one program goes wrong it cannot affect other programs or the stability of the operating system. In the video, memory protection is one of the important features operating systems provide, improving stability and security.

💡Time-Sharing

Time-sharing is an operating system feature that lets multiple users or programs share a computer's resources at the same time. The OS divides processor time into small slices, and each user or program takes turns using them, creating the illusion of simultaneous execution. In the video, time-sharing developed as computers became faster and cheaper, allowing a single computer to serve many users.
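
A quick back-of-the-envelope Python sketch of why a small slice is still useful; the instruction rate is an assumed illustrative figure, not from the video.

```python
# Even 1/50th of a fast machine is a lot of computing per second.
instructions_per_second = 1_000_000   # assumed rate for a fast mainframe
users = 50                            # terminals attached, as in the episode

share = instructions_per_second // users
print(f"Each of {users} users still gets {share:,} instructions/second")
```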

💡Unix

Unix is a widely used operating system known for its simplicity and portability. Its design philosophy splits the OS into two main parts: the core functionality (the kernel) and a collection of useful tools (user space). Unix's design deeply influenced later operating systems. In the video, Unix was created to address the complexity and over-engineering of earlier systems like Multics; its designers deliberately left out features such as error recovery to keep the system lean.

💡MS-DOS

MS-DOS is the disk operating system Microsoft released in 1981, one of the most popular operating systems on early personal computers. MS-DOS was extremely simple, taking up only 160 kilobytes of storage, so it fit on a single disk. Although it lacked multitasking and memory protection, its simplicity made personal computers practical and easy to use. In the video, the release of MS-DOS marks the beginning of personal computer operating systems.

💡Windows

Windows is a family of operating systems developed by Microsoft that came to dominate the OS market in the 1990s. Early versions of Windows lacked strong memory protection, and a program error could crash the entire system, producing the "blue screen of death". Over time, Windows improved, adding better memory protection and other safety features. In the video, the evolution of Windows shows the progress and maturation of personal computer operating systems.

💡Modern Operating Systems

Modern operating systems are those in use today, such as Mac OS X, Windows 10, Linux, iOS, and Android. They all provide multitasking, virtual memory, and protected memory, letting users run many programs at once. In the video, these advances allow complex multitasking, such as browsing the web, editing photos, playing music, and syncing files simultaneously, all of which depend on the operating system's advanced features.

Highlights

Computers in the 1940s and early 1950s ran one program at a time.

Programmers fed programs into the computer by hand, and running a program often took hours or even weeks.

As computers grew exponentially faster, manually loading programs was no longer efficient, and operating systems were born.

Operating systems (OSes) are a special class of programs with privileged access to the hardware, able to run and manage other programs.

Operating systems got their start in the 1950s, as computers became more widespread and more powerful.

Batch processing let computers run programs in batches, reducing waiting time.

As computers became widespread and cheaper, software sharing became possible, but it also brought programming challenges.

Operating systems simplified programmers' interaction with hardware peripherals by providing a software abstraction layer called device drivers.

By the end of the 1950s, fast computers often sat idle waiting for slow I/O devices such as printers.

The University of Manchester's Atlas supercomputer and its operating system, the Atlas Supervisor, achieved multitasking through clever scheduling.

Multitasking lets multiple programs share a single CPU while performing different tasks.

To run several programs at once on a single machine, each program needs its own block of memory, handled through memory allocation.

Virtual memory lets programs assume their memory always starts at address 0, hiding the actual physical locations.

Memory protection prevents a faulty program from corrupting other programs' memory.

By the 1970s, computers were fast and cheap enough for educational institutions to buy them for student use.

Time-sharing systems let multiple users interactively access a computer's resources at the same time.

Multics was the first major operating system designed to be secure from the outset, but it was too complex to be a commercial success.

Unix, developed by Dennis Ritchie and Ken Thompson, splits the operating system into a kernel and a set of tools.

Unix's simplicity let it run on cheaper and more diverse hardware, and it quickly became popular at Bell Labs.

The arrival of personal and home computers demanded equally simple operating systems, such as Microsoft's MS-DOS.

Modern operating systems such as Mac OS X, Windows 10, Linux, iOS, and Android all offer multitasking plus virtual and protected memory.

Transcripts

play00:03

This episode is supported by Hover.

play00:06

Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science!

play00:09

Computers in the 1940s and early 50s ran one program at a time.

play00:12

A programmer would write one at their desk, for example, on punch cards.

play00:15

Then, they’d carry it to a room containing a room-sized computer, and hand it to a dedicated

play00:19

computer operator.

play00:20

That person would then feed the program into the computer when it was next available.

play00:24

The computer would run it, spit out some output, and halt.

play00:27

This very manual process worked OK back when computers were slow, and running a program

play00:31

often took hours, days or even weeks.

play00:33

But, as we discussed last episode, computers became faster... and faster... and faster

play00:38

– exponentially so!

play00:39

Pretty soon, having humans run around and inserting programs into readers was taking

play00:43

longer than running the actual programs themselves.

play00:46

We needed a way for computers to operate themselves, and so, operating systems were born.

play00:50

INTRO

play00:59

Operating systems, or OS’es for short, are just programs.

play01:03

But, special privileges on the hardware let them run and manage other programs.

play01:07

They’re typically the first one to start when a computer is turned on, and all subsequent

play01:10

programs are launched by the OS.

play01:12

They got their start in the 1950s, as computers became more widespread and more powerful.

play01:16

The very first OSes augmented the mundane, manual task of loading programs by hand.

play01:21

Instead of being given one program at a time, computers could be given batches.

play01:25

When the computer was done with one, it would automatically and near-instantly start the next.

play01:30

There was no downtime while someone scurried around an office to find the next program

play01:33

to run.

play01:34

This was called batch processing.

play01:36

While computers got faster, they also got cheaper.

play01:38

So, they were popping up all over the world, especially in universities and government

play01:42

offices.

play01:43

Soon, people started sharing software.

play01:45

But there was a problem…

play01:46

In the era of one-off computers, like the Harvard Mark 1 or ENIAC, programmers only

play01:51

had to write code for that one single machine.

play01:53

The processor, punch card readers, and printers were known and unchanging.

play01:58

But as computers became more widespread, their configurations were not always identical,

play02:02

like computers might have the same CPU, but not the same printer.

play02:05

This was a huge pain for programmers.

play02:07

Not only did they have to worry about writing their program, but also how to interface with

play02:11

each and every model of printer, and all devices connected to a computer, what are called peripherals.

play02:16

Interfacing with early peripherals was very low level, requiring programmers to know intimate

play02:20

hardware details about each device.

play02:23

On top of that, programmers rarely had access to every model of a peripheral to test their code on.

play02:27

So, they had to write code as best they could, often just by reading manuals, and hope it

play02:32

worked when shared.

play02:33

Things weren’t exactly plug-and-play back then… more plug-and-pray.

play02:36

This was clearly terrible, so to make it easier for programmers, Operating Systems stepped

play02:40

in as intermediaries between software programs and hardware peripherals.

play02:45

More specifically, they provided a software abstraction, through APIs, called device drivers.

play02:50

These allow programmers to talk to common input and output hardware, or I/O for short,

play02:54

using standardized mechanisms.

play02:56

For example, programmers could call a function like “print highscore”, and the OS would

play03:00

do the heavy lifting to get it onto paper.

play03:02

By the end of the 1950s, computers had gotten so fast, they were often idle waiting for

play03:06

slow mechanical things, like printers and punch card readers.

play03:09

While programs were blocked on I/O, the expensive processor was just chillin’... not like

play03:13

a villain… you know, just relaxing.

play03:15

In the late 50’s, the University of Manchester, in the UK, started work on a supercomputer

play03:19

called Atlas, one of the first in the world.

play03:21

They knew it was going to be wicked fast, so they needed a way to make maximal use of

play03:25

the expensive machine.

play03:26

Their solution was a program called the Atlas Supervisor, finished in 1962.

play03:31

This operating system not only loaded programs automatically, like earlier batch systems,

play03:35

but could also run several at the same time on its single CPU.

play03:39

It did this through clever scheduling.

play03:40

Let’s say we have a game program running on Atlas, and we call the function “print

play03:44

highscore” which instructs Atlas to print the value of a variable named “highscore”

play03:48

onto paper to show our friends that we’re the ultimate champion of virtual tiddlywinks.

play03:52

That function call is going to take a while, the equivalent of thousands of clock cycles,

play03:57

because mechanical printers are slow in comparison to electronic CPUs.

play04:01

So instead of waiting for the I/O to finish, Atlas instead puts our program to sleep, then

play04:05

selects and runs another program that’s waiting and ready to run.

play04:08

Eventually, the printer will report back to Atlas that it finished printing the value

play04:12

of “highscore”.

play04:13

Atlas then marks our program as ready to go, and at some point, it will be scheduled to

play04:16

run again on the CPU, and continue onto the next line of code following the print statement.

play04:21

In this way, Atlas could have one program running calculations on the CPU, while another

play04:25

was printing out data, and yet another reading in data from a punch tape.

play04:29

Atlas’ engineers doubled down on this idea, and outfitted their computer with 4 paper

play04:34

tape readers, 4 paper tape punches, and up to 8 magnetic tape drives.

play04:38

This allowed many programs to be in progress all at once, sharing time on a single CPU.

play04:43

This ability, enabled by the Operating System, is called multitasking.

play04:46

There’s one big catch to having many programs running simultaneously on a single computer, though.

play04:51

Each one is going to need some memory, and we can’t lose that program’s data when

play04:55

we switch to another program.

play04:56

The solution is to allocate each program its own block of memory.

play04:59

So, for example, let’s say a computer has 10,000 memory locations in total.

play05:04

Program A might get allocated memory addresses 0 through 999, and Program B might get 1000

play05:10

through 1999, and so on.

play05:13

If a program asks for more memory, the operating system decides if it can grant that request,

play05:17

and if so, what memory block to allocate next.

play05:20

This flexibility is great, but introduces a quirk.

play05:23

It means that Program A could end up being allocated non-sequential blocks of memory,

play05:27

in say addresses 0 through 999, and 2000 through 2999.

play05:33

And this is just a simple example - a real program might be allocated dozens of blocks

play05:37

scattered all over memory.

play05:38

As you might imagine, this would get really confusing for programmers to keep track of.

play05:42

Maybe there’s a long list of sales data in memory that a program has to total up at

play05:46

the end of the day, but this list is stored across a bunch of different blocks of memory.

play05:50

To hide this complexity, Operating Systems virtualize memory locations.

play05:54

With Virtual Memory, programs can assume their memory always starts at address 0, keeping

play05:58

things simple and consistent.

play06:00

However, the actual, physical location in computer memory is hidden and abstracted by

play06:04

the operating system.

play06:06

Just a new level of abstraction.

play06:13

Let’s take our example Program B, which has been allocated a block of memory from

play06:17

address 1000 to 1999.

play06:21

As far as Program B can tell, this appears to be a block from 0 to 999.

play06:25

The OS and CPU handle the virtual-to-physical memory remapping automatically.

play06:29

So, if Program B requests memory location 42, it really ends up reading address 1042.

play06:36

This virtualization of memory addresses is even more useful for Program A, which in our

play06:40

example, has been allocated two blocks of memory that are separated from one another.

play06:44

This too is invisible to Program A.

play06:46

As far as it can tell, it’s been allocated a continuous block of 2000 addresses.

play06:51

When Program A reads memory address 999, that does coincidentally map to physical memory

play06:57

address 999.

play06:59

But if Program A reads the very next value in memory, at address 1000, that gets mapped

play07:03

behind the scenes to physical memory address 2000.

play07:06

This mechanism allows programs to have flexible memory sizes, called dynamic memory allocation,

play07:11

that appear to be continuous to them.

play07:13

It simplifies everything and offers tremendous flexibility to the Operating System in running

play07:18

multiple programs simultaneously.

play07:20

Another upside of allocating each program its own memory, is that they’re better isolated

play07:23

from one another.

play07:24

So, if a buggy program goes awry, and starts writing gobbledygook, it can only trash its

play07:28

own memory, not that of other programs.

play07:31

This feature is called Memory Protection.

play07:33

This is also really useful in protecting against malicious software, like viruses.

play07:37

For example, we generally don’t want other programs to have the ability to read or modify

play07:41

the memory of, let's say, our email. With that kind of access, malware could send emails

play07:45

on your behalf and maybe steal personal information.

play07:48

Not good!

play07:49

Atlas had both virtual and protected memory.

play07:51

It was the first computer and OS to support these features!

play07:54

By the 1970s, computers were sufficiently fast and cheap.

play07:58

Institutions like a university could buy a computer and let students use it.

play08:02

It was not only fast enough to run several programs at once, but also give several users

play08:06

simultaneous, interactive access.

play08:09

This was done through a terminal, which is a keyboard and screen that connects to a big

play08:13

computer, but doesn’t contain any processing power itself.

play08:16

A refrigerator-sized computer might have 50 terminals connected to it, allowing up to

play08:21

50 users.

play08:22

Now operating systems had to handle not just multiple programs, but also multiple users.

play08:27

So that no one person could gobble up all of a computer's resources, operating systems

play08:30

were developed that offered time-sharing.

play08:32

With time-sharing each individual user was only allowed to utilize a small fraction of

play08:37

the computer’s processor, memory, and so on.

play08:39

Because computers are so fast, even getting just 1/50th of its resources was enough for

play08:44

individuals to complete many tasks.

play08:45

The most influential of early time-sharing Operating Systems was Multics, or Multiplexed

play08:50

Information and Computing Service, released in 1969.

play08:54

Multics was the first major operating system designed to be secure from the outset.

play08:58

Developers didn’t want mischievous users accessing data they shouldn't, like students

play09:02

attempting to access the final exam on their professor’s account.

play09:05

Features like this meant Multics was really complicated for its time, using around 1 Megabit

play09:10

of memory, which was a lot back then!

play09:12

That might be half of a computer's memory, just to run the OS!

play09:15

Dennis Ritchie, one of the researchers working on Multics, once said:

play09:18

“One of the obvious things that went wrong with Multics as a commercial success was just

play09:23

that it was sort of over-engineered in a sense.

play09:25

There was just too much in it.”

play09:26

This led Dennis, and another Multics researcher,

play09:28

Ken Thompson, to strike out on their own and build a new, lean operating system… called Unix.

play09:33

They wanted to separate the OS into two parts:

play09:36

First was the core functionality of the OS, things like memory management, multitasking,

play09:40

and dealing with I/O, which is called the kernel.

play09:43

The second part was a wide array of useful tools that came bundled with, but not part

play09:47

of the kernel, things like programs and libraries.

play09:49

Building a compact, lean kernel meant intentionally leaving some functionality out.

play09:53

Tom Van Vleck, another Multics developer, recalled:

play09:55

“I remarked to Dennis that easily half the code I was writing in Multics was error recovery

play10:00

code."

play10:01

He said, "We left all that stuff out of Unix.

play10:03

If there's an error, we have this routine called panic, and when it is called, the machine

play10:07

crashes, and you holler down the hall, 'Hey, reboot it.'"”

play10:11

You might have heard of kernel panics. This is where the term came from.

play10:14

It’s literally when the kernel crashes, has no recourse to recover, and so calls a

play10:18

function called “panic”.

play10:19

Originally, all it did was print the word “panic” and then enter

play10:22

an infinite loop.

play10:24

This simplicity meant that Unix could be run on cheaper and more diverse hardware, making

play10:28

it popular inside Bell Labs, where Dennis and Ken worked.

play10:31

As more developers started using Unix to build and run their own programs, the number of

play10:34

contributed tools grew.

play10:36

Soon after its release in 1971, it gained compilers for different programming languages

play10:41

and even a word processor, quickly making it one of the most popular OSes of the 1970s

play10:45

and 80s.

play10:46

At the same time, by the early 1980s, the cost of a basic computer had fallen to the

play10:50

point where individual people could afford one, called a personal or home computer.

play10:55

These were much simpler than the big mainframes found at universities, corporations, and governments.

play10:59

So, their operating systems had to be equally simple.

play11:02

For example, Microsoft’s Disk Operating System, or MS-DOS, was just 160 kilobytes,

play11:07

allowing it to fit, as the name suggests, onto a single disk.

play11:10

First released in 1981, it became the most popular OS for early home computers, even

play11:15

though it lacked multitasking and protected memory.

play11:18

This meant that programs could, and would, regularly crash the system.

play11:22

While annoying, it was an acceptable tradeoff, as users could just turn their own computers

play11:26

off and on again!

play11:27

Even early versions of Windows, first released by Microsoft in 1985 and which dominated the

play11:32

OS scene throughout the 1990s, lacked strong memory protection.

play11:35

When programs misbehaved, you could get the blue screen of death, a sign that a program

play11:40

had crashed so badly that it took down the whole operating system.

play11:43

Luckily, newer versions of Windows have better protections and usually don't crash that often.

play11:48

Today, computers run modern operating systems, like Mac OS X, Windows 10, Linux, iOS and

play11:53

Android.

play11:54

Even though the computers we own are most often used by just a single person (you!), their

play11:58

OSes all have multitasking and virtual and protected memory.

play12:02

So, they can run many programs at once: you can watch YouTube in your web browser, edit

play12:06

a photo in Photoshop, play music in Spotify and sync Dropbox all at the same time.

play12:12

This wouldn’t be possible without those decades of research and development on Operating

play12:16

Systems, and of course the proper memory to store those programs.

play12:19

Which we’ll get to next week.

play12:21

I’d like to thank Hover for sponsoring this episode.

play12:24

Hover is a service that helps you buy and manage domain names.

play12:27

Hover has over 400 domain extensions to end your domain with - including .com and .net.

play12:32

You can also get unique domains that are more professional than a generic address.

play12:35

Here at Crash Course, we'd get the domain name “mongols.fans” but I think you know

play12:40

that already.

play12:41

Once you have your domain, you can set up your custom email to forward to your existing

play12:44

email address -- including Outlook or Gmail or whatever you already use.

play12:48

With Hover, you can get a custom domain and email address for 10% off.

play12:52

Go to Hover.com/crashcourse today to create your custom domain and help support our show!


Related Tags
Operating Systems · Multitasking · Memory Protection · History · Computer Science · Batch Processing · Device Drivers · Virtual Memory · Time-Sharing · Unix · Personal Computers