Operating Systems: Crash Course Computer Science #18

CrashCourse
28 Jun 2017 · 13:35

Summary

TL;DR: This episode explores the evolution of operating systems, from early computers running single programs to modern OSes like Mac OS X and Windows 10. It discusses the advent of batch processing, device drivers, multitasking, virtual memory, and memory protection, highlighting key developments like the Atlas Supervisor and Unix, and the shift to personal computers.

Takeaways

  • 💻 In the 1940s and early 1950s, computers ran one program at a time, with programs manually loaded by operators.
  • 🚀 As computers became faster, the manual process of loading programs became inefficient, leading to the development of operating systems.
  • 🌐 Operating systems (OS) are programs that manage other programs and have special privileges on the hardware, typically starting first when a computer is turned on.
  • 📚 The first operating systems in the 1950s introduced batch processing, allowing computers to run multiple programs automatically without downtime.
  • 🔄 As computers spread, the challenge of writing code for different computer configurations led to the need for operating systems to act as intermediaries, providing device drivers for standardized hardware interaction.
  • 🖨 By the end of the 1950s, computers were often idle due to slow I/O operations. The Atlas Supervisor, developed in the 1960s, introduced multitasking by scheduling multiple programs to run concurrently on a single CPU.
  • 🧩 To manage multiple programs running simultaneously, operating systems allocated each program its own block of memory, introducing the concept of dynamic memory allocation.
  • 🔒 Memory protection was also introduced to isolate programs from each other, preventing a buggy program from affecting others and protecting against malicious software.
  • 🌐 The Atlas computer was the first to support virtual and protected memory, enhancing the flexibility and security of multitasking.
  • 📈 By the 1970s, operating systems like Multics and Unix were developed to support time-sharing, allowing multiple users to interact with a computer simultaneously, with Unix emphasizing a lean kernel and user-contributed tools.
  • 🏠 The advent of personal computers in the 1980s required simpler operating systems like MS-DOS, which, despite lacking multitasking and protected memory, became popular due to its small size and compatibility with early home computers.

Q & A

  • What was the primary limitation of computers in the 1940s and early 1950s in terms of running programs?

    -Computers in the 1940s and early 1950s could only run one program at a time, and the process of loading programs was very manual, involving writing code on punch cards and handing them to a computer operator to be fed into the computer when it was available.

  • Why was the development of operating systems necessary as computers became faster?

    -As computers became faster, the time taken by humans to insert programs manually into the computer was longer than the time taken to run the actual programs. Operating systems were needed to automate the process and allow computers to operate themselves more efficiently.

  • What is batch processing in the context of early operating systems?

    -Batch processing is a method where computers were given batches of programs to run instead of one at a time. Once one program finished, the computer would automatically and quickly start the next one, reducing downtime.

  • How did the advent of operating systems help with the issue of diverse computer configurations?

    -Operating systems acted as intermediaries between software programs and hardware peripherals, providing a software abstraction through APIs called device drivers. This allowed programmers to interact with common input and output hardware using standardized mechanisms, simplifying the process of writing code for different computer configurations.

  • What problem did the Atlas Supervisor solve in terms of computer usage?

    -The Atlas Supervisor, developed at the University of Manchester, allowed for multitasking by running several programs at the same time on a single CPU through clever scheduling. This maximized the use of the computer's resources, preventing idle time while waiting for slow mechanical processes like printing.

  • How does virtual memory help with the complexity of dynamic memory allocation?

    -Virtual memory allows programs to assume their memory always starts at address 0, simplifying the programming process. The operating system abstracts the actual physical memory locations, handling the remapping from virtual to physical addresses automatically.

  • What is memory protection and why is it important?

    -Memory protection is a feature that isolates each program in its own memory space, preventing a buggy program from affecting the memory of other programs. This is crucial for system stability and security, especially against malicious software.

  • What was the significance of the Multics operating system in the development of Unix?

    -Multics was an early time-sharing operating system designed with security in mind. However, it was considered over-engineered and complex. The experience with Multics led Dennis Ritchie and Ken Thompson to create Unix, which focused on a lean kernel and a collection of useful tools separate from the core OS.

  • Why was the simplicity of Unix's design a key factor in its popularity?

    -The simplicity of Unix's design allowed it to run on cheaper and more diverse hardware. It also made it easier for developers to contribute tools and programs, which contributed to its widespread adoption in the 1970s and 80s.

  • How did the early personal computers' operating systems differ from those used in larger mainframes?

    -Early personal computers' operating systems, such as MS-DOS, were much simpler and smaller in size compared to mainframe operating systems. They lacked features like multitasking and protected memory, which meant that programs could easily crash the system, but this was an acceptable tradeoff given the affordability and simplicity of personal computers at the time.

  • What modern operating systems have inherited features from the developments in operating systems over the decades?

    -Modern operating systems like Mac OS X, Windows 10, Linux, iOS, and Android have inherited features such as multitasking, virtual memory, and protected memory from the developments in operating systems over the decades. These features enable them to run many programs simultaneously and provide a stable and secure computing environment.

Outlines

00:00

💻 The Birth of Operating Systems

In the 1940s and 1950s, computers operated one program at a time, with manual processes involving punch cards and dedicated operators. As computers became faster, these manual processes became inefficient. Operating systems (OS) were introduced in the 1950s to automate program loading and manage multiple programs, leading to batch processing. The first OSes allowed computers to run multiple programs in sequence without downtime. They also provided a software abstraction layer through APIs, called device drivers, to simplify interfacing with hardware peripherals. This abstraction helped programmers manage the increasing diversity of computer configurations and peripherals.

05:04

🔄 The Evolution of Multitasking and Memory Management

As computers became more powerful, the need for efficient use of resources led to the development of multitasking in operating systems. The University of Manchester's Atlas Supervisor, completed in 1962, was an early example of an OS that could run multiple programs simultaneously through clever scheduling. This multitasking allowed for better utilization of the CPU while waiting for slow I/O operations. Memory management also evolved with the introduction of virtual memory, which allowed programs to assume a continuous memory space starting at address 0, simplifying programming and enabling dynamic memory allocation. Memory protection was another key feature, isolating programs and preventing them from affecting each other's data, thus enhancing system stability and security.

10:08

🌐 The Emergence of Personal Computing and Modern OS

By the 1970s, computers were accessible enough for institutions to allow multiple users, leading to the development of time-sharing systems. The Multics OS, released in 1969, was an early attempt at a secure, time-sharing system, but its complexity led to its commercial failure. This prompted the development of Unix by Dennis Ritchie and Ken Thompson, which focused on a lean kernel and a wide array of tools. Unix's simplicity and flexibility made it popular, especially in academic and research environments. As personal computing emerged in the 1980s, operating systems like Microsoft's MS-DOS and early versions of Windows were designed for simplicity and affordability, though they initially lacked advanced features like multitasking and protected memory. Modern operating systems, such as Mac OS X, Windows 10, Linux, iOS, and Android, have evolved to include multitasking, virtual memory, and protected memory, enabling users to run multiple applications simultaneously.

Keywords

💡Operating Systems (OS)

Operating Systems, or OS, are software that manage computer hardware resources and provide services for computer programs. They act as an intermediary between the user's program and the computer hardware, allowing for efficient use of the system. In the video, the evolution of OS is discussed, starting from the need for automation in running programs to the development of features like multitasking and memory protection, which are integral to modern computing.

💡Batch Processing

Batch processing is a method of running multiple jobs without human intervention. It was a significant advancement in the early days of computing, allowing computers to run a series of programs one after another automatically, thus reducing downtime. The script mentions batch processing as an early feature of operating systems that improved efficiency by eliminating the need for manual program loading.
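
A minimal sketch in C of the idea, with hypothetical job names: the "operating system" here is just a loop that starts the next job the moment the previous one finishes, with no operator in between.

  /* Batch processing sketch: hypothetical jobs run back to back
   * with no operator intervention between them. */
  #include <stdio.h>

  typedef void (*job_fn)(void);   /* each job is just a function here */

  static void payroll(void)    { printf("running payroll...\n"); }
  static void census(void)     { printf("tabulating census data...\n"); }
  static void trajectory(void) { printf("computing trajectory...\n"); }

  int main(void) {
      job_fn batch[] = { payroll, census, trajectory };
      int n = sizeof batch / sizeof batch[0];

      /* As soon as one job finishes, start the next -- no downtime. */
      for (int i = 0; i < n; i++)
          batch[i]();
      return 0;
  }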

💡Device Drivers

Device drivers are software components that act as an interface between the operating system and hardware devices. They provide a standardized way for programs to interact with hardware, abstracting the low-level details. In the script, device drivers are highlighted as a solution to the problem of interfacing with a variety of computer peripherals, allowing programmers to write more portable code.
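
A minimal sketch in C of the abstraction, with illustrative driver and function names (nothing here is a real API): application code calls one standardized print function, and each driver hides its device-specific details behind that shared interface.

  /* Device-driver sketch: programs see one standardized interface;
   * each printer model supplies its own implementation behind it. */
  #include <stdio.h>

  typedef struct {
      const char *model;
      void (*print_line)(const char *text);  /* standardized entry point */
  } printer_driver;

  /* Two hypothetical drivers with device-specific details hidden inside. */
  static void teletype_print(const char *text)    { printf("[teletype] %s\n", text); }
  static void lineprinter_print(const char *text) { printf("[line printer] %s\n", text); }

  static printer_driver teletype    = { "Teletype Model 33", teletype_print };
  static printer_driver lineprinter = { "IBM 1403",          lineprinter_print };

  /* Application code only calls the abstract interface. */
  static void print_highscore(printer_driver *drv, int score) {
      char buf[64];
      snprintf(buf, sizeof buf, "highscore = %d", score);
      drv->print_line(buf);
  }

  int main(void) {
      print_highscore(&teletype, 9001);      /* same call ...            */
      print_highscore(&lineprinter, 9001);   /* ... different hardware   */
      return 0;
  }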

💡Multitasking

Multitasking is the ability of a computer to run multiple programs or processes simultaneously. The script explains how early operating systems like the Atlas Supervisor enabled multitasking by scheduling programs to run on the CPU while other programs were waiting for I/O operations to complete, thus maximizing the use of the computer's resources.
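
A minimal sketch in C of that scheduling idea, with made-up task names and a pretend clock: while one task is blocked on slow I/O, the scheduler simply picks another task that is ready to run, so the single CPU is never left idle.

  /* Cooperative-multitasking sketch: one CPU, several tasks, and a
   * scheduler that skips tasks blocked on I/O. Names are illustrative. */
  #include <stdio.h>

  enum state { READY, BLOCKED, DONE };

  typedef struct {
      const char *name;
      enum state  st;
      int         steps_left;   /* pretend units of CPU work remaining */
  } task;

  int main(void) {
      task tasks[] = {
          { "game (waiting on printer)", BLOCKED, 2 },
          { "calculation",               READY,   3 },
          { "tape reader job",           READY,   2 },
      };
      int n = sizeof tasks / sizeof tasks[0], remaining = n, tick = 0;

      while (remaining > 0) {
          tick++;
          if (tick == 3) tasks[0].st = READY;    /* printer reports back */

          for (int i = 0; i < n; i++) {          /* pick any READY task  */
              if (tasks[i].st != READY) continue;
              printf("tick %d: running %s\n", tick, tasks[i].name);
              if (--tasks[i].steps_left == 0) { tasks[i].st = DONE; remaining--; }
              break;                             /* one task per tick on one CPU */
          }
      }
      return 0;
  }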

💡Virtual Memory

Virtual memory is a memory management technique that provides the illusion to programs that they have a large, private memory space, while in reality, the physical memory is shared among multiple programs. The script describes how virtual memory simplifies programming by allowing programs to assume a continuous block of memory starting at address 0, abstracting the physical memory allocation managed by the OS.
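
A minimal sketch in C of base-register remapping, mirroring the episode's example: Program B's block really starts at physical address 1000, so its virtual address 42 lands on physical address 1042. The names and sizes are illustrative, not how any real MMU is programmed.

  /* Virtual-memory sketch: translate a program-relative (virtual)
   * address to a physical one by adding the program's base. */
  #include <stdio.h>

  #define PHYS_SIZE 10000
  static int physical_memory[PHYS_SIZE];

  typedef struct {
      const char *name;
      int base;    /* where this program's block really starts */
      int limit;   /* how many addresses the block contains */
  } program;

  /* The OS and CPU would do this translation on every access. */
  static int *translate(program *p, int vaddr) {
      if (vaddr < 0 || vaddr >= p->limit) return NULL;   /* see memory protection */
      return &physical_memory[p->base + vaddr];
  }

  int main(void) {
      program b = { "Program B", 1000, 1000 };   /* physical 1000-1999 */
      *translate(&b, 42) = 7;                    /* really writes address 1042 */
      printf("%s virtual 42 -> physical %d, value %d\n",
             b.name, b.base + 42, physical_memory[1042]);
      return 0;
  }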

💡Memory Protection

Memory protection is a security feature that isolates programs from each other in memory, preventing a program from accessing or modifying the memory of another program. The script mentions memory protection as a feature that prevents buggy programs from affecting others and also protects against malicious software by ensuring that programs cannot access unauthorized memory.
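
A minimal sketch in C of the same idea as a simple bounds check, with illustrative names: accesses outside a program's own block are refused instead of trashing another program's memory. A real machine enforces this in hardware.

  /* Memory-protection sketch: reject accesses outside a program's block. */
  #include <stdio.h>
  #include <stdbool.h>

  typedef struct { const char *name; int base, limit; } program;

  static bool access_ok(const program *p, int vaddr) {
      return vaddr >= 0 && vaddr < p->limit;
  }

  int main(void) {
      program a = { "Program A", 0, 1000 };      /* owns addresses 0-999 */

      printf("%s -> address 500:  %s\n", a.name,
             access_ok(&a, 500) ? "allowed" : "blocked");
      printf("%s -> address 1500: %s\n", a.name,
             access_ok(&a, 1500) ? "allowed" : "blocked");  /* someone else's memory */
      return 0;
  }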

💡Time-Sharing

Time-sharing is a method of operating systems that allows multiple users to share the resources of a single computer almost simultaneously. The script explains how time-sharing systems allocate a small fraction of the computer's resources to each user, enabling interactive access for multiple users on a single machine.
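
A minimal sketch in C, scaled down and with made-up numbers: each user gets a fixed slice of processor time in turn, so many terminals can share one machine.

  /* Time-sharing sketch: a fixed slice per user, round after round. */
  #include <stdio.h>

  #define USERS    5     /* scaled down from 50 for readability */
  #define SLICE_MS 20    /* pretend each user gets 20 ms per round */

  int main(void) {
      for (int round = 0; round < 2; round++)
          for (int user = 0; user < USERS; user++)
              printf("round %d: user %d runs for %d ms\n", round, user, SLICE_MS);
      return 0;
  }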

💡Unix

Unix is a powerful, multi-user and multitasking operating system. It was designed with a simple and modular architecture, distinguishing between the core functionality (kernel) and additional tools. The script discusses Unix as a significant development in operating systems, emphasizing its influence on the design of modern operating systems.

💡MS-DOS

MS-DOS, or Microsoft Disk Operating System, was an operating system for early personal computers. It was simple and compact, fitting on a single disk, but lacked features like multitasking and protected memory. The script mentions MS-DOS as an example of the early operating systems used in personal computers, highlighting the trade-offs made for simplicity and affordability.

💡Personal Computers

Personal Computers, or home computers, are computers designed for individual use, as opposed to large, shared mainframes. The script discusses the advent of personal computers in the 1980s, which made computing accessible to individuals and required operating systems that were simple and cost-effective, like MS-DOS.

💡Kernel Panic

A kernel panic is a fatal error from which the operating system cannot recover, causing it to crash. The term originated from Unix's approach to handling errors, where a function called 'panic' was called, resulting in a system crash. The script uses the term to illustrate the simplicity of Unix's error handling in its early design.
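
A minimal sketch in C of the original behaviour described in the video: print the word "panic", then loop forever until someone reboots the machine. Purely illustrative; a modern kernel's panic handler does far more.

  /* Kernel-panic sketch: print "panic" and spin with no recovery path. */
  #include <stdio.h>

  static void panic(const char *why) {
      printf("panic: %s\n", why);
      for (;;) { /* wait for someone to holler down the hall and reboot */ }
  }

  int main(void) {
      panic("unrecoverable error");   /* never returns */
      return 0;                       /* never reached */
  }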

Highlights

Computers in the 1940s and early 50s ran one program at a time, with a manual process involving punch cards and dedicated computer operators.

As computers became faster, the manual process of loading programs became inefficient, leading to the development of operating systems.

Operating systems, or OSes, are programs that manage other programs and have special privileges on the hardware.

The first operating systems emerged in the 1950s to automate the task of loading programs, introducing batch processing.

With the spread of computers, programmers faced challenges in writing code for different computer configurations and peripherals.

Operating systems acted as intermediaries, providing software abstraction through APIs called device drivers.

The Atlas Supervisor, begun at the University of Manchester in the late 1950s and finished in 1962, was an early operating system that allowed multitasking on a single CPU through clever scheduling.

Multitasking enabled multiple programs to run simultaneously, sharing time on a single CPU.

To manage memory in multitasking environments, operating systems allocated each program its own block of memory.

Virtual memory was introduced to simplify memory management, allowing programs to assume their memory starts at address 0.

Memory Protection was a feature that isolated programs from one another, preventing a buggy program from affecting others.

The University of Manchester's Atlas was the first computer and OS to support virtual and protected memory.

By the 1970s, computers were fast and cheap enough for institutions to allow multiple users, leading to the development of time-sharing systems.

Multics, released in 1969, was an early time-sharing operating system designed to be secure from the outset.

Unix, developed by Dennis Ritchie and Ken Thompson, was a lean operating system that separated the core functionality into a kernel and a set of useful tools.

Unix's simplicity allowed it to be run on diverse hardware and gained popularity in the 1970s and 80s.

MS-DOS, released in 1981, was a simple operating system for early home computers, despite lacking multitasking and protected memory.

Early versions of Windows lacked strong memory protection, leading to the infamous 'blue screen of death' when programs crashed.

Modern operating systems like Mac OS X, Windows 10, Linux, iOS, and Android all feature multitasking, virtual, and protected memory.

The development of operating systems has enabled the simultaneous running of multiple programs, a feat made possible by decades of research and development.

Transcripts

play00:03

This episode is supported by Hover.

play00:06

Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science!

play00:09

Computers in the 1940s and early 50s ran one program at a time.

play00:12

A programmer would write one at their desk, for example, on punch cards.

play00:15

Then, they’d carry it to a room containing a room-sized computer, and hand it to a dedicated

play00:19

computer operator.

play00:20

That person would then feed the program into the computer when it was next available.

play00:24

The computer would run it, spit out some output, and halt.

play00:27

This very manual process worked OK back when computers were slow, and running a program

play00:31

often took hours, days or even weeks.

play00:33

But, as we discussed last episode, computers became faster... and faster... and faster

play00:38

– exponentially so!

play00:39

Pretty soon, having humans run around and inserting programs into readers was taking

play00:43

longer than running the actual programs themselves.

play00:46

We needed a way for computers to operate themselves, and so, operating systems were born.

play00:50

INTRO

play00:59

Operating systems, or OS’es for short, are just programs.

play01:03

But, special privileges on the hardware let them run and manage other programs.

play01:07

They’re typically the first one to start when a computer is turned on, and all subsequent

play01:10

programs are launched by the OS.

play01:12

They got their start in the 1950s, as computers became more widespread and more powerful.

play01:16

The very first OSes augmented the mundane, manual task of loading programs by hand.

play01:21

Instead of being given one program at a time, computers could be given batches.

play01:25

When the computer was done with one, it would automatically and near-instantly start the next.

play01:30

There was no downtime while someone scurried around an office to find the next program

play01:33

to run.

play01:34

This was called batch processing.

play01:36

While computers got faster, they also got cheaper.

play01:38

So, they were popping up all over the world, especially in universities and government

play01:42

offices.

play01:43

Soon, people started sharing software.

play01:45

But there was a problem…

play01:46

In the era of one-off computers, like the Harvard Mark 1 or ENIAC, programmers only

play01:51

had to write code for that one single machine.

play01:53

The processor, punch card readers, and printers were known and unchanging.

play01:58

But as computers became more widespread, their configurations were not always identical,

play02:02

like computers might have the same CPU, but not the same printer.

play02:05

This was a huge pain for programmers.

play02:07

Not only did they have to worry about writing their program, but also how to interface with

play02:11

each and every model of printer, and all devices connected to a computer, what are called peripherals.

play02:16

Interfacing with early peripherals was very low level, requiring programmers to know intimate

play02:20

hardware details about each device.

play02:23

On top of that, programmers rarely had access to every model of a peripheral to test their code on.

play02:27

So, they had to write code as best they could, often just by reading manuals, and hope it

play02:32

worked when shared.

play02:33

Things weren’t exactly plug-and-play back then… more plug-and-pray.

play02:36

This was clearly terrible, so to make it easier for programmers, Operating Systems stepped

play02:40

in as intermediaries between software programs and hardware peripherals.

play02:45

More specifically, they provided a software abstraction, through APIs, called device drivers.

play02:50

These allow programmers to talk to common input and output hardware, or I/O for short,

play02:54

using standardized mechanisms.

play02:56

For example, programmers could call a function like “print highscore”, and the OS would

play03:00

do the heavy lifting to get it onto paper.

play03:02

By the end of the 1950s, computers had gotten so fast, they were often idle waiting for

play03:06

slow mechanical things, like printers and punch card readers.

play03:09

While programs were blocked on I/O, the expensive processor was just chillin’... not like

play03:13

a villain… you know, just relaxing.

play03:15

In the late 50’s, the University of Manchester, in the UK, started work on a supercomputer

play03:19

called Atlas, one of the first in the world.

play03:21

They knew it was going to be wicked fast, so they needed a way to make maximal use of

play03:25

the expensive machine.

play03:26

Their solution was a program called the Atlas Supervisor, finished in 1962.

play03:31

This operating system not only loaded programs automatically, like earlier batch systems,

play03:35

but could also run several at the same time on its single CPU.

play03:39

It did this through clever scheduling.

play03:40

Let’s say we have a game program running on Atlas, and we call the function “print

play03:44

highscore” which instructs Atlas to print the value of a variable named “highscore”

play03:48

onto paper to show our friends that we’re the ultimate champion of virtual tiddlywinks.

play03:52

That function call is going to take a while, the equivalent of thousands of clock cycles,

play03:57

because mechanical printers are slow in comparison to electronic CPUs.

play04:01

So instead of waiting for the I/O to finish, Atlas instead puts our program to sleep, then

play04:05

selects and runs another program that’s waiting and ready to run.

play04:08

Eventually, the printer will report back to Atlas that it finished printing the value

play04:12

of “highscore”.

play04:13

Atlas then marks our program as ready to go, and at some point, it will be scheduled to

play04:16

run again on the CPU, and continue onto the next line of code following the print statement.

play04:21

In this way, Atlas could have one program running calculations on the CPU, while another

play04:25

was printing out data, and yet another reading in data from a punch tape.

play04:29

Atlas’ engineers doubled down on this idea, and outfitted their computer with 4 paper

play04:34

tape readers, 4 paper tape punches, and up to 8 magnetic tape drives.

play04:38

This allowed many programs to be in progress all at once, sharing time on a single CPU.

play04:43

This ability, enabled by the Operating System, is called multitasking.

play04:46

There’s one big catch to having many programs running simultaneously on a single computer, though.

play04:51

Each one is going to need some memory, and we can’t lose that program’s data when

play04:55

we switch to another program.

play04:56

The solution is to allocate each program its own block of memory.

play04:59

So, for example, let’s say a computer has 10,000 memory locations in total.

play05:04

Program A might get allocated memory addresses 0 through 999, and Program B might get 1000

play05:10

through 1999, and so on.

play05:13

If a program asks for more memory, the operating system decides if it can grant that request,

play05:17

and if so, what memory block to allocate next.

play05:20

This flexibility is great, but introduces a quirk.

play05:23

It means that Program A could end up being allocated non-sequential blocks of memory,

play05:27

in say addresses 0 through 999, and 2000 through 2999.

play05:33

And this is just a simple example - a real program might be allocated dozens of blocks

play05:37

scattered all over memory.

play05:38

As you might imagine, this would get really confusing for programmers to keep track of.

play05:42

Maybe there’s a long list of sales data in memory that a program has to total up at

play05:46

the end of the day, but this list is stored across a bunch of different blocks of memory.

play05:50

To hide this complexity, Operating Systems virtualize memory locations.

play05:54

With Virtual Memory, programs can assume their memory always starts at address 0, keeping

play05:58

things simple and consistent.

play06:00

However, the actual, physical location in computer memory is hidden and abstracted by

play06:04

the operating system.

play06:06

Just a new level of abstraction.

play06:13

Let’s take our example Program B, which has been allocated a block of memory from

play06:17

address 1000 to 1999.

play06:21

As far as Program B can tell, this appears to be a block from 0 to 999.

play06:25

The OS and CPU handle the virtual-to-physical memory remapping automatically.

play06:29

So, if Program B requests memory location 42, it really ends up reading address 1042.

play06:36

This virtualization of memory addresses is even more useful for Program A, which in our

play06:40

example, has been allocated two blocks of memory that are separated from one another.

play06:44

This too is invisible to Program A.

play06:46

As far as it can tell, it’s been allocated a continuous block of 2000 addresses.

play06:51

When Program A reads memory address 999, that does coincidentally map to physical memory

play06:57

address 999.

play06:59

But if Program A reads the very next value in memory, at address 1000, that gets mapped

play07:03

behind the scenes to physical memory address 2000.

play07:06

This mechanism allows programs to have flexible memory sizes, called dynamic memory allocation,

play07:11

that appear to be continuous to them.

play07:13

It simplifies everything and offers tremendous flexibility to the Operating System in running

play07:18

multiple programs simultaneously.

play07:20

Another upside of allocating each program its own memory, is that they’re better isolated

play07:23

from one another.

play07:24

So, if a buggy program goes awry, and starts writing gobbledygook, it can only trash its

play07:28

own memory, not that of other programs.

play07:31

This feature is called Memory Protection.

play07:33

This is also really useful in protecting against malicious software, like viruses.

play07:37

For example, we generally don’t want other programs to have the ability to read or modify

play07:41

the memory of, let's say, our email. With that kind of access, malware could send emails

play07:45

on your behalf and maybe steal personal information.

play07:48

Not good!

play07:49

Atlas had both virtual and protected memory.

play07:51

It was the first computer and OS to support these features!

play07:54

By the 1970s, computers were sufficiently fast and cheap.

play07:58

Institutions like a university could buy a computer and let students use it.

play08:02

It was not only fast enough to run several programs at once, but also give several users

play08:06

simultaneous, interactive access.

play08:09

This was done through a terminal, which is a keyboard and screen that connects to a big

play08:13

computer, but doesn’t contain any processing power itself.

play08:16

A refrigerator-sized computer might have 50 terminals connected to it, allowing up to

play08:21

50 users.

play08:22

Now operating systems had to handle not just multiple programs, but also multiple users.

play08:27

So that no one person could gobble up all of a computer's resources, operating systems

play08:30

were developed that offered time-sharing.

play08:32

With time-sharing each individual user was only allowed to utilize a small fraction of

play08:37

the computer’s processor, memory, and so on.

play08:39

Because computers are so fast, even getting just 1/50th of its resources was enough for

play08:44

individuals to complete many tasks.

play08:45

The most influential of early time-sharing Operating Systems was Multics, or Multiplexed

play08:50

Information and Computing Service, released in 1969.

play08:54

Multics was the first major operating system designed to be secure from the outset.

play08:58

Developers didn’t want mischievous users accessing data they shouldn't, like students

play09:02

attempting to access the final exam on their professor’s account.

play09:05

Features like this meant Multics was really complicated for its time, using around 1 Megabit

play09:10

of memory, which was a lot back then!

play09:12

That might be half of a computer's memory, just to run the OS!

play09:15

Dennis Ritchie, one of the researchers working on Multics, once said:

play09:18

“One of the obvious things that went wrong with Multics as a commercial success was just

play09:23

that it was sort of over-engineered in a sense.

play09:25

There was just too much in it.”

play09:26

This led Dennis, and another Multics researcher,

play09:28

Ken Thompson, to strike out on their own and build a new, lean operating system… called Unix.

play09:33

They wanted to separate the OS into two parts:

play09:36

First was the core functionality of the OS, things like memory management, multitasking,

play09:40

and dealing with I/O, which is called the kernel.

play09:43

The second part was a wide array of useful tools that came bundled with, but not part

play09:47

of the kernel, things like programs and libraries.

play09:49

Building a compact, lean kernel meant intentionally leaving some functionality out.

play09:53

Tom Van Vleck, another Multics developer, recalled:

play09:55

“I remarked to Dennis that easily half the code I was writing in Multics was error recovery

play10:00

code."

play10:01

He said, "We left all that stuff out of Unix.

play10:03

If there's an error, we have this routine called panic, and when it is called, the machine

play10:07

crashes, and you holler down the hall, 'Hey, reboot it.'"”

play10:11

You might have heard of kernel panics. This is where the term came from.

play10:14

It’s literally when the kernel crashes, has no recourse to recover, and so calls a

play10:18

function called “panic”.

play10:19

Originally, all it did was print the word “panic” and then enter

play10:22

an infinite loop.

play10:24

This simplicity meant that Unix could be run on cheaper and more diverse hardware, making

play10:28

it popular inside Bell Labs, where Dennis and Ken worked.

play10:31

As more developers started using Unix to build and run their own programs, the number of

play10:34

contributed tools grew.

play10:36

Soon after its release in 1971, it gained compilers for different programming languages

play10:41

and even a word processor, quickly making it one of the most popular OSes of the 1970s

play10:45

and 80s.

play10:46

At the same time, by the early 1980s, the cost of a basic computer had fallen to the

play10:50

point where individual people could afford one, called a personal or home computer.

play10:55

These were much simpler than the big mainframes found at universities, corporations, and governments.

play10:59

So, their operating systems had to be equally simple.

play11:02

For example, Microsoft’s Disk Operating System, or MS-DOS, was just 160 kilobytes,

play11:07

allowing it to fit, as the name suggests, onto a single disk.

play11:10

First released in 1981, it became the most popular OS for early home computers, even

play11:15

though it lacked multitasking and protected memory.

play11:18

This meant that programs could, and would, regularly crash the system.

play11:22

While annoying, it was an acceptable tradeoff, as users could just turn their own computers

play11:26

off and on again!

play11:27

Even early versions of Windows, first released by Microsoft in 1985 and which dominated the

play11:32

OS scene throughout the 1990s, lacked strong memory protection.

play11:35

When programs misbehaved, you could get the blue screen of death, a sign that a program

play11:40

had crashed so badly that it took down the whole operating system.

play11:43

Luckily, newer versions of Windows have better protections and usually don't crash that often.

play11:48

Today, computers run modern operating systems, like Mac OS X, Windows 10, Linux, iOS and

play11:53

Android.

play11:54

Even though the computers we own are most often used by just a single person, you! their

play11:58

OSes all have multitasking and virtual and protected memory.

play12:02

So, they can run many programs at once: you can watch YouTube in your web browser, edit

play12:06

a photo in Photoshop, play music in Spotify and sync Dropbox all at the same time.

play12:12

This wouldn’t be possible without those decades of research and development on Operating

play12:16

Systems, and of course the proper memory to store those programs.

play12:19

Which we’ll get to next week.

play12:21

I’d like to thank Hover for sponsoring this episode.

play12:24

Hover is a service that helps you buy and manage domain names.

play12:27

Hover has over 400 domain extensions to end your domain with - including .com and .net.

play12:32

You can also get unique domains that are more professional than a generic address.

play12:35

Here at Crash Course, we'd get the domain name “mongols.fans” but I think you know

play12:40

that already.

play12:41

Once you have your domain, you can set up your custom email to forward to your existing

play12:44

email address -- including Outlook or Gmail or whatever you already use.

play12:48

With Hover, you can get a custom domain and email address for 10% off.

play12:52

Go to Hover.com/crashcourse today to create your custom domain and help support our show!


Related Tags
Operating Systems · Computer History · Batch Processing · Multitasking · Virtual Memory · Memory Protection · Unix · MS-DOS · Time-Sharing · Software Abstraction