The World Wide Web: Crash Course Computer Science #30

CrashCourse
4 Oct 2017 · 11:36

Summary

TL;DR: This video script explores the difference between the Internet and the World Wide Web, and how the Web runs on top of the Internet. The Web is a huge distributed application, accessed through web pages hosted on millions of servers worldwide; pages are documents connected to one another by hyperlinks. The video introduces the concept of hypertext, and how Uniform Resource Locators (URLs) and the Hypertext Transfer Protocol (HTTP) are used to locate and request pages. It also covers the development of the Hypertext Markup Language (HTML) and how it is used to create and link pages, and reviews the origin of web browsers, including the work of Tim Berners-Lee, creator of the first web browser and web server, and the many browsers and servers that followed. Finally, the video discusses Net Neutrality, the important principle concerning whether all packets on the internet should be treated equally, and its potential impact on innovation and technological development.

Takeaways

  • 🌐 The Internet and the World Wide Web are two different things; the Web is built on top of the Internet.
  • 🔗 Hyperlinks are the Web's basic building blocks, letting users jump easily from one document to another.
  • 📄 The Web's documents are called web pages; they exist as hypertext and are retrieved and rendered by web browsers.
  • 🌐 Every web page needs a unique address, a Uniform Resource Locator (URL), used to locate the resource on the Internet.
  • 📡 When a user requests a page, the computer first performs a DNS lookup to resolve the domain name to an IP address, then opens a TCP connection to the target server.
  • 🗨️ The Hypertext Transfer Protocol (HTTP) is the standard protocol for requesting pages from a server; the first version, HTTP 0.9, had only one command, "GET".
  • 📝 The Hypertext Markup Language (HTML) is the markup language used to create web pages; it defines a page's structure and content.
  • 🔍 Search engines made finding information vastly easier. Early search engines indexed pages with crawler programs, while modern ones such as Google use sophisticated algorithms to improve result quality.
  • 🏗️ The first web browser and web server were written by Tim Berners-Lee in 1990; he also created the foundational web standards URL, HTML, and HTTP.
  • 🌟 Net Neutrality is the important principle that all packets on the internet should be treated equally, with no difference in speed or priority based on where they come from.
  • 🚀 The Web's open standards fostered innovation, allowing anyone to build new web servers and browsers, which was key to its rapid growth.
  • 📈 As the Web grew quickly, many browsers and servers appeared, along with new sites and services such as Amazon and eBay.

Q & A

  • What is the difference between the Internet and the World Wide Web?

    - The Internet is the underlying infrastructure that transmits data, while the World Wide Web is the largest distributed application running on top of it, accessed through a web browser. The Internet carries the data; the Web provides a way to browse information through hyperlinks.

  • How did hyperlinks change the way we browse information?

    - Before hyperlinks, users had to rummage through the file system or type into a search box to find information. Hyperlinks let users jump easily from one related topic to another by clicking text or images.

  • What is the basic building block of the World Wide Web?

    - The basic building block of the Web is the individual web page: a document containing content, which can include links to other pages. These links are called hyperlinks.

  • What concept did Vannevar Bush propose in 1945?

    - In 1945, Vannevar Bush conceptualized the value of hyperlinked information and described a hypothetical machine called the Memex, which used "associative indexing" to tie items of information together, allowing any item to select another immediately and automatically at the tap of a button.

  • How are web pages uniquely identified by Uniform Resource Locators (URLs)?

    - Each hypertext page needs a unique address so other pages can link to it. On the Web, this address is specified by a Uniform Resource Locator (URL). For example, a page's URL might be thecrashcourse.com/courses.
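
A URL's pieces can be pulled apart with Python's standard library. A minimal sketch, using the example address above (the "https" scheme is an assumption added so the parser can separate host from path):

```python
from urllib.parse import urlsplit

# The bare example "thecrashcourse.com/courses" has no scheme, so one is
# added here for illustration; "https" is an assumption.
url = "https://thecrashcourse.com/courses"

parts = urlsplit(url)
print(parts.scheme)  # protocol to speak: "https"
print(parts.netloc)  # host name handed to the DNS lookup: "thecrashcourse.com"
print(parts.path)    # page requested from the web server: "/courses"
```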

  • When a user enters a web address in the browser, what does the computer do first?

    - It first performs a DNS lookup, which converts the domain name (such as 'thecrashcourse.com') into the corresponding computer's IP address.
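
In code, that lookup is a single call into the operating system's resolver. A minimal Python sketch; it resolves "localhost" so it runs without network access, whereas a browser would pass the site's real domain name:

```python
import socket

# gethostbyname performs a name lookup (DNS or the local hosts file) and
# returns an IPv4 address as a string. "localhost" is used here so the
# example works offline; a browser would resolve e.g. "thecrashcourse.com".
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1
```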

  • What functionality did the first version of HTTP have?

    - The first version of HTTP, HTTP 0.9, contained only one command: 'GET'. This command requests a web page, which is all you need for basic page retrieval.
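
The exchange itself is plain text over the TCP connection. A sketch of what travels over the wire, without opening a real socket; the response bytes are a made-up stand-in for what a server might send, and the status line reflects the later HTTP versions that added codes such as 200 and 404:

```python
# An HTTP 0.9 request is a single line of ASCII text.
request = "GET /courses\r\n".encode("ascii")

# A later-HTTP-style reply begins with a status line before the hypertext.
# This particular response is invented for illustration.
response = b"HTTP/1.0 200 OK\r\n\r\n<h1>Courses</h1>"

status_line, _, body = response.partition(b"\r\n\r\n")
code = int(status_line.split()[1].decode("ascii"))  # 200 = OK; 4xx = client error
print(code, body.decode("ascii"))
```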

  • How many markup commands did the first version of HTML provide?

    - The very first version of HTML, version 0.a, created in 1990, provided 18 HTML commands to mark up pages.

  • How do modern web pages compare with early ones?

    - Modern pages are far more sophisticated. The newest version of HTML, HTML5, has over a hundred different tags for things like images, tables, forms, and buttons. Technologies such as CSS and JavaScript can also be embedded in HTML pages to do even fancier things.

  • Who wrote the first web browser and web server?

    - The first web browser and web server were written by (now Sir) Tim Berners-Lee over the course of two months in 1990. At the time he was working at CERN in Switzerland, and he simultaneously created the foundational web standards URL, HTML, and HTTP.

  • How does a search engine work?

    - A search engine consists of three parts: a web crawler, an index, and a search algorithm. The crawler follows every link it can find on the web, the index records which text terms appear on which pages the crawler has visited, and the search algorithm consults the index to produce search results.
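
The index-and-search portion can be sketched in a few lines of Python. The crawling step is omitted, the three "pages" and their text are invented, and the ranking is the early hit-count metric described in the video (not Google's backlink approach), so the keyword-stuffed page wins:

```python
from collections import Counter

# Stand-ins for pages a crawler has already fetched (contents invented).
pages = {
    "catcare.example": "cat food cat litter cat toys",
    "dogblog.example": "dog walks and dog treats",
    "spam.example":    "cat " * 100,  # keyword stuffing to game the ranking
}

# Index: for each page, count how often each term appears on it.
index = {url: Counter(text.split()) for url, text in pages.items()}

def search(term):
    """Return pages containing the term, highest hit count first."""
    hits = [(counts[term], url) for url, counts in index.items() if counts[term]]
    return [url for _, url in sorted(hits, reverse=True)]

print(search("cat"))  # the spam page ranks first under this naive metric
```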

  • What is Net Neutrality?

    - Net Neutrality is the principle that all packets on the internet should be treated equally: whether they carry email or streaming video, they should travel at the same speed and priority. The debate concerns whether Internet Service Providers (ISPs) should be allowed to deliver some data preferentially, and what that could mean for small companies and innovation.

Outlines

00:00

🌐 The Internet vs. the World Wide Web

This segment explains the difference between the Internet and the Web. The Internet is the underlying infrastructure that carries data, while the Web is a distributed application built on top of it, accessed with a browser. The Web consists of pages connected by hyperlinks, forming a vast network of information. The idea of hyperlinks goes back to Vannevar Bush, who in 1945 described a hypothetical machine called the Memex capable of associative indexing. The segment also introduces page addresses (URLs) and how pages are requested via DNS lookups, TCP connections, and the HTTP protocol.

05:01

📄 HTML and Building Web Pages

This segment walks through the basics of HTML (Hypertext Markup Language), the markup language used to create web pages. HTML documents define the structure of content with tags such as headings, links, and lists. It shows how to build a simple page with HTML tags: creating a heading, adding content, making hyperlinks, and creating an ordered list. It also covers HTML's evolution from its original 18 commands to HTML5's more than one hundred tags, and mentions other technologies such as CSS and JavaScript, which can be embedded in HTML pages for more sophisticated functionality.

10:01

🏛 Web Browsers and the Origin of the Web

This segment recounts the history of web browsers and the origin of the Web. The first browser and server were written by Tim Berners-Lee in 1990; while working at CERN he created the foundational standards URL, HTML, and HTTP. Browser and server software was then released and spread rapidly, including the Mosaic browser and a variety of web servers. The segment also covers the development of search engines, from the early JumpStation to Google's algorithm, which judges a page's authority by examining how other sites link to it.

🚀 Why Net Neutrality Matters

This segment examines the concept of and controversy around Net Neutrality, the principle that all packets on the internet should be treated equally, whether email or streaming video. Some companies, such as ISPs, might prefer that their own data be delivered preferentially. That could put small companies and start-ups at an unfair competitive disadvantage if they cannot pay extra for priority service. The segment also presents the opposing view that market forces and competition would discourage bad behavior by ISPs. The issue is complex and far-reaching and deserves deeper study and discussion.

Keywords

💡Internet

The Internet is the network system connecting computers worldwide; it transmits data through infrastructure such as wires, signals, switches, packets, and routers. In the video it is described as the underlying plumbing that carries data for all applications and services, including the World Wide Web.

💡World Wide Web

The World Wide Web is a distributed application built on top of the Internet, running on millions of servers and accessed with a special program called a web browser. The video notes that the Web is not the Internet, even though people often use the two terms interchangeably in everyday language.

💡Hyperlink

A hyperlink is a link, in text or image form, used on the Web to jump from one page to another. The video emphasizes that hyperlinks let users flow easily from one related topic to another, one of the Web's core features.

💡Hypertext

Hypertext is text containing hyperlinks, letting users jump easily from one piece of information to another. The video notes that the power of hypertext lies in making the connection and retrieval of information more intuitive and convenient.

💡Uniform Resource Locator (URL)

A URL is the standard address format for identifying the location of a resource on the internet. Every hypertext page needs a unique URL. The video gives an example of reaching a specific page through its URL.

💡Domain Name System (DNS)

DNS is the system that converts domain names into IP addresses. When requesting a website, the user's computer first performs a DNS lookup to translate the domain name into the corresponding IP address so a connection can be made. The video explains how DNS lookups work.

💡Hypertext Transfer Protocol (HTTP)

HTTP is the protocol for requesting web pages from a web server. The video notes that the first version, HTTP 0.9, contained only the single command "GET" for requesting a page. Later versions of HTTP added status codes indicating whether a request succeeded or failed.

💡HTML

HTML (Hypertext Markup Language) is a markup language used to create web pages. The video demonstrates building a simple page with basic HTML tags, including headings, links, and lists.

💡Web browser

A web browser is the application that lets users talk to web servers, requesting pages and media and rendering the returned content. The video mentions Tim Berners-Lee, creator of the first browser and server, and outlines the history of browsers that followed.

💡Search engine

A search engine is a tool that helps users find information on the internet. The video traces the development of search engines from early directory sites to automated crawlers and indexes, and how Google changed result ranking with its link-analysis algorithm.

💡Net Neutrality

Net Neutrality is the principle that all packets on the internet should be treated equally, regardless of which service or application they belong to. The video discusses its importance and its role in preventing internet service providers from favoring particular content and services.

Highlights

The Internet and the World Wide Web are two different things, even though people often use the terms interchangeably in everyday language.

The World Wide Web runs on top of the Internet, in the same way that applications like Skype, Minecraft, or Instagram do.

The Internet is the underlying plumbing that conveys data for all these different applications; the Web is the biggest distributed application of them all.

The fundamental building block of the Web is the single web page: a document containing content, which can link to other pages.

Hyperlinks are clickable text or images connecting pieces of information; users click them to jump to another page.

Vannevar Bush conceptualized the value of hyperlinked information back in 1945, describing a hypothetical machine called the Memex.

Hypertext is text containing hyperlinks, making it easy to flow from one related topic to another.

Each hypertext page needs a unique address, the Uniform Resource Locator (URL).

When you request a site, your computer first performs a DNS lookup to convert the domain name into an IP address.

The browser sends the server a GET request over HTTP to fetch a web page.

Later versions of HTTP added status codes, such as 200 for OK and 404 for a client error.

Web page hypertext is stored and sent as plain text, encoded for example in ASCII or UTF-16.

The Hypertext Markup Language (HTML) was developed to mark up hypertext elements in a text file.

The first version of HTML provided 18 commands to mark up pages; HTML5 has over 100 different tags.

Cascading Style Sheets (CSS) and JavaScript are other technologies that can be embedded in HTML pages to do even fancier things.

Tim Berners-Lee created the first web browser and server in 1990, simultaneously creating the foundational web standards URL, HTML, and HTTP.

Mosaic, created in 1993 by a team at the University of Illinois at Urbana-Champaign, was the first browser to allow graphics to be embedded alongside text.

As the Web grew, people needed new ways to find information, which led to the development of search engines.

Google's success was due in part to its algorithm, which judged a page's quality by examining how other websites linked to it.

Net Neutrality is the principle that all packets on the internet should be treated equally.

The Net Neutrality debate involves complex technical and business questions, with far-reaching implications for innovation and market competition.

Transcripts

Hi, I’m Carrie Anne, and welcome to CrashCourse Computer Science. Over the past two episodes, we’ve delved into the wires, signals, switches, packets, routers and protocols that make up the internet. Today we’re going to move up yet another level of abstraction and talk about the World Wide Web. This is not the same thing as the Internet, even though people often use the two terms interchangeably in everyday language. The World Wide Web runs on top of the internet, in the same way that Skype, Minecraft or Instagram do. The Internet is the underlying plumbing that conveys the data for all these different applications. And the World Wide Web is the biggest of them all – a huge distributed application running on millions of servers worldwide, accessed using a special program called a web browser. We’re going to learn about that, and much more, in today’s episode.

INTRO

The fundamental building block of the World Wide Web – or web for short – is a single page. This is a document, containing content, which can include links to other pages. These are called hyperlinks. You all know what these look like: text or images that you can click, and they jump you to another page. These hyperlinks form a huge web of interconnected information, which is where the whole thing gets its name.

This seems like such an obvious idea. But before hyperlinks were implemented, every time you wanted to switch to another piece of information on a computer, you had to rummage through the file system to find it, or type it into a search box. With hyperlinks, you can easily flow from one related topic to another.

The value of hyperlinked information was conceptualized by Vannevar Bush way back in 1945. He published an article describing a hypothetical machine called a Memex, which we discussed in Episode 24. Bush described it as "associative indexing ... whereby any item may be caused at will to select another immediately and automatically." He elaborated: "The process of tying two things together is the important thing... thereafter, at any time, when one of those items is in view, the other [item] can be instantly recalled merely by tapping a button." In 1945, computers didn’t even have screens, so this idea was way ahead of its time!

Text containing hyperlinks is so powerful, it got an equally awesome name: hypertext! Web pages are the most common type of hypertext document today. They’re retrieved and rendered by web browsers, which we'll get to in a few minutes.

In order for pages to link to one another, each hypertext page needs a unique address. On the web, this is specified by a Uniform Resource Locator, or URL for short. An example web page URL is thecrashcourse.com/courses.

Like we discussed last episode, when you request a site, the first thing your computer does is a DNS lookup. This takes a domain name as input – like “the crash course dot com” – and replies back with the corresponding computer’s IP address. Now, armed with the IP address of the computer you want, your web browser opens a TCP connection to a computer that’s running a special piece of software called a web server. The standard port number for web servers is port 80. At this point, all your computer has done is connect to the web server at the address thecrashcourse.com.

The next step is to ask that web server for the “courses” hypertext page. To do this, it uses the aptly named Hypertext Transfer Protocol, or HTTP. The very first documented version of this spec, HTTP 0.9, created in 1991, only had one command – “GET”. Fortunately, that’s pretty much all you need. Because we’re trying to get the “courses” page, we send the server the following command – GET /courses. This command is sent as raw ASCII text to the web server, which then replies back with the web page hypertext we requested. This is interpreted by your computer's web browser and rendered to your screen. If the user follows a link to another page, the computer just issues another GET request. And this goes on and on as you surf around the website.

In later versions, HTTP added status codes, which prefixed any hypertext that was sent following a GET request. For example, status code 200 means OK – I’ve got the page and here it is! Status codes in the four hundreds are for client errors. Like, if a user asks the web server for a page that doesn’t exist, that’s the dreaded 404 error!

Web page hypertext is stored and sent as plain old text, for example, encoded in ASCII or UTF-16, which we talked about in Episodes 4 and 20.
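
The "plain old text" point is easy to see in code. A small Python sketch comparing the two encodings mentioned (the sample string is arbitrary):

```python
text = "GET /courses"

ascii_bytes = text.encode("ascii")   # 1 byte per character
utf16_bytes = text.encode("utf-16")  # 2 bytes per character, plus a 2-byte
                                     # byte-order mark at the front
print(len(ascii_bytes))  # 12
print(len(utf16_bytes))  # 26
```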

Because plain text files don’t have a way to specify what’s a link and what’s not, it was necessary to develop a way to “mark up” a text file with hypertext elements. For this, the Hypertext Markup Language was developed. The very first version of HTML, version 0.a, created in 1990, provided 18 HTML commands to mark up pages. That’s it! Let’s build a webpage with these!

First, let’s give our web page a big heading. To do this, we type in the letters “h1”, which indicates the start of a first level heading, and we surround that in angle brackets. This is one example of an HTML tag. Then, we enter whatever heading text we want. We don’t want the whole page to be a heading. So, we need to “close” the “h1” tag like so, with a little slash in the front. Now let’s add some content. Visitors may not know what Klingons are, so let’s make that word a hyperlink to the Klingon Language Institute for more information. We do this with an “a” tag, inside of which we include an attribute that specifies a hyperlink reference. That’s the page to jump to if the link is clicked. And finally, we need to close the “a” tag.

Now let’s add a second level heading, which uses an “h2” tag. HTML also provides tags to create lists. We start this by adding the tag for an ordered list. Then we can add as many items as we want, surrounded in “li” tags, which stands for list item. People may not know what a bat'leth is, so let’s make that a hyperlink too. Lastly, for good form, we need to close the ordered list tag. And we’re done – that’s a very simple web page!
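
Put together, the page sketched in this segment might look something like the following. The heading text, list items, and link targets are illustrative guesses; the transcript does not give the exact wording or URLs:

```html
<h1>Learning Klingon</h1>
A page for fans of the <a href="https://www.kli.org">Klingon</a> language.
<h2>What you will need</h2>
<ol>
  <li>A good dictionary</li>
  <li>A <a href="https://en.wikipedia.org/wiki/Bat%27leth">bat'leth</a></li>
</ol>
```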

If you save this text into Notepad or TextEdit, and name it something like “test.html”, you should be able to open it by dragging it into your computer’s web browser. Of course, today’s web pages are a tad more sophisticated. The newest version of HTML, version 5, has over a hundred different tags – for things like images, tables, forms and buttons. And there are other technologies we’re not going to discuss, like Cascading Style Sheets or CSS and JavaScript, which can be embedded into HTML pages and do even fancier things.

That brings us back to web browsers. This is the application on your computer that lets you talk with all these web servers. Browsers not only request pages and media, but also render the content that’s being returned. The first web browser, and web server, was written by (now Sir) Tim Berners-Lee over the course of two months in 1990. At the time, he was working at CERN in Switzerland. To pull this feat off, he simultaneously created several of the fundamental web standards we discussed today: URLs, HTML and HTTP. Not bad for two months’ work! Although to be fair, he’d been researching hypertext systems for over a decade. After initially circulating his software amongst colleagues at CERN, it was released to the public in 1991. The World Wide Web was born.

Importantly, the web was an open standard, making it possible for anyone to develop new web servers and browsers. This allowed a team at the University of Illinois at Urbana-Champaign to create the Mosaic web browser in 1993. It was the first browser that allowed graphics to be embedded alongside text; previous browsers displayed graphics in separate windows. It also introduced new features like bookmarks, and had a friendly GUI interface, which made it popular. Even though it looks pretty crusty, it’s recognizable as the web we know today!

By the end of the 1990s, there were many web browsers in use, like Netscape Navigator, Internet Explorer, Opera, OmniWeb and Mozilla. Many web servers were also developed, like Apache and Microsoft’s Internet Information Services (IIS). New websites popped up daily, and web mainstays like Amazon and eBay were founded in the mid-1990s. A golden era!

The web was flourishing, and people increasingly needed ways to find things. If you knew the web address of where you wanted to go – like ebay.com – you could just type it into the browser. But what if you didn’t know where to go? Like, you only knew that you wanted pictures of cute cats. Right now! Where do you go? At first, people maintained web pages which served as directories hyperlinking to other websites. Most famous among these was "Jerry and David's Guide to the World Wide Web", renamed Yahoo in 1994. As the web grew, these human-edited directories started to get unwieldy, and so search engines were developed. Let’s go to the thought bubble!

The earliest web search engine that operated like the ones we use today was JumpStation, created by Jonathon Fletcher in 1993 at the University of Stirling. This consisted of three pieces of software that worked together. The first was a web crawler, software that followed all the links it could find on the web; anytime it followed a link to a page that had new links, it would add those to its list. The second component was an ever enlarging index, recording what text terms appeared on what pages the crawler had visited. The final piece was a search algorithm that consulted the index; for example, if I typed the word “cat” into JumpStation, every webpage where the word “cat” appeared would come up in a list.

Early search engines used very simple metrics to rank order their search results, most often just the number of times a search term appeared on a page. This worked okay, until people started gaming the system, like by writing “cat” hundreds of times on their web pages just to steer traffic their way. Google’s rise to fame was in large part due to a clever algorithm that sidestepped this issue. Instead of trusting the content on a web page, they looked at how other websites linked to that page. If it was a spam page with the word cat over and over again, no site would link to it. But if the webpage was an authority on cats, then other sites would likely link to it. So the number of what are called “backlinks”, especially from reputable sites, was often a good sign of quality. This started as a research project called BackRub at Stanford University in 1996, before being spun out, two years later, into the Google we know today. Thanks, thought bubble!

Finally, I want to take a second to talk about a term you’ve probably heard a lot recently: “Net Neutrality”. Now that you’ve built an understanding of packets, internet routing, and the World Wide Web, you know enough to understand the essence – at least the technical essence – of this big debate. In short, network neutrality is the principle that all packets on the internet should be treated equally. It doesn’t matter if the packets are my email or you streaming this video; they should all chug along at the same speed and priority. But many companies would prefer that their data arrive to you preferentially. Take, for example, Comcast, a large ISP that also owns many TV channels, like NBC and The Weather Channel, which are streamed online. Not to pick on Comcast, but in the absence of Net Neutrality rules, they could for example say that they want their content to be delivered silky smooth, with high priority… but other streaming videos are going to get throttled, that is, intentionally given less bandwidth and lower priority. Again, I just want to reiterate here: this is just conjecture.

At a high level, Net Neutrality advocates argue that giving internet providers this ability to essentially set up tolls on the internet – to provide premium packet delivery – plants the seeds for an exploitative business model. ISPs could be gatekeepers to content, with strong incentives to not play nice with competitors. Also, if big companies like Netflix and Google can pay to get special treatment, small companies, like start-ups, will be at a disadvantage, stifling innovation. On the other hand, there are good technical reasons why you might want different types of data to flow at different speeds. That Skype call needs high priority, but it’s not a big deal if an email comes in a few seconds late. Net Neutrality opponents also argue that market forces and competition would discourage bad behavior, because customers would leave ISPs that are throttling sites they like. This debate will rage on for a while yet, and as we always encourage on Crash Course, you should go out and learn more, because the implications of Net Neutrality are complex and wide-reaching. I’ll see you next week.

