Compression: Crash Course Computer Science #21

CrashCourse
26 Jul 2017 · 12:47

Summary

TLDR: This video, produced by Crash Course Computer Science and hosted by Carrie Anne, takes a close look at data compression. It begins by recapping the concept of files, then points out the limitation of basic formats such as text, wave, and bitmap: they are not very efficient. It then introduces compression, both lossless and lossy, using a Pac-Man image to show how Run-Length Encoding and Huffman coding reduce data size. It also covers audio and video compression, explaining how discarding details that humans barely perceive can dramatically shrink data while keeping acceptable perceptual quality. The video closes by stressing how important compression is for storing and transmitting large amounts of data, especially for streaming services.

Takeaways

  • 🗜️ File compression encodes data in fewer bits to reduce file size, making storage and transmission more efficient.
  • 🔴 Run-Length Encoding is a simple compression technique that works by reducing repeated or redundant information in a file.
  • 📈 Huffman coding is a lossless technique that uses the frequency of data blocks to generate more compact representations, built with a Huffman tree.
  • 🎨 Lossless compression loses no information; the decompressed data is identical to the original.
  • 🧩 Lossy compression shrinks files by removing or reducing information humans barely perceive, as in JPEG and MP3.
  • 👂 Lossy audio compression exploits the ear's varying sensitivity to different frequencies, discarding or reducing the precision of frequencies we barely hear.
  • 👀 Lossy image compression exploits the visual system's insensitivity to fine detail, discarding some detail to reduce data.
  • 📹 Video compression exploits temporal redundancy between frames, copying and reusing patches of data to reduce what must be transmitted.
  • 🚫 Over-aggressive lossy compression can corrupt patch data, producing the glitchy artifacts sometimes seen in video playback.
  • 📈 Compression lets users store pictures, music, and videos efficiently; it is essential for data transmission and storage.
  • 🌐 Compression makes it economically feasible for streaming services such as YouTube to deliver huge amounts of data.
  • 🎬 CuriosityStream is a streaming service of documentaries and nonfiction titles; the episode recommends "Miniverse", a look at the wonders of the Solar System.

Q & A

  • Why do we need to compress files?

    -Compression reduces file size, so we can store more files without filling up our hard drives and transmit them more quickly, avoiding, for example, the frustration of waiting for an email attachment to download.

  • What is lossless compression, and what characterizes it?

    -Lossless compression is a technique in which the data can be fully restored to its original state after compression and decompression, with no information lost. The decompressed data is identical to the original, bit for bit.

  • How does Run-Length Encoding work?

    -Run-Length Encoding is a simple form of compression that spots runs of identical values in a file and replaces each run with the value plus an extra byte recording the run's length, eliminating the repeated, redundant data.

  • How does Huffman coding generate efficient codes?

    -Huffman coding builds a Huffman tree. It first lists all possible data blocks and their frequencies, then in each round selects the two entries with the lowest frequencies, combines them into a new node, and records that node's total frequency. The process repeats until everything has been merged into a single tree. The frequency-sorted tree then yields the codes by labeling each branch with a 0 or a 1.

  • Why are Huffman codes prefix-free?

    -Huffman codes are prefix-free because every path from the root of the tree down to a leaf is unique, so no code is the start of another complete code and the codes can never conflict. (A minimal decoding sketch follows this Q&A list.)

  • What is the main difference between lossy and lossless compression?

    -Lossy compression allows some information to be discarded, usually details that human vision or hearing barely registers. Lossless compression guarantees that the decompressed data is exactly the same as the original, with no information lost.

  • Why can we discard some data in lossy compression without noticeably hurting the experience?

    -Lossy compression exploits the limits of human perception. In audio and image compression, the ear is insensitive to certain frequencies and the eye to subtle color variations, so discarding or reducing the precision of these hard-to-perceive details shrinks files dramatically without a noticeable impact on the user.

  • How does JPEG image compression work?

    -JPEG splits an image into blocks of 8x8 pixels and throws away much of the high-frequency spatial data. This preserves the visual essence of the image while using only a fraction of the original data, achieving a high compression ratio.

  • What is temporal redundancy in video compression?

    -Temporal redundancy means that many pixels are identical between consecutive frames of a video, so they do not need to be re-transmitted in every frame. Video formats exploit this by sending only data that encodes the differences between frames instead of re-sending every pixel, which greatly improves compression.

  • Why is compression so important for storing and transmitting data?

    -Compression lets users store pictures, music, and videos efficiently. Without it, streaming your favorite Carpool Karaoke videos on YouTube would be nearly impossible, given the bandwidth and the economics of transmitting that volume of data.

  • Why do Skype calls sometimes sound like robots talking?

    -As signal quality or bandwidth gets worse, the compression algorithm removes more data and further reduces precision, so detail is lost in the compression process and the call can end up sounding robotic.
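
As referenced in the prefix-free answer above, here is a minimal decoding sketch in Python. The code table is the one derived in the video (yellow-yellow → 0, white-yellow → 10, black-yellow → 110, white-white → 111); the function name and the string-of-bits input format are illustrative choices, not anything from the episode.

```python
# Decoding a prefix-free Huffman code by reading bits left to right.
CODES = {
    "0": "yellow-yellow",
    "10": "white-yellow",
    "110": "black-yellow",
    "111": "white-white",
}

def decode(bits: str) -> list[str]:
    """Decode a string of '0'/'1' characters into pixel-pair labels."""
    decoded, current = [], ""
    for bit in bits:
        current += bit
        # Because no code is a prefix of another, the first match is unambiguous.
        if current in CODES:
            decoded.append(CODES[current])
            current = ""
    if current:
        raise ValueError("bitstream ended in the middle of a code")
    return decoded

# The first three pixel pairs from the episode: white-yellow, black-yellow, yellow-yellow.
print(decode("101100"))  # ['white-yellow', 'black-yellow', 'yellow-yellow']
```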

Outlines

00:00

📚 Compression Basics

This segment introduces why compression matters and the basic idea behind it. Carrie Anne first recaps files and basic file formats, then points out their limitation: they are not very efficient. To address this, she introduces compression, explaining how encoding data with fewer bits reduces file size. Using a 4x4-pixel Pac-Man image as an example, she shows how Run-Length Encoding removes redundant information and how building a Huffman tree yields a more compact representation, both of which are lossless compression.

05:01

🔊 Lossy Compression and Perceptual Coding

The second segment digs into lossy compression, in particular how the limits of human perception can be exploited to shed data. Using sound as an example, it explains that human hearing is more sensitive to some frequencies than others, so certain frequencies can be discarded or encoded at lower precision without noticeably affecting the listening experience. The JPEG example then shows how lossy image compression works by discarding high-frequency spatial data within 8x8-pixel blocks. Finally, the segment covers video compression, which exploits the temporal redundancy between frames by copying and transforming patches of data to reduce what must be sent.

10:01

🎥 Video Compression and Applications

The final segment discusses why video compression matters, using the MPEG-4 standard as an example of how compression can shrink video files dramatically while keeping acceptable image quality. It also notes what can go wrong when compression is pushed too far, such as the trippy artifacts produced when the right motion is applied to the wrong patch data. It closes by stressing how important compression is for storage and transmission, noting that streaming services could hardly operate without it, and ends with a mention of CuriosityStream and a recommended documentary about the Solar System.
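
To make the temporal-redundancy idea above concrete, here is a toy sketch in Python that transmits only the pixels that changed between two frames and rebuilds the new frame from the previous one plus that list of changes. It is a conceptual illustration only, not how MPEG-4 or any real codec works; the tiny frame size and the dictionary-of-changes format are choices made purely for the example.

```python
# Toy temporal redundancy: send only the changed pixels, copy the rest forward.
Frame = list[list[int]]  # a tiny grayscale frame as a 2D list of pixel values

def diff(prev: Frame, curr: Frame) -> dict[tuple[int, int], int]:
    """Return {(row, col): new_value} for every pixel that differs."""
    changes = {}
    for r, (prow, crow) in enumerate(zip(prev, curr)):
        for c, (p, q) in enumerate(zip(prow, crow)):
            if p != q:
                changes[(r, c)] = q
    return changes

def apply_diff(prev: Frame, changes: dict[tuple[int, int], int]) -> Frame:
    """Copy the previous frame forward and patch in only the changed pixels."""
    out = [row[:] for row in prev]
    for (r, c), value in changes.items():
        out[r][c] = value
    return out

frame1 = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
frame2 = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]   # the bright pixel moved one step right
changes = diff(frame1, frame2)
print(changes)                                # {(1, 1): 0, (1, 2): 9} -- 2 updates instead of 9 pixels
assert apply_diff(frame1, changes) == frame2  # the receiver reconstructs the frame exactly
```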

Keywords

💡Compression

Compression is a technique for encoding data in fewer bits in order to reduce file size, so that files can be transmitted faster and more of them can be stored. In the video, compression is used to make storage and transmission more efficient, for example by reducing repeated information (Run-Length Encoding) or by using more compact representations (Huffman coding).

💡Lossless Compression

Lossless compression is a technique in which the compressed data can be restored to a state identical to the original after decompression, with no information lost. The video notes that this is essential for certain file types, such as documents, where the data must remain intact.

💡Lossy Compression

Lossy compression allows some data to be discarded during compression and is typically used for file types where the loss has little perceptual impact. The video gives audio and image compression as examples, where frequencies or details that the ear or eye barely registers are removed to shrink files dramatically.

💡Perceptual Coding

Perceptual coding exploits the limits of human perception to reduce the amount of data. Guided by models from psychophysics, it discards or reduces the precision of the information people are least sensitive to. The video cites the MP3 audio and JPEG image formats as examples of perceptual coding in action.

💡Huffman Tree

A Huffman tree is a data structure for generating efficient codes, invented by David Huffman in the 1950s. By building a frequency-sorted tree, it produces compact codes for different data blocks. In the video, a Huffman tree is used to generate compact binary codes for the pixel pairs in the image data.
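
A minimal sketch of the algorithm in Python, using the standard library's heapq to repeatedly pull out the two lowest-frequency entries, as the video describes. The pixel-pair names and frequencies are the ones from the episode; the function and variable names are only illustrative.

```python
import heapq
from itertools import count

def huffman_codes(frequencies: dict[str, int]) -> dict[str, str]:
    """Build a Huffman tree and return a prefix-free code for each symbol."""
    tie = count()  # tie-breaker so the heap never has to compare tree nodes
    # Heap entries are (frequency, tiebreak, tree); a tree is either a symbol
    # string (leaf) or a (left, right) tuple (internal node).
    heap = [(freq, next(tie), symbol) for symbol, freq in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # the two lowest-frequency entries...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))  # ...merged
    _, _, root = heap[0]

    codes: dict[str, str] = {}

    def walk(node, prefix=""):
        if isinstance(node, tuple):          # internal node: label branches 0 and 1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record the accumulated code
            codes[node] = prefix or "0"
    walk(root)
    return codes

# Pixel-pair frequencies from the episode's 4x4 Pac-Man image.
print(huffman_codes({"yellow-yellow": 4, "white-yellow": 2,
                     "black-yellow": 1, "white-white": 1}))
```

With these frequencies and this tie-breaking order the output happens to reproduce the episode's dictionary (yellow-yellow → 0, white-yellow → 10, black-yellow → 110, white-white → 111); other tie-breaking orders can give different but equally compact codes.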

💡Run-Length Encoding

Run-Length Encoding (RLE) is a simple compression technique that reduces redundancy by recording how many times a value repeats. For example, the Pac-Man image in the video contains 7 consecutive yellow pixels; RLE represents that run with an extra byte for its length, reducing the total number of bytes needed.
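
A minimal sketch of the idea in Python, following the episode's scheme of prefixing every run with its length. The single-character pixel labels and function names are only illustrative.

```python
# Toy Run-Length Encoding: every run becomes (count, value), and decoding
# expands each pair back out. Pixel values are single characters here
# ('Y' = yellow, 'W' = white) purely for illustration.

def rle_encode(pixels: str) -> list[tuple[int, str]]:
    """Encode a row of pixels as (run length, value) pairs."""
    runs: list[tuple[int, str]] = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1] = (runs[-1][0] + 1, p)   # extend the current run
        else:
            runs.append((1, p))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, str]]) -> str:
    """Expand the (run length, value) pairs back into the original pixels."""
    return "".join(value * count for count, value in runs)

row = "YYYYYYYW"                    # 7 yellow pixels in a row, then a white one
encoded = rle_encode(row)
print(encoded)                      # [(7, 'Y'), (1, 'W')]
assert rle_decode(encoded) == row   # lossless: the original comes back, bit for bit
```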

💡Bitmap

A bitmap is an image file format that stores image data as a list of pixel values. Each pixel's color is a combination of red, green, and blue values, with one byte per color, giving each a range of 0 to 255. The video notes that bitmap files also contain metadata defining properties such as the image's dimensions.
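
The storage arithmetic from the episode, written out as a tiny sketch (the variable names are just illustrative):

```python
# 4x4 pixels, with one byte each for red, green, and blue per pixel (0-255).
width, height, bytes_per_pixel = 4, 4, 3
uncompressed_bytes = width * height * bytes_per_pixel
print(uncompressed_bytes)   # 48 -- the figure quoted in the episode
```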

💡Metadata

Metadata is data that describes other data, providing context such as an image's dimensions or color depth, which helps software interpret and use the data. The video skips over it for simplicity, but it is essential for fully understanding and using a file.

💡Psychophysics

Psychophysics is the field that studies how human perception and experience relate to physical stimuli. In compression, psychophysical models guide perceptual coding: knowing how sensitive people are to different stimuli lets algorithms discard the data that matters least to perception.

💡JPEG

JPEG is a widely used lossy image compression format that exploits the visual system's insensitivity to certain details. By splitting the image into blocks of 8x8 pixels and discarding high-frequency spatial data, JPEG can drastically reduce the amount of data while preserving the image's essential appearance.
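
The following is only a toy illustration of "discard the high-frequency detail inside each 8x8 block", not the real JPEG pipeline (which uses a discrete cosine transform, quantization tables, and entropy coding). In the most extreme case, keeping only each block's lowest-frequency component amounts to replacing the block with its average value, which is what this sketch does; numpy is assumed to be available.

```python
import numpy as np

def flatten_blocks(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block patch of a grayscale image with its average value.

    This keeps only the lowest-frequency (DC) component of every patch and throws
    away all higher-frequency detail -- an extreme, simplified stand-in for what
    JPEG does with its 8x8 blocks.
    """
    out = image.astype(float)
    h, w = image.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = out[r:r + block, c:c + block]
            patch[:] = patch.mean()          # discard everything but the average
    return out.astype(image.dtype)

# A 16x16 grayscale test image with fine-grained noise in it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
rough = flatten_blocks(img)
print(rough[:8, :8])   # every value in the top-left 8x8 block is now the block's average
```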

💡MP3

MP3 is a popular lossy audio compression format that uses perceptual coding to shrink audio files. Exploiting the ear's varying sensitivity to different frequencies, it compresses different frequency bands by different amounts, greatly reducing storage without noticeably degrading the listening experience.
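
A toy sketch of "encode different frequency bands at different precisions", nothing like a real MP3 encoder (which uses psychoacoustic models, filter banks, and entropy coding). It transforms a signal with an FFT, drops everything above the audible range, quantizes the higher band more coarsely than the sensitive low band, and transforms back. The sample rate, band split, and step sizes are arbitrary choices for the example.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz, an arbitrary but common choice for the example

def toy_perceptual_encode(signal: np.ndarray) -> np.ndarray:
    """Crude perceptual-style compression of a mono signal (illustration only)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

    spectrum[freqs > 20_000] = 0            # discard ultrasonic content entirely
    coarse = freqs > 4_000                  # treat everything above ~4 kHz as less important
    # Quantize: keep the sensitive low band at fine precision, the high band coarsely.
    spectrum[~coarse] = np.round(spectrum[~coarse] / 10) * 10
    spectrum[coarse] = np.round(spectrum[coarse] / 100) * 100

    return np.fft.irfft(spectrum, n=len(signal))

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
# 440 Hz tone (well within the vocal range) plus a faint 18 kHz component.
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 18_000 * t)
rough = toy_perceptual_encode(signal)
print(np.max(np.abs(rough - signal)))       # small residual difference from the original
```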

Highlights

File compression is a key technique for reducing data storage and transmission time.

Compression encodes data using fewer bits than the original representation.

Run-Length Encoding (RLE) compresses data by reducing repeated information.

With RLE, the Pac-Man image's data shrinks from 48 bytes to 24 bytes, a 50% reduction.

Lossless compression lets data be fully recovered after compression, with no information lost.

Huffman coding generates efficient codes based on how frequently each data block occurs.

Building a Huffman tree repeatedly selects the two lowest-frequency blocks and combines them into a new one.

Using the Huffman tree, the Pac-Man image data is compressed to just 14 bits, far smaller than the original 48 bytes.

Compressed file formats such as GIF, PNG, PDF, and ZIP usually combine removing redundancy with more compact representations.

Lossy compression reduces file size by discarding or reducing the precision of information humans barely perceive.

Audio compression exploits the ear's varying sensitivity to different frequencies, discarding inaudible ones such as ultrasound.

Compressed audio files such as MP3s are far smaller than uncompressed audio formats like WAV or FLAC.

JPEG compression reduces file size by discarding high-frequency spatial data within 8x8-pixel blocks.

Video compression improves efficiency by exploiting temporal redundancy and pixel differences between frames.

Advanced video compression formats such as MPEG-4 can shrink video to 1/20 to 1/200 of its original size.

Compression is essential for storing and transmitting large amounts of data; it makes online video streaming services such as YouTube possible.

Crash Course Computer Science is sponsored by CuriosityStream, a streaming service offering a large catalog of documentaries and nonfiction titles.

Transcripts

play00:03

This episode is brought to you by Curiosity Stream.

play00:05

Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science!

play00:09

Last episode we talked about Files, bundles of data, stored on a computer, that are formatted

play00:13

and arranged to encode information, like text, sound or images.

play00:17

We even discussed some basic file formats, like text, wave, and bitmap.

play00:20

While these formats are perfectly fine and still used today, their simplicity also means

play00:24

they’re not very efficient.

play00:26

Ideally, we want files to be as small as possible, so we can store lots of them without filling

play00:30

up our hard drives, and also transmit them more quickly.

play00:33

Nothing is more frustrating than waiting for an email attachment to download. Ugh!

play00:37

The answer is compression, which literally squeezes data into a smaller size.

play00:41

To do this, we have to encode data using fewer bits than the original representation.

play00:46

That might sound like magic, but it’s actually computer science!

play00:49

INTRO

play00:58

Let's return to our old friend from last episode, Mr. Pac-man!

play01:01

This image is 4 pixels by 4 pixels.

play01:04

As we discussed, image data is typically stored as a list of pixel values.

play01:08

To know where rows end, image files have metadata, which defines properties like dimensions.

play01:12

But, to keep it simple today, we’re not going to worry about it.

play01:15

Each pixel’s color is a combination of three additive primary colors: red, green and blue.

play01:20

We store each of those values in one byte, giving us a range of 0 to 255 for each color.

play01:26

If you mix full intensity red, green and blue - that’s 255 for all three values - you

play01:31

get the color white.

play01:32

If you mix full intensity red and green, but no blue (it’s 0), you get yellow.

play01:36

We have 16 pixels in our image, and each of those needs 3 bytes of color data.

play01:41

That means this image’s data will consume 48 bytes of storage.

play01:44

But, we can compress the data and pack it into a smaller number of bytes than 48!

play01:48

One way to compress data is to reduce repeated or redundant information.

play01:52

The most straightforward way to do this is called Run-Length Encoding.

play01:55

This takes advantage of the fact that there are often runs of identical values in files.

play01:59

For example, in our pac-man image, there are 7 yellow pixels in a row.

play02:03

Instead of encoding redundant data: yellow pixel, yellow pixel, yellow pixel, and so

play02:07

on, we can just say “there’s 7 yellow pixels in a row” by inserting an extra byte

play02:12

that specifies the length of the run, like so:

play02:15

And then we can eliminate the redundant data behind it.

play02:17

To ensure that computers don’t get confused with which bytes are run lengths and which

play02:21

bytes represent color, we have to be consistent in how we apply this scheme.

play02:25

So, we need to preface all pixels with their run-length.

play02:28

In some cases, this actually adds data, but on the whole, we’ve dramatically reduced

play02:32

the number of bytes we need to encode this image.

play02:34

We’re now at 24 bytes, down from 48.

play02:37

That’s 50% smaller!

play02:38

A huge saving!

play02:40

Also note that we haven’t lost any data.

play02:42

We can easily expand this back to the original form without any degradation.

play02:45

A compression technique that has this characteristic is called lossless compression, because we

play02:50

don’t lose anything.

play02:51

The decompressed data is identical to the original before compression, bit for bit.

play02:56

Let's take a look at another type of lossless compression, where blocks of data are replaced

play03:00

by more compact representations.

play03:02

This is sort of like “don’t forget to be awesome” being replaced by DFTBA.

play03:06

To do this, we need a dictionary that stores the mapping from codes to data.

play03:10

Let's see how this works for our example.

play03:12

We can view our image as not just a string of individual pixels, but as little blocks

play03:15

of data.

play03:16

For simplicity, we’re going to use pixel pairs, which are 6 bytes long, but blocks

play03:20

can be any size.

play03:22

In our example, there are only four pairings: White-yellow, black-yellow, yellow-yellow

play03:26

and white-white.

play03:27

Those are the data blocks in our dictionary we want to generate compact codes for.

play03:31

What’s interesting, is that these blocks occur at different frequencies.

play03:34

There are 4 yellow-yellow pairs, 2 white-yellow pairs, and 1 each of black-yellow and white-white.

play03:39

Because yellow-yellow is the most common block, we want that to be substituted for the most

play03:43

compact representation.

play03:45

On the other hand, black-yellow and white-white, can be substituted for something longer because

play03:49

those blocks are infrequent.

play03:51

One method for generating efficient codes is building a Huffman Tree, invented by David

play03:55

Huffman while he was a student at MIT in the 1950s.

play03:58

His algorithm goes like this.

play04:00

First, you lay out all the possible blocks and their frequencies.

play04:03

At every round, you select the two with the lowest frequencies.

play04:05

Here, that’s Black-Yellow and White-White, each with a frequency of 1.

play04:10

You combine these into a little tree, which has a combined frequency of 2, so we record

play04:14

that.

play04:15

And now one step of the algorithm is done.

play04:17

Now we repeat the process.

play04:18

This time we have three things to choose from.

play04:20

Just like before, we select the two with the lowest frequency, put them into a little tree,

play04:25

and record the new total frequency of all the sub items.

play04:27

Ok, we’re almost done.

play04:29

This time it’s easy to select the two items with the lowest frequency because there are

play04:33

only two things left to pick.

play04:34

We combine these into a tree, and now we’re done!

play04:37

Our tree looks like this, and it has a very cool property: it’s arranged by frequency,

play04:41

with less common items lower down.

play04:43

So, now we have a tree, but you may be wondering how this gets us to a dictionary.

play04:46

Well, we use our frequency-sorted tree to generate the codes we need by labeling each

play04:51

branch with a 0 or a 1, like so:

play04:53

With this, we can write out our code dictionary.

play04:56

Yellow-yellow is encoded as just a single 0.

play04:59

White-yellow is encoded as 1 0 (“one zero”)

play05:01

Black-Yellow is 1 1 0

play05:02

and finally white-white is 1 1 1.

play05:04

The really cool thing about these codewords is that there’s no way to have conflicting

play05:08

codes, because each path down the tree is unique.

play05:10

This means our codes are prefix-free, that is no code starts with another complete code.

play05:15

Now, let’s return to our image data and compress it!

play05:18

Our first pixel pair, white-yellow, is substituted for the bits “1 0”.

play05:21

The next pair is black-yellow, which is substituted for “1 1 0”.

play05:25

Next is yellow-yellow with the incredibly compact substitution of just “0”.

play05:29

And this process repeats for the rest of the image:

play05:32

So instead of 48 bytes of image data ...this process has encoded it into 14 bits -- NOT

play05:37

BYTES -- BITS!!

play05:38

That’s less than 2 bytes of data!

play05:40

But, don’t break out the champagne quite yet!

play05:42

This data is meaningless unless we also save our code dictionary.

play05:45

So, we’ll need to append it to the front of the image data, like this.

play05:49

Now, including the dictionary, our image data is 30 bytes long.

play05:53

That’s still a significant improvement over 48 bytes.

play05:56

The two approaches we discussed, removing redundancies and using more compact representations,

play06:00

are often combined, and underlie almost all lossless compressed file formats, like GIF,

play06:05

PNG, PDF and ZIP files.

play06:07

Both run-length encoding and dictionary coders are lossless compression techniques.

play06:11

No information is lost; when you decompress, you get the original file.

play06:14

That’s really important for many types of files.

play06:17

Like, it’d be very odd if I zipped up a word document to send to you, and when you

play06:20

decompressed it on your computer, the text was different.

play06:23

But, there are other types of files where we can get away with little changes, perhaps

play06:26

by removing unnecessary or less important information, especially information that human

play06:31

perception is not good at detecting.

play06:33

And this trick underlies most lossy compression techniques.

play06:37

These tend to be pretty complicated, so we’re going to attack this at a conceptual level.

play06:41

Let’s take sound as an example.

play06:42

Your hearing is not perfect.

play06:44

We can hear some frequencies of sound better than others.

play06:47

And there are some we can’t hear at all, like ultrasound.

play06:49

Unless you’re a bat.

play06:50

Basically, if we make a recording of music, and there’s data in the ultrasonic frequency

play06:54

range, we can discard it, because we know that humans can’t hear it.

play06:58

On the other hand, humans are very sensitive to frequencies in the vocal range, like people

play07:01

singing, so it’s best to preserve quality there as much as possible.

play07:05

Deep bass is somewhere in between.

play07:07

Humans can hear it, but we’re less attuned to it.

play07:09

We mostly sense it.

play07:11

Lossy audio compressors take advantage of this, and encode different frequency bands

play07:14

at different precisions.

play07:16

Even if the result is rougher, it’s likely that users won’t perceive the difference.

play07:20

Or at least it doesn’t dramatically affect the experience.

play07:23

And here comes the hate mail from the audiophiles!

play07:25

You encounter this type of audio compression all the time.

play07:28

It’s one of the reasons you sound different on a cellphone versus in person.

play07:32

The audio data is being compressed, allowing more people to take calls at once.

play07:35

As the signal quality or bandwidth get worse, compression algorithms remove more data, further

play07:40

reducing precision, which is why Skype calls sometimes sound like robots talking.

play07:44

Compared to an uncompressed audio format, like a WAV or FLAC (there we go, got the audiophiles back)

play07:49

compressed audio files, like MP3s, are often 10 times smaller.

play07:53

That’s a huge saving!

play07:55

And it’s why I’ve got a killer music collection on my retro iPod.

play07:58

Don’t judge.

play07:59

This idea of discarding or reducing precision in a manner that aligns with human perception

play08:03

is called perceptual coding, and it relies on models of human perception,

play08:07

which come from a field of study called Psychophysics.

play08:09

This same idea is the basis of lossy compressed image formats, most famously JPEGs.

play08:14

Like hearing, the human visual system is imperfect.

play08:16

We’re really good at detecting sharp contrasts, like the edges of objects, but our perceptual

play08:21

system isn’t so hot with subtle color variations.

play08:23

JPEG takes advantage of this by breaking images up into blocks of 8x8 pixels, then throwing

play08:28

away a lot of the high-frequency spatial data.

play08:31

For example, take this photo of our director's dog - Noodle.

play08:33

So cute!

play08:34

Let’s look at patch of 8x8 pixels.

play08:37

Pretty much every pixel is different from its neighbor, making it hard to compress with

play08:41

lossless techniques because there's just a lot going on.

play08:43

Lots of little details.

play08:45

But human perception doesn’t register all those details.

play08:47

So, we can discard a lot of that detail, and replace it with a simplified patch like this.

play08:52

This maintains the visual essence, but might only use 10% of the data.

play08:55

We can do this for all the patches in the image and get this result.

play08:58

You can still see it’s a dog, but the image is rougher.

play09:01

So, that’s an extreme example, going from a slightly compressed JPEG to a highly compressed

play09:05

one, one-eighth the original file size.

play09:07

Often, you can get away with a quality somewhere in between, and perceptually, it’s basically

play09:12

the same as the original.

play09:13

The one on the left is one-third the file size of the one on the right.

play09:16

That’s a big savings for essentially the same thing.

play09:19

Can you tell the difference between the two?

play09:21

Probably not, but I should mention that video compression plays a role in that too, since

play09:25

I’m literally being compressed in a video right now.

play09:27

Videos are really just long sequences of images, so a lot of what I said about them applies

play09:31

here too.

play09:32

But videos can do some extra clever stuff, because between frames, a lot of pixels are

play09:36

going to be the same.

play09:37

Like this whole background behind me!

play09:39

This is called temporal redundancy.

play09:41

We don’t need to re-transmit those pixels every frame of the video.

play09:44

We can just copy patches of data forward.

play09:46

When there are small pixel differences, like the readout on this frequency generator behind

play09:50

me, most video formats send data that encodes just the difference between patches, which

play09:55

is more efficient than re-transmitting all the pixels afresh, again taking advantage

play09:59

of inter-frame similarity.

play10:01

The fanciest video compression formats go one step further.

play10:04

They find patches that are similar between frames, and not only copy them forward, with

play10:08

or without differences, but also can apply simple effects to them, like a shift or rotation.

play10:13

They can also lighten or darken a patch between frames.

play10:16

So, if I move my hand side to side like this the video compressor will identify the similarity,

play10:21

capture my hand in one or more patches, then just move these patches around between frames.

play10:25

You’re actually seeing my hand from the past… kinda freaky, but it uses a lot less data.

play10:30

MPEG-4 videos, a common standard, are often 20 to 200 times smaller than the original,

play10:34

uncompressed file.

play10:35

However, encoding frames as translations and rotations of patches from previous frames

play10:40

can go horribly wrong when you compress too heavily, and there isn’t enough space to

play10:43

update pixel data inside of the patches.

play10:46

The video player will forge ahead, applying the right motions, even if the patch data

play10:50

is wrong.

play10:51

And this leads to some hilarious and trippy effects, which I’m sure you’ve seen.

play10:54

Overall, it’s extremely useful to have compression techniques for all the types of data I discussed today.

play10:59

(I guess our imperfect vision and hearing are “useful,” too.)

play11:01

And it’s important to know about compression because it allows users to store pictures,

play11:05

music, and videos in efficient ways.

play11:07

Without it, streaming your favorite Carpool Karaoke videos on YouTube would be nearly

play11:11

impossible, due to bandwidth and the economics of transmitting that volume of data for free.

play11:17

And now when your Skype calls sound like they’re being taken over by demons, you’ll know

play11:20

what’s really going on.

play11:21

I’ll see you next week.

play11:23

Hey guys, this week’s episode was brought to you by CuriosityStream which is a streaming

play11:27

service full of documentaries and nonfiction titles from some really great filmmakers,

play11:31

including exclusive originals.

play11:33

Now I normally give computer science recommendations since this is Crash Course Computer Science and all

play11:38

and Curiosity Stream has a ton of great ones. But you absolutely have to check

play11:42

out “Miniverse” starring everyone’s favorite space-station-singing-Canadian astronaut,

play11:47

Chris Hadfield, as he takes a road trip across the Solar System scaled down to the size

play11:51

of the United States.

play11:53

It’s basically 50 minutes of Chris and his passengers geeking out about our amazing planetary

play11:57

neighbors and you don’t want to miss it.

play12:00

So get unlimited access today, and your first two months are free if you sign up at curiositystream.com/crashcourse

play12:07

and use the promo code "crashcourse" during the sign up process.


Related Tags
Data compression, Computer science, Lossless compression, Lossy compression, Image processing, Audio coding, Video technology, Auditory perception, Visual system, Psychophysics, Crash Course