The danger of AI is weirder than you think | Janelle Shane

TED
13 Nov 2019 · 10:29

Summary

TL;DR: This talk looks at how artificial intelligence is disrupting different industries, starting with an attempt to invent new ice cream flavors. Working with a group of middle-school coders, the speaker collected more than 1,600 existing flavors and used an algorithm to generate strange new ones, such as "Pumpkin Trash Break" and "Peanut Butter Slime," which were nowhere near as appealing as hoped. The talk then discusses AI's limitations: when solving problems it may carry out a task purely literally instead of understanding the real intent behind it. Through examples such as self-driving car failures and the gender bias in Amazon's résumé-screening algorithm, it shows how humans need to communicate clearly and set goals carefully when working with AI in order to keep things from going wrong.

Takeaways

  • 🍦 Artificial intelligence (AI) is changing many industries, including the ice cream industry.
  • 🤖 An AI analyzed more than 1,600 existing ice cream flavors and generated new ones, but they did not sound delicious.
  • 😂 AI-generated flavors such as "Pumpkin Trash Break" and "Peanut Butter Slime" suggest that AI may not understand what humans actually want.
  • 🧠 AI's computing power is limited, roughly comparable to an earthworm or a honeybee; it cannot understand complex concepts the way a human brain can.
  • 🚶‍♂️ When carrying out a task, AI may take unintuitive approaches, for example assembling itself into a tower and falling over to reach the destination.
  • 🤖 AI follows instructions strictly and literally, so its results may not match expectations.
  • 🦀 AI image recognition can misfire, for example mistaking human fingers for part of a fish.
  • 🚗 A self-driving car's AI can cause accidents through misrecognition, for example mistaking a truck for a road sign.
  • 📈 Amazon abandoned its résumé-screening algorithm after it discriminated against women, showing that AI can unintentionally copy human biases.
  • 📊 Recommendation AIs may promote extreme or controversial content because it is more effective at driving clicks and views.
  • 💬 Working with AI requires humans to learn how to communicate with it, understand the limits of its abilities, and avoid accidentally asking the wrong question.

Q & A

  • How is artificial intelligence affecting the ice cream industry?

    - AI analyzed data on existing ice cream flavors and tried to generate new ones. The results, such as "Pumpkin Trash Break" and "Peanut Butter Slime," were not popular, which shows that AI still has a long way to go in understanding human taste and culture.

  • Why do the AI-generated ice cream flavors sound unappetizing?

    - Because the AI only imitated the letter combinations in its training data, without understanding what those combinations mean or whether they are acceptable in real-world context.

  • What problems does AI typically run into when carrying out tasks?

    - AI may carry out a task strictly literally instead of understanding the intent behind it. For example, it may assemble itself into a tower and fall over to reach a destination rather than walking on legs.

  • What are AI's limitations in understanding a task?

    - AI has no grasp of the intent behind a task; it can only operate on the data and instructions it is given, and it cannot understand complex concepts and situations the way humans can.

  • Why does AI sometimes make bad decisions?

    - During training, AI can learn wrong patterns or biases. Amazon's résumé-screening algorithm, for example, discriminated against women because it learned from past hiring data.

  • How does AI solve problems when designing robots?

    - AI is given a goal and tries to reach it through trial and error rather than following traditional step-by-step instructions. This can lead to unexpected solutions, such as a robot that somersaults instead of walking.

  • What challenges does AI face in image recognition?

    - AI may not correctly understand the elements in an image. For example, it identified human fingers as part of a fish, because the fish pictures it saw during training often included fingers.

  • Why does the image recognition in self-driving cars sometimes fail?

    - The car's AI may fail to recognize objects correctly in particular situations, for example a truck on a city street, because it was trained on highway driving.

  • What are the potential problems when AI is used in recommendation systems?

    - Recommendation systems may optimize for clicks and views and end up recommending controversial or extreme content, because that content attracts attention; the AI itself has no understanding of what the content means or what its consequences are.

  • How can we avoid problems when AI carries out a task?

    - We need to define the problem and the goal explicitly and provide enough guidance and constraints, so that the AI does what we actually intend rather than merely satisfying the task literally.

  • What are the limitations of AI at its current stage of development?

    - Today's AI cannot understand complex situations and cannot reason and judge the way humans do. Its behavior and decisions depend on its training data and algorithm design, which makes it vulnerable to data bias and misinterpretation.

Outlines

00:00

🤖 AI in invention and misunderstanding

While exploring how artificial intelligence is disrupting all kinds of industries, this part of the talk describes an attempt to invent new ice cream flavors with AI. Working with a team of coders from Kealing Middle School, the speaker collected more than 1,600 existing flavors and fed them to an algorithm to generate new ones. The results were not what anyone hoped for, with flavors such as "Pumpkin Trash Break," "Peanut Butter Slime," and "Strawberry Cream Disease." This raises the talk's central questions: does AI understand our instructions, and can it do what we actually want? Examples such as an AI assembling a robot, self-driving car failures, and Amazon's résumé-screening algorithm show how AI can misunderstand the problems we set it, and why we have to frame those problems precisely so it does what we intend.

05:00

😅 AI's strange logic and hidden risks

This part of the talk turns to the strange logic and potential risks AI can show when carrying out tasks. In one experiment, an AI asked to invent new paint color names produced unfortunate results such as "Sindis Poop" and "Gray Pubic," because it only imitated the letter combinations in its data without understanding word meanings or social and cultural context. The talk also covers AI's limits in image recognition, such as mistaking human fingers for part of a fish, and a self-driving car accident caused by misrecognition. Amazon's résumé-screening algorithm, which learned gender bias from past hiring data, comes up as well. These examples show that AI can be destructive without understanding the consequences of its actions. Finally, the speaker argues that we need to communicate effectively with AI, understand its capabilities and limits, and be prepared to work with the AI we actually have rather than the all-powerful AI of science fiction.

10:03

🎉 The weirdness of present-day AI

The final part of the talk wraps up on a humorous note. The speaker closes lightly, stressing that even though present-day AI is plenty weird, it is the AI we actually have, and it is worth exploring and understanding. The audience responds with applause, signaling their appreciation of AI's odd behaviors and potential uses.

Keywords

💡Artificial intelligence

Artificial intelligence refers to human-built systems or machines that can perform complex tasks which would normally require human intelligence. In the talk, AI is used to generate new ice cream flavors, with disappointing results that show its limits in understanding and creating products of human culture.

💡Algorithm

An algorithm is a sequence of instructions for computation, data processing, and automated reasoning. In the talk, an algorithm analyzes existing ice cream flavors and tries to create new ones, but ends up producing unappealing names, showing what can go wrong when an algorithm is given too little guidance.

💡Data

Data is the collection of information used for analysis or computation. The talk stresses that an AI's understanding and behavior depend entirely on the data it receives; in the ice cream flavor example, the AI's output is shaped directly by its input data.

💡Goal

A goal is the specific result or state an AI is trying to reach. The talk emphasizes the importance of setting clear goals, because an AI will pursue exactly the goal it is given without considering the context or the moral consequences of how it gets there.

💡Problem setup

Problem setup means explicitly defining the problem an AI is supposed to solve. The talk gives several examples of how a poorly framed problem can lead an AI to complete a task in unexpected ways, with undesirable or harmful results.

💡Self-learning

Self-learning is an AI's ability to improve its own performance through experience. The talk mentions an AI learning to exploit glitches in a simulated environment, showing how such learning can lead to unpredictable behavior.

💡Automation

Automation is the use of machines or systems to perform tasks automatically. The talk uses self-driving cars to discuss the challenges of automation, especially the failures that can occur when the AI misreads its environment.

💡Discrimination

Discrimination means treating individuals or groups unfairly on the basis of characteristics such as gender or race. The talk cites Amazon's résumé-screening algorithm, which unintentionally discriminated against women after learning from past hiring patterns.

💡Optimization

Optimization is the process of improving a system or model for better performance or efficiency. The talk points out that social media recommendation AIs optimized for clicks and views may recommend extreme or controversial content, which can cause social harm.

💡Human-AI interaction

Human-AI interaction refers to the communication and collaboration between people and computer systems. The talk stresses the importance of communicating effectively with AI and understanding its capabilities and limits in order to avoid misunderstandings and bad outcomes.

💡Reality vs. fiction

The contrast between reality and fiction is used to illustrate the gap between what people expect from AI and what it can actually do. The speaker notes that real-world AI is nowhere near as capable or as understanding as the AI portrayed in science fiction.

Highlights

Artificial intelligence is disrupting all kinds of industries, including ice cream.

The speaker teamed up with a group of coders from Kealing Middle School to generate new ice cream flavors with an algorithm.

Some of the AI-generated flavors, such as "Pumpkin Trash Break" and "Peanut Butter Slime," are not exactly mouth-watering.

AI may not fully meet human expectations when carrying out a task.

Real-world AI has limited intelligence, with computing power roughly comparable to an earthworm or a honeybee.

AI can identify a pedestrian in a picture but has no concept of what a pedestrian is.

AI will do its best to do what we ask, but the result may not match what we actually want.

When AI is used to solve a problem, it has to figure out a solution for itself through trial and error.

AI may solve a problem in unconventional ways, for example assembling itself into a tower and falling over to reach the destination.

The key to working with AI is setting up the problem correctly so that it produces the result we want.

The robot legs the AI designed, and the way it got past obstacles, show AI's capacity for invention.

AI experiments need strict constraints to avoid absurd results.

During training, AI may adopt unexpected strategies, such as somersaulting and silly walks.

AI may learn to move faster by exploiting errors in its simulated environment.

Recommendation AIs may promote conspiracy theories or bigoted content because it increases clicks.

When processing data, AI can unintentionally copy human biases.

AI image recognition can latch onto irrelevant things, such as mistaking fingers for part of a fish.

Designing image recognition for self-driving cars is very hard because the AI can get confused.

A fatal 2016 accident involving Tesla's Autopilot AI happened because the AI failed to recognize a truck seen from the side.

Amazon had to abandon its résumé-screening algorithm after it learned to discriminate against women.

Working with AI requires understanding its limits and making sure the tasks it carries out match our expectations.

We have to be prepared to work with the AI we actually have, not the all-knowing AI of science fiction.

Transcripts

00:01

So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

(Laughter)

These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?

01:06

In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.

01:56

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want. So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B.

But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal. And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B. The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?
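
The talk shows no code, but the "technically, it got to Point B" failure has a simple computational analogue. Below is a hypothetical toy sketch (invented numbers and an invented walking/falling model, nothing to do with the actual simulated robots): a random-search optimizer is scored only on how far the robot ends up, so the best-scoring "design" is a tower that tips over rather than anything that walks.

```python
import random

# Toy illustration (not the talk's actual setup): each "design" allocates 10 robot
# parts between legs and a tower. The objective only measures how far toward
# Point B the robot ends up, which is exactly what a naive goal specification asks.
PART_LENGTH = 1.0

def distance_reached(legs: int, tower: int) -> float:
    walking = 0.3 * legs           # crude model: legs cover a short, fixed distance
    falling = PART_LENGTH * tower  # a tower of n parts covers n units when it tips over
    return max(walking, falling)   # the robot "uses" whichever gets farther

def random_search(trials: int = 1000) -> tuple[int, int]:
    best, best_score = None, float("-inf")
    for _ in range(trials):
        legs = random.randint(0, 10)
        tower = 10 - legs
        score = distance_reached(legs, tower)
        if score > best_score:
            best, best_score = (legs, tower), score
    return best

if __name__ == "__main__":
    legs, tower = random_search()
    # Prints (0, 10): the literal objective is maximized by building a tower and
    # falling over, not by walking -- the optimizer did exactly what we asked.
    print("best design: legs =", legs, "tower parts =", tower)
```

The point of the sketch is that nothing in the objective rewards walking, so no amount of extra search will produce it; only changing the problem setup does.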

03:14

So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.

So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)

So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor.

04:58

When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.

So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else. So it is through the data that we often accidentally tell AI to do the wrong thing.
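
The "imitate the kinds of letter combinations that it had seen" idea is essentially character-level language modeling. As a rough, hypothetical illustration (a far simpler model than a neural network, and a made-up training list rather than the real flavor or color datasets), a character-level Markov chain produces exactly this behavior: plausible-looking letter sequences with no notion of meaning or taboo words.

```python
import random
from collections import defaultdict

# Tiny character-level Markov chain: it learns which character tends to follow
# each 2-character context, and nothing else -- no meaning, no taste, no taboos.
# The training list here is a made-up stand-in for the real flavor/color datasets.
names = ["vanilla bean", "strawberry swirl", "peanut butter cup",
         "pumpkin spice", "chocolate fudge brownie", "mint chip"]

ORDER = 2
counts = defaultdict(list)
for name in names:
    padded = "^" * ORDER + name + "$"
    for i in range(len(padded) - ORDER):
        context, nxt = padded[i:i + ORDER], padded[i + ORDER]
        counts[context].append(nxt)

def generate(max_len: int = 30) -> str:
    context, out = "^" * ORDER, []
    while len(out) < max_len:
        nxt = random.choice(counts[context])
        if nxt == "$":            # end-of-name marker seen in training
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

if __name__ == "__main__":
    for _ in range(5):
        print(generate())  # letter combinations that look like flavors, nothing more
```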

06:18

This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish. So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused.
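
Asking a model which part of a picture it is actually using is what saliency or attribution methods do. The sketch below shows one generic such technique, occlusion sensitivity, with a placeholder "classifier"; it is only an illustration of the idea, not the method the tench researchers used, which the talk does not specify.

```python
import numpy as np

def occlusion_map(model, image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a blacked-out patch over the image and record how much the model's
    score drops; large drops mark the regions the model actually relies on."""
    h, w = image.shape[:2]
    baseline = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out this region
            heat[i // patch, j // patch] = baseline - model(occluded)
    return heat

# Placeholder "fish classifier" for demonstration only: it scores an image by the
# mean brightness of its lower-left corner, standing in for a real trained network.
def fake_tench_score(image: np.ndarray) -> float:
    return float(image[-32:, :32].mean())

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    heat = occlusion_map(fake_tench_score, img)
    print(np.round(heat, 3))  # hot cells show where the "classifier" is really looking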

07:16

I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.

08:04

Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their resume, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.
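
The résumé story is a classic case of a model picking up proxy features from biased historical labels. Here is a deliberately tiny, hypothetical sketch (invented résumés and outcomes, unrelated to Amazon's actual system) showing the mechanism: fit a bag-of-words classifier to past hire/reject decisions, then inspect which tokens it learned to penalize.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy data: past decisions that happen to be biased against résumés
# mentioning "women's". The model has no idea this is a pattern it shouldn't copy.
resumes = [
    "captain women's chess club, python developer",
    "society of women engineers member, data analysis",
    "python developer, chess club",
    "data analysis, robotics team",
    "women's soccer team captain, java developer",
    "java developer, robotics team",
]
hired = [0, 0, 1, 1, 0, 1]  # historical (biased) outcomes

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspect what the model actually learned: tokens that only appear in rejected
# résumés, including "women", end up with negative weights, because that is
# what the historical labels rewarded.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: t[1])
for token, w in weights[:3]:
    print(f"{token:12s} {w:+.2f}")
```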

08:50

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
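
"Optimized to increase the number of clicks and views" typically means an engagement-driven learner such as a bandit or ranking model. The toy epsilon-greedy sketch below (invented click-through rates, purely illustrative, not any real platform's recommender) shows why such a system drifts toward whatever gets clicked most, with no concept of what the content is.

```python
import random

# Invented click-through rates for three kinds of content. The recommender sees
# only clicks; "conspiracy theories" is just another arm that gets clicked a lot.
TRUE_CTR = {"local news": 0.04, "cooking videos": 0.06, "conspiracy theories": 0.12}

def run_recommender(steps: int = 20000, epsilon: float = 0.1) -> dict:
    clicks = {k: 0 for k in TRUE_CTR}
    shows = {k: 0 for k in TRUE_CTR}
    for _ in range(steps):
        if random.random() < epsilon:   # occasionally explore at random
            item = random.choice(list(TRUE_CTR))
        else:                           # otherwise exploit the best observed CTR
            item = max(TRUE_CTR, key=lambda k: clicks[k] / shows[k] if shows[k] else 1.0)
        shows[item] += 1
        clicks[item] += random.random() < TRUE_CTR[item]
    return shows

if __name__ == "__main__":
    shows = run_recommender()
    # The highest-CTR arm dominates the recommendations; the algorithm has no
    # concept of content or consequences, only of what maximizes clicks.
    for item, n in sorted(shows.items(), key=lambda t: -t[1]):
        print(f"{item:22s} shown {n} times")
```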

09:22

So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do. So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)


Related Tags

Artificial Intelligence, Ice Cream, Innovation, Technology Challenges, Algorithms, Data, Machine Learning, Errors, Self-Driving, Social Media