Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED

TED
28 Apr 2023 · 16:03

Summary

TLDR: This transcript discusses the power and the limits of artificial intelligence (AI) from a philosophical starting point. AI is powerful enough to beat a world-class Go champion and to pass college admission tests and the bar exam, yet it still makes small, silly mistakes. The talk raises societal challenges around AI safety, environmental impact, and how AI should be taught, and it argues that overcoming AI's lack of robust common sense will require innovation in algorithms and data. It charts a path toward a future of sustainable, humanistic AI.

Takeaways

  • 🧠 AI is called "an undeniably powerful tool": it has beaten the world-class Go champion, aced college admission tests, and even passed the bar exam.
  • 🔍 AI today is likened to Goliath: the recent models are speculated to be trained on tens of thousands of GPUs and a trillion words.
  • 💡 These models appear to demonstrate sparks of AGI, yet they often make small, silly mistakes, which exposes AI's limits.
  • 💰 Training extreme-scale AI is so expensive that only a few tech companies can afford it, leading to a concentration of power.
  • 🌳 Training these models also carries a massive carbon footprint and environmental impact.
  • 🤔 AI's lack of robust common sense raises the question of whether it can be truly safe for humanity.
  • 🏫 The speaker, who works at a university and a nonprofit research institute, cannot afford a massive GPU farm, yet argues there is much that can and should be done to make AI sustainable and humanistic.
  • 🌐 AI should be made smaller to democratize it, and made safer by teaching it human norms and values.
  • 📚 Drawing on "The Art of War," the talk's three lessons are: know your enemy, choose your battles, and innovate your weapons.
  • 🔮 Teaching AI common sense is still a moonshot, and it is doubtful that it can be reached simply by scaling up.
  • 🔄 By design, today's language models may not be the best suited to serve as reliable knowledge models, so the speaker is exploring new learning algorithms.
  • 🌐 To teach AI common sense, the team is building commonsense knowledge graphs and moral norm repositories as fully open, transparent data.

Q & A

  • What is the quote from Voltaire, the 18th-century Enlightenment philosopher?

    - The transcript opens with Voltaire's line: "Common sense is not so common."

  • What are the three societal-level challenges AI poses?

    - The transcript names three societal-level challenges: the steep cost of training AI, the concentration of power among a few tech companies, and the massive carbon footprint and environmental impact of these models.

  • What is the concern about AI safety?

    - The concern is that researchers in the larger community lack the means to truly inspect and dissect these models.

  • What "intellectual questions" does the transcript raise?

    - Two questions are raised: "Can AI, without robust common sense, be truly safe for humanity?" and "Is brute-force scale really the only way, and even the correct way, to teach AI?"

  • What does the speaker say about her profession?

    - The speaker has been a computer scientist for 20 years and works on artificial intelligence.

  • What three improvements to AI does the speaker advocate?

    - Making AI smaller to democratize it, making it safer by teaching it human common sense and values, and making it more sustainable.

  • What are the lessons drawn from "The Art of War"?

    - Three lessons are presented: know your enemy, choose your battles, and innovate your weapons.

  • What does the "dark matter" of AI refer to?

    - The "dark matter" of language is not the visible text but the unspoken rules about how the world works, which influence the way people use and interpret language.

  • What is required for AI to be both sustainable and humanistic?

    - Teaching AI human common sense, norms, and values.

  • What is "symbolic knowledge distillation," one of the new algorithms the speaker proposes?

    - Symbolic knowledge distillation crunches a very large language model down to much smaller commonsense models using deep neural networks, while also generating a human-inspectable, symbolic commonsense knowledge representation (a rough code sketch follows this Q&A list).

  • What aspect of learning does the transcript highlight?

    - The transcript stresses that human learning is never about predicting the next word but about making sense of the world and learning how it works, and it suggests AI should perhaps be taught that way as well.
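
Below is a rough sketch of what such a distillation pipeline could look like. The model name, the prompt template, and the trivial critic are placeholder assumptions rather than details from the talk; the method from the speaker's group (symbolic knowledge distillation, West et al., 2022) prompts a far larger teacher model and filters its generations with a critic trained on human judgments.

```python
# Illustrative sketch of symbolic-knowledge-distillation-style data generation.
# The small stand-in teacher and the trivial critic are assumptions for brevity.
from transformers import pipeline

teacher = pipeline("text-generation", model="gpt2")  # stand-in teacher LM

def generate_candidates(event: str, n: int = 5) -> list[str]:
    """Prompt the teacher to verbalize commonsense inferences about an event."""
    prompt = f"{event}. As a result, PersonX"
    outputs = teacher(
        prompt,
        max_new_tokens=12,
        num_return_sequences=n,
        do_sample=True,
        pad_token_id=teacher.tokenizer.eos_token_id,
    )
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

def critic_score(statement: str) -> float:
    """Placeholder critic; the real method trains a classifier on human labels."""
    return 1.0 if statement else 0.0

# Keep only candidates the critic accepts. The surviving triples form a small
# symbolic knowledge corpus that a much smaller commonsense model can then be
# fine-tuned on.
corpus = []
event = "PersonX gives PersonY a gift"
for cand in generate_candidates(event):
    if critic_score(cand) > 0.5:
        corpus.append((event, "xEffect", cand))
print(corpus[:2])
```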

Outlines

00:00

🤖 The Philosophy of AI and Today's Challenges

This section discusses the philosophical side of artificial intelligence (AI) and its present-day challenges. Opening with a quote from the 18th-century philosopher Voltaire, it showcases AI's power, beating the world-class Go champion and passing college admission tests and the bar exam, while noting that it still makes small, silly mistakes. Because training AI is enormously expensive, there is concern that power will concentrate in a few tech companies, and the environmental impact is also an issue. The section also discusses AI's lack of common sense and ways to address it.

05:02

🧠 AI's Missing Common Sense and the Challenge of Teaching It

This section digs into AI's lack of common sense and what it means for human safety. AI tries to acquire common sense by learning from massive amounts of data, but that approach has limits: a system can master a game like Go yet struggle with basic physical common sense. Raw training data can also contain racism, sexism, and misinformation, which must be weeded out. The section further proposes ways to improve AI's learning algorithms so that it acquires the right knowledge.

10:04

🌐 AI's Sustainability and Ethical Challenges

This section focuses on sustainability and ethics: the cost and environmental impact of training AI, and the safety concerns raised by its lack of common sense. It argues that the data used to train AI should be crafted and judged by humans and that its transparency matters, and it reports ongoing research into algorithms that give AI more appropriate common sense.

15:05

🚀 The Future of AI and New Research Directions

The final section discusses the future of AI and new research directions. AI is described as almost a new intellectual species, with unique strengths and weaknesses compared to humans, that should be developed with sustainability and humanism in mind. Teaching AI common sense and ethical values is essential, and the speaker introduces research into new algorithms for that purpose, together with a call to open up training data and support diverse values.

Keywords

💡Artificial Intelligence

Artificial intelligence refers to computer systems that can think and judge in human-like ways. The video presents AI as a powerful tool that has beaten the world-class "Go" champion and passed college admission tests and the bar exam, yet because it still makes small mistakes, it falls short of full "artificial general intelligence."

💡Artificial General Intelligence (AGI)

Artificial general intelligence refers to AI that can handle diverse tasks the way human intelligence does, rather than being specialized for one task. The video says the latest AI models show "sparks" of AGI, but their small, silly mistakes mean they are not yet full AGI.

💡Large Language Models

Large language models are AI models trained on extremely large datasets to handle a wide range of language-processing tasks. The video notes that these models are approaching AGI yet still make occasional small, silly mistakes.
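
Later in the talk, the speaker observes that these models learn by "predicting which word comes next." The toy bigram counter below is a minimal, assumption-laden illustration of that objective, not how production models are built:

```python
# A toy next-word predictor. Real large language models are deep neural
# networks trained on up to a trillion words, but the training signal --
# predict the next token given its context -- has the same shape as this.
from collections import Counter, defaultdict

text = "common sense is not so common but common sense still matters".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1  # count which word follows which

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("common"))  # -> 'sense' (seen twice, vs. 'but' once)
```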

💡Societal Challenges

The video notes that training extreme-scale AI models costs an enormous amount, which concentrates power in the few companies that can afford it. It also raises the models' massive carbon footprint and environmental impact.

💡Knowledge Graph

A knowledge graph represents knowledge and information as a graph of nodes and edges. The video describes the researchers' work building commonsense knowledge graphs to teach AI basic common sense and moral norms.
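
As a minimal illustration of such a graph: the relation names below follow the ATOMIC-style conventions used in the speaker's group's resources, while the specific triples are invented for this example:

```python
# A tiny commonsense knowledge graph stored as (head event, relation, tail)
# triples; the triples themselves are illustrative, not from a real dataset.
from collections import defaultdict

triples = [
    ("PersonX drops a glass", "xEffect", "the glass shatters"),
    ("PersonX drops a glass", "xWant", "to sweep up the shards"),
    ("PersonX lies to PersonY", "oReact", "PersonY feels betrayed"),
]

# Index by head event so the graph can be queried like an adjacency list.
graph = defaultdict(list)
for head, relation, tail in triples:
    graph[head].append((relation, tail))

print(graph["PersonX drops a glass"])
# [('xEffect', 'the glass shatters'), ('xWant', 'to sweep up the shards')]
```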

💡Tacit Knowledge

Tacit knowledge refers to rules and knowledge about the world that are never put into words yet shape how people use and understand language. The video argues that it is important for language models to acquire this unspoken knowledge.

💡Neural Networks

A neural network is a computational model inspired by the neurons of the human brain. The video describes building on the advances of deep learning while searching for new algorithms.
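
For readers new to the term, here is the smallest possible sketch: a single artificial neuron. Deep networks stack many layers of such units and learn the weights from data; the numbers below are arbitrary, illustrative assumptions:

```python
# One artificial neuron: a weighted sum of inputs pushed through a
# sigmoid activation that squashes the result into (0, 1).
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))  # ~0.57
```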

💡Teaching

The video stresses the importance of teaching AI: in particular, teaching it common sense and moral norms, much as we teach people, is necessary to make AI sustainable and humanistic.

💡Innovation

The video calls for innovation in AI's data and algorithms: above all, exploring new algorithms that let AI acquire common sense more directly.

💡Ethics

The video says that for AI to be ethical, it must understand basic human values and moral norms, which it stresses are necessary for AI to act appropriately.

💡Transparency

The video argues that AI training data should be open and publicly available so that anyone can inspect the content and make corrections as needed. Transparency, it maintains, is the key for such an important research topic.

Highlights

AI today is compared to Goliath, large in scale and demonstrating sparks of AGI, yet prone to making small, silly mistakes.

The speaker, a computer scientist with 20 years of experience, aims to demystify AI.

Extreme-scale AI models, trained on tens of thousands of GPUs and a trillion words, are expensive and concentrate power among a few tech companies.

AI's lack of robust common sense raises questions about its safety for humanity.

The environmental impact and massive carbon footprint of training large AI models are highlighted as a concern.

The speaker suggests making AI smaller to democratize it and teaching it human norms and values to enhance safety.

An analogy is drawn between extreme-scale language models and Goliath, seeking inspiration from 'The Art of War'.

The importance of evaluating AI with scrutiny is emphasized, questioning its common sense despite passing professional exams.

Examples of AI's failure at basic common sense tasks are provided, illustrating the gap between intelligence and understanding.
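
For instance, the talk's drying-clothes puzzle: GPT-4 answers 30 hours by scaling drying time linearly with the number of clothes, while common sense says clothes dry in parallel. A minimal sketch of the two lines of reasoning (function names are illustrative, not from the talk):

```python
# Naive estimate: treats drying as if items were processed one after another,
# so 5 clothes in 5 hours implies "1 hour per cloth" and 30 clothes take 30 h.
def naive_linear_estimate(items: int, base_items: int = 5,
                          base_hours: float = 5.0) -> float:
    return base_hours / base_items * items

# Commonsense estimate: clothes dry side by side under the sun, so
# (space permitting) the total time does not grow with the count.
def commonsense_estimate(items: int, base_hours: float = 5.0) -> float:
    return base_hours

print(naive_linear_estimate(30))  # 30.0 -- the mistaken answer
print(commonsense_estimate(30))   # 5.0  -- the commonsense answer
```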

The speaker argues that simply adding more training data may not be the correct approach to fixing AI's common sense issues.

Common sense in AI is compared to dark matter, invisible yet crucial for understanding the world.

The potential dangers of AI lacking human values are discussed, referencing Nick Bostrom's paper clip maximizer thought experiment.

The speaker's team is working on commonsense knowledge graphs and moral norm repositories to teach AI basic norms and morals.

Innovation in data and algorithms is proposed as a way forward for AI development, moving beyond brute-force scaling.

The importance of transparency in AI training data is highlighted, advocating for open and publicly available data.

The quest for direct commonsense knowledge acquisition in AI is presented, including the development of new algorithms.

The speaker envisions a synthesis of ideas for AI that combines advancements in deep neural networks with an optimal scale.

The talk concludes with a call to teach AI common sense, norms, and values to ensure it is sustainable and humanistic.

Transcripts

play00:03

So I'm excited to share a few spicy thoughts on artificial intelligence.

play00:10

But first, let's get philosophical

play00:13

by starting with this quote by Voltaire,

play00:16

an 18th century Enlightenment philosopher,

play00:18

who said, "Common sense is not so common."

play00:21

Turns out this quote couldn't be more relevant

play00:24

to artificial intelligence today.

play00:27

Despite that, AI is an undeniably powerful tool,

play00:31

beating the world-class "Go" champion,

play00:33

acing college admission tests and even passing the bar exam.

play00:38

I’m a computer scientist of 20 years,

play00:40

and I work on artificial intelligence.

play00:43

I am here to demystify AI.

play00:46

So AI today is like a Goliath.

play00:50

It is literally very, very large.

play00:53

It is speculated that the recent ones are trained on tens of thousands of GPUs

play00:59

and a trillion words.

play01:02

Such extreme-scale AI models,

play01:04

often referred to as "large language models,"

play01:07

appear to demonstrate sparks of AGI,

play01:11

artificial general intelligence.

play01:14

Except when it makes small, silly mistakes,

play01:18

which it often does.

play01:20

Many believe that whatever mistakes AI makes today

play01:24

can be easily fixed with brute force,

play01:26

bigger scale and more resources.

play01:28

What possibly could go wrong?

play01:32

So there are three immediate challenges we face already at the societal level.

play01:37

First, extreme-scale AI models are so expensive to train,

play01:44

and only a few tech companies can afford to do so.

play01:48

So we already see the concentration of power.

play01:52

But what's worse for AI safety,

play01:55

we are now at the mercy of those few tech companies

play01:59

because researchers in the larger community

play02:02

do not have the means to truly inspect and dissect these models.

play02:08

And let's not forget their massive carbon footprint

play02:12

and the environmental impact.

play02:14

And then there are these additional intellectual questions.

play02:18

Can AI, without robust common sense, be truly safe for humanity?

play02:24

And is brute-force scale really the only way

play02:28

and even the correct way to teach AI?

play02:32

So I’m often asked these days

play02:33

whether it's even feasible to do any meaningful research

play02:36

without extreme-scale compute.

play02:38

And I work at a university and nonprofit research institute,

play02:42

so I cannot afford a massive GPU farm to create enormous language models.

play02:48

Nevertheless, I believe that there's so much we need to do

play02:53

and can do to make AI sustainable and humanistic.

play02:57

We need to make AI smaller, to democratize it.

play03:01

And we need to make AI safer by teaching human norms and values.

play03:06

Perhaps we can draw an analogy from "David and Goliath,"

play03:11

here, Goliath being the extreme-scale language models,

play03:16

and seek inspiration from an old-time classic, "The Art of War,"

play03:21

which tells us, in my interpretation,

play03:23

know your enemy, choose your battles, and innovate your weapons.

play03:28

Let's start with the first, know your enemy,

play03:30

which means we need to evaluate AI with scrutiny.

play03:35

AI is passing the bar exam.

play03:38

Does that mean that AI is robust at common sense?

play03:41

You might assume so, but you never know.

play03:44

So suppose I left five clothes to dry out in the sun,

play03:48

and it took them five hours to dry completely.

play03:51

How long would it take to dry 30 clothes?

play03:55

GPT-4, the newest, greatest AI system says 30 hours.

play03:59

Not good.

play04:01

A different one.

play04:02

I have 12-liter jug and six-liter jug,

play04:04

and I want to measure six liters.

play04:06

How do I do it?

play04:07

Just use the six liter jug, right?

play04:09

GPT-4 spits out some very elaborate nonsense.

play04:13

(Laughter)

play04:17

Step one, fill the six-liter jug,

play04:19

step two, pour the water from six to 12-liter jug,

play04:22

step three, fill the six-liter jug again,

play04:25

step four, very carefully, pour the water from six to 12-liter jug.

play04:30

And finally you have six liters of water in the six-liter jug

play04:34

that should be empty by now.

play04:36

(Laughter)

play04:37

OK, one more.

play04:39

Would I get a flat tire by bicycling over a bridge

play04:43

that is suspended over nails, screws and broken glass?

play04:48

Yes, highly likely, GPT-4 says,

play04:51

presumably because it cannot correctly reason

play04:53

that if a bridge is suspended over the broken nails and broken glass,

play04:58

then the surface of the bridge doesn't touch the sharp objects directly.

play05:02

OK, so how would you feel about an AI lawyer that aced the bar exam

play05:08

yet randomly fails at such basic common sense?

play05:12

AI today is unbelievably intelligent and then shockingly stupid.

play05:18

(Laughter)

play05:20

It is an unavoidable side effect of teaching AI through brute-force scale.

play05:26

Some scale optimists might say, “Don’t worry about this.

play05:29

All of these can be easily fixed by adding similar examples

play05:33

as yet more training data for AI."

play05:36

But the real question is this.

play05:39

Why should we even do that?

play05:40

You are able to get the correct answers right away

play05:43

without having to train yourself with similar examples.

play05:48

Children do not even read a trillion words

play05:51

to acquire such a basic level of common sense.

play05:54

So this observation leads us to the next wisdom,

play05:58

choose your battles.

play06:00

So what fundamental questions should we ask right now

play06:04

and tackle today

play06:06

in order to overcome this status quo with extreme-scale AI?

play06:11

I'll say common sense is among the top priorities.

play06:15

So common sense has been a long-standing challenge in AI.

play06:19

To explain why, let me draw an analogy to dark matter.

play06:23

So only five percent of the universe is normal matter

play06:26

that you can see and interact with,

play06:29

and the remaining 95 percent is dark matter and dark energy.

play06:34

Dark matter is completely invisible,

play06:36

but scientists speculate that it's there because it influences the visible world,

play06:40

even including the trajectory of light.

play06:43

So for language, the normal matter is the visible text,

play06:47

and the dark matter is the unspoken rules about how the world works,

play06:51

including naive physics and folk psychology,

play06:54

which influence the way people use and interpret language.

play06:58

So why is this common sense even important?

play07:02

Well, in a famous thought experiment proposed by Nick Bostrom,

play07:07

AI was asked to produce and maximize the paper clips.

play07:13

And that AI decided to kill humans to utilize them as additional resources,

play07:19

to turn you into paper clips.

play07:23

Because AI didn't have the basic human understanding about human values.

play07:29

Now, writing a better objective and equation

play07:32

that explicitly states: “Do not kill humans”

play07:35

will not work either

play07:36

because AI might go ahead and kill all the trees,

play07:40

thinking that's a perfectly OK thing to do.

play07:42

And in fact, there are endless other things

play07:44

that AI obviously shouldn’t do while maximizing paper clips,

play07:47

including: “Don’t spread the fake news,” “Don’t steal,” “Don’t lie,”

play07:51

which are all part of our common sense understanding about how the world works.

play07:55

However, the AI field for decades has considered common sense

play08:00

as a nearly impossible challenge.

play08:03

So much so that when my students and colleagues and I

play08:07

started working on it several years ago, we were very much discouraged.

play08:11

We’ve been told that it’s a research topic of ’70s and ’80s;

play08:14

shouldn’t work on it because it will never work;

play08:16

in fact, don't even say the word to be taken seriously.

play08:20

Now fast forward to this year,

play08:22

I’m hearing: “Don’t work on it because ChatGPT has almost solved it.”

play08:26

And: “Just scale things up and magic will arise,

play08:29

and nothing else matters.”

play08:31

So my position is that giving true common sense

play08:34

human-like robust common sense to AI, is still moonshot.

play08:38

And you don’t reach to the Moon

play08:40

by making the tallest building in the world one inch taller at a time.

play08:44

Extreme-scale AI models

play08:45

do acquire an ever-more increasing amount of commonsense knowledge,

play08:48

I'll give you that.

play08:50

But remember, they still stumble on such trivial problems

play08:54

that even children can do.

play08:56

So AI today is awfully inefficient.

play09:00

And what if there is an alternative path or path yet to be found?

play09:05

A path that can build on the advancements of the deep neural networks,

play09:09

but without going so extreme with the scale.

play09:12

So this leads us to our final wisdom:

play09:15

innovate your weapons.

play09:17

In the modern-day AI context,

play09:19

that means innovate your data and algorithms.

play09:22

OK, so there are, roughly speaking, three types of data

play09:24

that modern AI is trained on:

play09:26

raw web data,

play09:28

crafted examples custom developed for AI training,

play09:32

and then human judgments,

play09:34

also known as human feedback on AI performance.

play09:38

If the AI is only trained on the first type, raw web data,

play09:42

which is freely available,

play09:43

it's not good because this data is loaded with racism and sexism

play09:48

and misinformation.

play09:49

So no matter how much of it you use, garbage in and garbage out.

play09:54

So the newest, greatest AI systems

play09:57

are now powered with the second and third types of data

play10:00

that are crafted and judged by human workers.

play10:04

It's analogous to writing specialized textbooks for AI to study from

play10:09

and then hiring human tutors to give constant feedback to AI.

play10:15

These are proprietary data, by and large,

play10:17

speculated to cost tens of millions of dollars.

play10:20

We don't know what's in this,

play10:22

but it should be open and publicly available

play10:24

so that we can inspect and ensure [it supports] diverse norms and values.

play10:29

So for this reason, my teams at UW and AI2

play10:32

have been working on commonsense knowledge graphs

play10:35

as well as moral norm repositories

play10:37

to teach AI basic commonsense norms and morals.

play10:41

Our data is fully open so that anybody can inspect the content

play10:44

and make corrections as needed

play10:45

because transparency is the key for such an important research topic.

play10:50

Now let's think about learning algorithms.

play10:53

No matter how amazing large language models are,

play10:58

by design

play10:59

they may not be the best suited to serve as reliable knowledge models.

play11:04

And these language models do acquire a vast amount of knowledge,

play11:08

but they do so as a byproduct as opposed to direct learning objective.

play11:14

Resulting in unwanted side effects such as hallucinated facts

play11:18

and lack of common sense.

play11:20

Now, in contrast,

play11:22

human learning is never about predicting which word comes next,

play11:25

but it's really about making sense of the world

play11:28

and learning how the world works.

play11:29

Maybe AI should be taught that way as well.

play11:33

So as a quest toward more direct commonsense knowledge acquisition,

play11:39

my team has been investigating potential new algorithms,

play11:43

including symbolic knowledge distillation

play11:45

that can take a very large language model as shown here

play11:49

that I couldn't fit into the screen because it's too large,

play11:53

and crunch that down to much smaller commonsense models

play11:58

using deep neural networks.

play12:00

And in doing so, we also generate, algorithmically, human-inspectable,

play12:05

symbolic, commonsense knowledge representation,

play12:09

so that people can inspect and make corrections

play12:11

and even use it to train other neural commonsense models.

play12:15

More broadly,

play12:16

we have been tackling this seemingly impossible giant puzzle

play12:21

of common sense, ranging from physical,

play12:23

social and visual common sense

play12:26

to theory of minds, norms and morals.

play12:28

Each individual piece may seem quirky and incomplete,

play12:32

but when you step back,

play12:34

it's almost as if these pieces weave together into a tapestry

play12:38

that we call human experience and common sense.

play12:42

We're now entering a new era

play12:44

in which AI is almost like a new intellectual species

play12:50

with unique strengths and weaknesses compared to humans.

play12:54

In order to make this powerful AI

play12:58

sustainable and humanistic,

play13:00

we need to teach AI common sense, norms and values.

play13:04

Thank you.

play13:05

(Applause)

play13:13

Chris Anderson: Look at that.

play13:15

Yejin, please stay one sec.

play13:18

This is so interesting,

play13:19

this idea of common sense.

play13:21

We obviously all really want this from whatever's coming.

play13:25

But help me understand.

play13:27

Like, so we've had this model of a child learning.

play13:31

How does a child gain common sense

play13:34

apart from the accumulation of more input

play13:38

and some, you know, human feedback?

play13:41

What else is there?

play13:42

Yejin Choi: So fundamentally, there are several things missing,

play13:45

but one of them is, for example,

play13:47

the ability to make hypothesis and make experiments,

play13:51

interact with the world and develop this hypothesis.

play13:56

We abstract away the concepts about how the world works,

play13:59

and then that's how we truly learn,

play14:01

as opposed to today's language model.

play14:05

Some of them is really not there quite yet.

play14:09

CA: You use the analogy that we can’t get to the Moon

play14:12

by extending a building a foot at a time.

play14:14

But the experience that most of us have had

play14:16

of these language models is not a foot at a time.

play14:18

It's like, the sort of, breathtaking acceleration.

play14:21

Are you sure that given the pace at which those things are going,

play14:25

each next level seems to be bringing with it

play14:28

what feels kind of like wisdom and knowledge.

play14:32

YC: I totally agree that it's remarkable how much this scaling things up

play14:38

really enhances the performance across the board.

play14:42

So there's real learning happening

play14:44

due to the scale of the compute and data.

play14:49

However, there's a quality of learning that is still not quite there.

play14:53

And the thing is,

play14:54

we don't yet know whether we can fully get there or not

play14:58

just by scaling things up.

play15:01

And if we cannot, then there's this question of what else?

play15:05

And then even if we could,

play15:07

do we like this idea of having very, very extreme-scale AI models

play15:12

that only a few can create and own?

play15:18

CA: I mean, if OpenAI said, you know, "We're interested in your work,

play15:23

we would like you to help improve our model,"

play15:25

can you see any way of combining what you're doing

play15:28

with what they have built?

play15:30

YC: Certainly what I envision

play15:33

will need to build on the advancements of deep neural networks.

play15:37

And it might be that there’s some scale Goldilocks Zone,

play15:41

such that ...

play15:42

I'm not imagining that the smaller is the better either, by the way.

play15:46

It's likely that there's right amount of scale, but beyond that,

play15:50

the winning recipe might be something else.

play15:53

So some synthesis of ideas will be critical here.

play15:58

CA: Yejin Choi, thank you so much for your talk.

play16:00

(Applause)


Related Tags
Artificial Intelligence, Common Sense, AI Limitations, Sustainability, Human-Centered, Technological Innovation, Philosophical Perspective, Societal Challenges, Environmental Impact, Intellectual Property, Education