OpenAI CEO Sam Altman and CTO Mira Murati on the Future of AI and ChatGPT | WSJ Tech Live 2023

The Wall Street Journal
21 Oct 2023 · 49:36

Summary

TLDR: This video features a discussion about the future of artificial intelligence (AI) and artificial general intelligence (AGI). The speakers talk about how AI will change human labor and creativity, and how AGI could be used to improve the human condition. They also discuss the importance of safety and reliability, data usage and its economic implications, and the ethical dimensions of AI. The conversation covers AI's impact on society and the nature of work, and how these technologies should be deployed safely.

Takeaways

  • 😀 Asked what makes them human, the speakers point to humor and emotion as defining human qualities.
  • 🤖 Concerns were raised about the future of AI and machine intelligence, balanced by the optimistic view that humans and computers excel at very different things.
  • 🎨 Creativity has proven easier for AI than expected; models such as DALL·E and GPT-4 are already producing creative work.
  • 🚀 The goal of artificial general intelligence (AGI) and its importance for improving the human condition were explained, with abundant, inexpensive intelligence and energy named as the key ingredients.
  • 🧠 AGI was defined as a system that can generalize across many digital domains.
  • 📈 AI development is continuous, and its impact on society and the workplace is becoming steadily more apparent.
  • 🔍 Ethical considerations around data handling and the training of AI models were discussed.
  • 🤝 Ensuring safety and reliability in the development and deployment of AI technology is treated as essential.
  • 🌐 The relationship between society and AI is changing, with people forming new kinds of relationships through their interactions with AI.
  • 🎓 Both concerns and hopes were voiced about the future of work and how AI will change jobs.

Q & A

  • What is the concern about AI or machine intelligence replacing human jobs?

    -The concern is that AI or machine intelligence could replace jobs such as drivers and doctors. The optimistic view, however, is that computers and humans are good at very different things and complement each other: computers excel at analyzing massive amounts of data, while humans are still better at judgment, creativity, and empathy.

  • How has the outlook on AI changed as of 2023?

    -The earlier prediction was that AI would first excel at robotic, repetitive work and only later take on tasks requiring higher judgment. Instead, AI has made unexpected progress in creative areas: models such as DALL·E 3 and GPT-4 can generate striking images and creative stories.

  • What is AGI (artificial general intelligence)?

    -AGI is a system that can generalize across many domains at a level equivalent to human work, producing substantial productivity and economic value.

  • Why is AGI the goal?

    -AGI and abundant, cheap energy are seen as the two things that will matter most for improving the human condition over the coming decades, and AGI could be the best tool humanity has ever created.

  • What matters most for ensuring the safety and reliability of future AI technology?

    -Alongside increasing the technology's capabilities, it is essential to ensure that systems are trustworthy and robust. This includes scaling AI models and extending them to multiple modalities.

  • Can ChatGPT and other AI models become people's friends or lovers?

    -AI systems may play a significant role in many aspects of daily life and can form highly personalized relationships, but this carries a large responsibility.

  • How should society be involved in the development of artificial intelligence (AI)?

    -The view is that AI development and deployment require a society-wide discussion in which many different actors take part.

  • What concrete steps support the safe deployment of AI technology?

    -Beyond building the technology, it is important to roll out capabilities gradually through real-world use so that society can adapt, and to understand and mitigate the risks that emerge from actual deployment.

  • What are the biggest fears and hopes for the future of AI technology?

    -The biggest hope is that AI will be enormously beneficial to humanity, solving many problems and enabling new forms of creative expression. Because the technology is so powerful, there are also fears about misuse and unintended side effects.

  • What should society do in response to AI-driven changes in work?

    -To cope with job changes brought about by technological progress, it is important to provide education and retraining opportunities and to develop policies and institutions that support people through the transition.

Outlines

00:00

😊 Humanity and the Evolution of AI

This section discusses what defines a human (humor, emotion, and so on) and how advances in AI are blurring the line between humans and computers. AI already surpasses humans at certain tasks (such as data analysis), while humans are still seen as stronger in creativity and empathy. Examples of AI creativity exceeding expectations (DALL·E 3, GPT-4) are introduced, deepening the discussion about the respective roles of AI and humans.

05:04

😄 The Road to AGI

AI's development is framed as a transition toward AGI (artificial general intelligence), a stage at which AI can perform work equivalent to humans across many domains. The discussion describes how AI has moved from simple, robotic tasks toward higher judgment and creative tasks, and takes an optimistic view of what the technology could bring humanity. Abundant intelligence and cheap energy are named as the goals AGI should serve.

10:05

😃 Human-AI Interaction

The evolution of AI technology is changing the quality of everyday interactions. In particular, voices that feel human and improved personalization could make relationships between humans and AI deeper and more meaningful. This evolution carries responsibility and ethical considerations, and careful thought is needed about how AI personalization affects human relationships.

15:08

😁 The Social and Economic Impact of AI

AI development is expected to have a major impact on society and the economy, especially through job automation and changes in knowledge work. The discussion also considers how the spread of AI will change data ownership and economic flows, and emphasizes the ethical side of data use in AI development.

20:09

😆 AI and the Human Future

Expectations and concerns about AI are presented side by side. The positive changes AI can bring (greater creativity, better problem-solving) are contrasted with the risks it may cause (such as the impact of automation on employment), and the discussion stresses that society's adaptation to AI, and the future it shapes, require a careful approach.

25:10

😉 AI Ethics and Its Impact on Society

The ethical considerations and social responsibility involved in integrating AI into society are emphasized. The discussion covers technology for verifying the authenticity of AI-generated content (text and images) and how AI use should address social concerns such as deepfakes and the safety of personal information, noting that developers and society must work out solutions together.

30:10

😏 Hopes and Concerns for the Future of AI

An optimistic outlook for the future of AI is shared along with its accompanying concerns. The focus is on AI's potential to enrich human life and on the work needed to ensure the technology is used safely and ethically. The discussion also covers how society as a whole should adapt to AI and the changes it brings.


Keywords

💡AI (Artificial Intelligence)

Artificial intelligence (AI) is technology that lets computers and machines perform tasks by mimicking human intelligence. The video discusses how AI might take over roles such as drivers and doctors, how that highlights the differences between humans and computers, and AI's current limits with respect to uniquely human traits such as creativity and emotion.

💡AGI (Artificial General Intelligence)

Artificial general intelligence (AGI) is a form of AI that can handle a wide range of intellectual tasks rather than a single one. In the video, AGI is defined as a system that can work across many domains the way a human can, generating economic value and serving as a key to solving many of society's problems.

💡Creativity

Creativity is the ability to produce new ideas and solutions. The video argues that AI has proven unexpectedly capable at creative work such as image generation and storytelling, challenging earlier views of AI's limits.

💡Human-AI Interaction

Human-AI interaction concerns how people converse and collaborate with AI technology. The video suggests that AI can boost human creativity and productivity, and presents a future in which AI acts as a partner to humans.

💡Data Usage

Data usage refers to the selection and use of data for training AI models. The video discusses concerns about appropriate data use, particularly from Hollywood and the publishing industry, showing that social acceptance and ethical consideration matter in AI development.

💡Personalization

Personalization is the process of tailoring AI to each user's traits and preferences. The video discusses how AI can use information about a user to give more relevant responses, improving the user experience.

💡Safety and Reliability

AI safety and reliability are the measures that ensure AI behaves in ways that benefit people and does not cause unintended harm. The video stresses that as AI capabilities grow, it becomes ever more important that these systems can be trusted and relied on.

💡Technological Innovation

Technological innovation is the process by which new ideas and inventions advance and improve technology. The video describes AI as potentially one of the most significant inventions in human history, capable of transforming society as a whole.

💡Knowledge Generation

Knowledge generation is AI's ability to analyze new information and data and produce new knowledge. The video gives examples of AI complementing humans in problem-solving and creative tasks, emphasizing how this can extend human capability.

💡The Future of Work

The future of work concerns how jobs and ways of working will change as AI advances. The video discusses how AI may change the nature of work, creating new jobs while replacing old ones, and suggests this will have a major impact on society.

Highlights

Discussion on what makes individuals human, highlighting humor and emotion as key aspects.

The evolving relationship between AI and jobs, touching on AI's capability to perform robotic tasks and its potential to handle higher judgment tasks.

The unexpected ease with which AI has been able to tackle creative tasks, challenging previous notions of AI's limitations.

The definition and goal of AGI (Artificial General Intelligence), emphasizing its potential to generalize across many domains and produce significant economic value.

The importance of abundant and inexpensive intelligence, alongside energy, as crucial for improving the human condition over the coming decades.

AGI's role in problem-solving, creativity, and its transformative potential for humanity, despite the challenges new technologies bring.

The continuous evolution of AI, from mastering games like chess and Go to the development of models like the GPT series.

The acknowledgment that people still prefer human doctors, reflecting the nuanced relationship between AI advancements and human preference.

The challenge in defining the arrival of AGI, given the continuous nature of AI development and the evolving definition of intelligence.

Exploration of the 'median human' concept in AI, where AI systems might perform at an average human level in various tasks.

Discussion on the progress of GPT-5 and the importance of making AI systems reliable, safe, and increasingly capable.

Addressing concerns around the data used to train AI models, with a focus on ethical usage and partnerships.

The potential for AI to dramatically transform the nature of work and the importance of society's involvement in shaping this transition.

The necessity of international regulation for the most powerful AI models to ensure safety and coordination.

The role of AI in generating content and the responsibilities of AI companies and social media platforms in managing misinformation.

Transcripts

play00:00

So here's my first question for you.

play00:01

Very,

play00:02

very simple question.

play00:04

What makes you human?

play00:07

Me?

play00:07

Both of you,

play00:08

you both have to answer what makes you human?

play00:10

Oh,

play00:10

and one word you get one word,

play00:12

humor,

play00:13

humor,

play00:16

emotion.

play00:17

OK.

play00:18

Um To confirm you're both human.

play00:20

I'm gonna need you to confirm which of these boxes have a traffic light.

play00:27

I think.

play00:28

Uh I think I can do that too now.

play00:29

Ok.

play00:30

All right.

play00:30

Well,

play00:31

Sam,

play00:31

you were actually here nine years ago at our first Tech Live and I actually wanna roll the

play00:36

clip of what you said.

play00:39

Certainly the,

play00:40

the fear with,

play00:41

with AI or machine intelligence in general is that it replaces drivers or doctors or

play00:45

whatever.

play00:46

Um,

play00:46

but the optimistic view on this and certainly what

play00:51

backs up what we're seeing is that computers and humans are very good at very different things.

play00:56

So a computer doctor will out crunch the numbers and do a better job than a

play01:00

human on looking at a massive amount of data and saying this.

play01:03

But on cases that require judgment or creativity or empathy,

play01:07

we are nowhere near any computer system that is any good at this.

play01:12

Ok?

play01:13

Does 2023.

play01:15

Partially right and partially wrong.

play01:17

Ok.

play01:17

It could have been worse,

play01:18

could have been worse.

play01:19

What's your outlook?

play01:19

Now,

play01:23

people,

play01:25

I think the prevailing wisdom back then,

play01:27

was that AI was gonna um do the kind of like robotic jobs really

play01:32

well first.

play01:33

So it would have been a great robotic surgeon,

play01:35

something like that.

play01:35

Um And then maybe eventually it was gonna do the,

play01:39

the sort of like higher judgment tasks.

play01:41

Uh And then,

play01:42

you know,

play01:43

then it would kind of do the empathy and then maybe never,

play01:46

it was gonna be like a really great creative thinker and creativity has been

play01:51

in some sense.

play01:52

And at this point,

play01:52

the definition of the word creativity is up for debate,

play01:55

but creativity in,

play01:56

in some sense has been easier for A I than people thought you can,

play02:00

you know,

play02:01

see DALL·E 3 generate these like amazing images um or write these creative

play02:05

stories with GPT-4 or whatever.

play02:07

Um So that part of the answer maybe was not

play02:12

perfect.

play02:13

Uh And GPT,

play02:15

I certainly would not have predicted GPT-4 nine years ago um quite how it turned

play02:20

out.

play02:21

But a lot of the other parts about people still really want a human doctor.

play02:24

Uh That's definitely very true.

play02:27

And I wanna quickly shift to AGI,

play02:31

what is AGI,

play02:32

Mira?

play02:32

If you could just define it for everybody in the audience,

play02:36

I will say it's a system that can generalize

play02:41

across many domains that,

play02:44

you know,

play02:45

would be equivalent to human work.

play02:49

Um They produce a lot of uh productivity and economic value.

play02:54

And,

play02:54

but you know,

play02:55

we're talking about one system that can generalize across a lot of

play02:59

digital domains of human work.

play03:02

And Sam,

play03:03

why is AGI the goal,

play03:08

the,

play03:08

the two things that I think will matter most over the next decade or

play03:13

few decades um to improving the human condition,

play03:17

the most giving us sort of just more of what we want.

play03:21

Are uh abundant and inexpensive intelligence.

play03:25

Um The more powerful,

play03:26

the more general,

play03:27

the smarter,

play03:27

the better.

play03:28

Uh I think that is AGI and then,

play03:30

and then abundant and cheap energy.

play03:31

And if we can get these two things done in the world,

play03:34

then uh it's almost like difficult to imagine how much else we could

play03:39

do.

play03:40

Uh We're,

play03:40

we're big believers that you give people better tools and they do things that astonish you.

play03:44

And I think AGI will be uh the best tool humanity has yet created uh

play03:49

with it,

play03:50

we will be able to solve all sorts of problems.

play03:52

We'll be able to express ourselves in new creative ways.

play03:54

We'll make just incredible things um for each other,

play03:58

for ourselves,

play03:58

for the world,

play03:59

for,

play04:00

for kind of this unfolding human story.

play04:02

Uh And you know,

play04:04

it's new and anything new comes with change and changes,

play04:07

uh not always all easy.

play04:10

Um But I think this will be just absolutely tremendous upside and

play04:16

gonna,

play04:17

you know,

play04:18

in nine more years,

play04:20

If you're nice enough to invite me back,

play04:21

you'll roll this question and people will say,

play04:23

like,

play04:24

how could we have thought we didn't want this?

play04:26

Like,

play04:27

how,

play04:28

I guess two parts to that?

play04:29

My next question.

play04:31

When will it be here and how will we know it's here from?

play04:35

Well,

play04:36

either one of you,

play04:36

I mean,

play04:37

you can both predict how long. I think we'll call you in 10 years and we'll tell you you're wrong.

play04:41

I mean,

play04:42

yeah,

play04:42

I,

play04:42

yeah,

play04:42

probably in the next decade,

play04:43

but I would say it's a bit tricky because we,

play04:47

you know,

play04:47

when will it be here?

play04:49

Right.

play04:49

And I just kind of give you a definition but then often we talk about intelligence and you know,

play04:54

how intelligent is it or whether it's conscious and sentient and all of these terms.

play04:59

And,

play04:59

you know,

play04:59

they're not quite right because they sort of define our,

play05:04

our own intelligence and we're building something slightly different and you can kind of see

play05:08

how the definition of intelligence evolves from,

play05:12

you know,

play05:12

machines that were really great at chess and AlphaGo and now the GPT series

play05:17

and then what's next,

play05:19

but it continues to evolve and it pushes what,

play05:22

how we define intelligence.

play05:24

We,

play05:25

we,

play05:25

we kind of define AGI as like the thing we don't have quite yet.

play05:29

So we've moved I mean,

play05:30

there were a lot of people 10 years ago who would have said that if you could make something like GPT-4 or GPT-5,

play05:34

maybe that would have been an AGI and,

play05:37

and now people are like,

play05:38

well,

play05:38

you know,

play05:38

it's like a nice little chat bot or whatever.

play05:40

And I think that's wonderful.

play05:41

I think it's great that the goalposts keep getting moved.

play05:43

It makes us work harder.

play05:44

Um But I think we're getting close enough to whatever that

play05:49

AGI threshold is gonna be that we no longer get to hand wave at it and the definition is gonna

play05:54

matter. So, less than a decade for some definition.

play05:59

OK.

play06:00

All right.

play06:01

The goalpost is moving.

play06:04

Um Sam,

play06:05

you've used the word.

play06:06

Um,

play06:07

and,

play06:07

and,

play06:07

and previously,

play06:08

when describing AGI,

play06:09

the term median human,

play06:12

can you explain what that is?

play06:15

Um I,

play06:18

I think there are experts in areas that are gonna be

play06:24

better than AI systems for a long period of time.

play06:27

Um And so like,

play06:28

you know,

play06:28

you could come to like some area where I'm like,

play06:31

really an expert at some task and I,

play06:32

I'll be like,

play06:33

all right,

play06:33

you know,

play06:34

GPT-4 is doing a horrible job there.

play06:36

GPT-5, 6,

play06:37

whatever,

play06:38

doing a horrible job there.

play06:39

But you can come to other tasks where I'm ok,

play06:42

but certainly not an expert.

play06:44

I'm kind of like,

play06:45

maybe like an average of what different people in the world could do with something.

play06:49

And for that,

play06:50

uh then I might look at it and say,

play06:52

oh,

play06:52

this is actually doing pretty well.

play06:54

So what we mean by that is that in any given area,

play06:59

expert,

play07:00

humans may uh like experts in any area can

play07:05

like just do extraordinary things.

play07:06

And that may take us a while to be able to do with these,

play07:09

these systems.

play07:10

But for kind of the more average case performance.

play07:13

So,

play07:13

you know,

play07:13

me doing something that I'm like,

play07:14

not very good at.

play07:15

Anyway,

play07:16

maybe our future versions can help me with that a lot.

play07:19

So am I a median human uh at some tasks?

play07:23

I'm sure.

play07:23

And it's some clearly at this,

play07:25

you're a very expert human and no GPT is taking your job anytime soon.

play07:29

Ok.

play07:29

That makes me feel that makes me feel a little better.

play07:32

Uh Mira,

play07:33

how's GPT-5 going?

play07:37

Um We're not there yet,

play07:40

but it's kind of need to know basis.

play07:43

I'll let you know that's such a diplomatic answer.

play07:46

I'm gonna leave Mira to all of this.

play07:48

I would have no,

play07:49

I would have just said,

play07:49

oh yeah,

play07:50

here's what's happening.

play07:50

That's great.

play07:51

No,

play07:51

no,

play07:52

we're not sending him back here.

play07:53

Who paired these two,

play07:55

whose idea was this?

play07:56

Um You're working on it,

play07:58

you're training it.

play08:00

We're always working on the next thing.

play08:05

Just do a staring contest.

play08:08

That's what makes us human.

play08:11

Um All of these steps though,

play08:14

with GPT,

play08:14

right?

play08:15

Is it,

play08:15

or,

play08:15

you know,

play08:15

GPT-3, 3.5, 4 are steps towards AGI with

play08:20

each of them?

play08:21

Are you looking for a benchmark?

play08:22

Are you looking for?

play08:24

This is what we want to get to?

play08:25

Yeah.

play08:26

So,

play08:27

you know,

play08:27

before we had the product,

play08:29

we were sort of looking at academic benchmarks and how well these models were doing at

play08:34

academic benchmarks and,

play08:36

you know,

play08:37

OpenAI is known for betting on scaling,

play08:40

you know,

play08:40

throwing a ton of compute and data at these uh neural networks

play08:45

and seeing how they get better and better at predicting the next token.

play08:50

But it's not that we really care about the prediction of the next token,

play08:53

we care about the tasks in the real world to which this correlates

play08:58

to.

play08:59

And so that's actually what we started seeing once we put out um

play09:03

research in the real world and we,

play09:07

we build out products through the API, eventually through ChatGPT as well.

play09:11

And so now we actually have real world examples.

play09:15

We can see how our customers do in um specific domains,

play09:19

how it moves the needle for specific businesses.

play09:23

Um And of course,

play09:24

with GPT-4,

play09:25

we saw that it did really well in um exams like

play09:30

SAT and LSAT and so on.

play09:33

So it kind of goes to our earlier point that we're,

play09:36

you know,

play09:36

continually evolving our definition of what it means for these

play09:41

models to be more capable.

play09:43

Um But you know,

play09:45

as we increase the capability vector,

play09:48

what we really look for is reliability and safety.

play09:53

Uh these are very interweaved and it's very important to make systems that

play09:58

of course,

play09:58

are increasingly capable,

play10:00

but that you can truly rely on and they are robust and that

play10:05

you can trust the output of the system.

play10:07

So we're kind of pushing in uh both of these vectors at the same time.

play10:12

And um you know,

play10:14

as we build the next model,

play10:17

the next set of technologies,

play10:18

we're both betting continuing to bet on scaling.

play10:22

But we're also looking at,

play10:24

you know,

play10:25

this other uh element of multimodality.

play10:28

Um because we want these models to kind of perceive the world in a

play10:33

similar way to how we do and,

play10:36

you know,

play10:36

we perceive the world,

play10:37

not just in text but images and sounds and so on.

play10:41

So we want to have robust representations of the

play10:46

world um in,

play10:47

in these models. Will GPT-5 solve the

play10:51

hallucination problem?

play10:54

Well,

play10:54

I mean,

play10:55

actually,

play10:56

maybe like,

play10:57

let's see,

play10:58

um we've made a ton of progress on the hallucination issue um

play11:03

with GPT-4,

play11:05

but we're still quite uh we're not where,

play11:08

where we need to be,

play11:09

but,

play11:09

you know,

play11:10

we're sort of on the right track and it's,

play11:13

it's unknown,

play11:13

it's research,

play11:14

it,

play11:14

it could be that uh continuing in this path of reinforcement learning with human

play11:19

feedback,

play11:20

we can get all the way to really reliable outputs.

play11:24

And we're also adding other elements like retrieval and search.

play11:29

So you can um you have the ability to,

play11:32

to provide more factual answers or to get more factual outputs from the model.

play11:37

So there is a combination of technologies that we're putting together to kind of reduce

play11:42

the hallucination issue.

play11:44

Sam,

play11:45

I'll,

play11:45

I'll ask you about the data,

play11:47

the training data.

play11:47

Obviously,

play11:48

there's,

play11:48

there's been,

play11:49

you know,

play11:49

maybe maybe some people in this audience who may not be thrilled about some of the data that you guys have

play11:54

used to train some of your models.

play11:56

Not too far from here in,

play11:57

in Hollywood,

play11:58

people have not been thrilled.

play11:59

Uh publishers when you're,

play12:01

when you're considering now as you're as you're walking through and to going to work towards this

play12:06

these next models,

play12:08

what are the conversations you're having around the data?

play12:12

So a few thoughts in different directions here,

play12:15

one,

play12:15

we obviously only wanna use data that people are excited about us

play12:21

using.

play12:21

Like we don't,

play12:23

we,

play12:23

we want the model of this new world to,

play12:25

to work for uh everyone.

play12:28

And we wanna find ways to make people say like,

play12:31

you know what I see why this is great.

play12:32

I see why this is like gonna be a new,

play12:34

it may be a new way that we think about some of these issues around data ownership

play12:39

and uh like how economic flows work.

play12:42

But we want to get to something that everybody feels really excited about.

play12:45

But one of the challenges has been people,

play12:48

you know,

play12:48

different kinds of data owners have very different pictures.

play12:50

So we're just experimenting with a lot of things we're doing partnerships of different shapes.

play12:55

Um And we think that like with any new field,

play12:58

we'll find something that sort of just becomes a,

play13:01

a new standard also,

play13:03

uh I think as these models get

play13:08

smarter and more capable,

play13:11

we will need less training data.

play13:13

So I think there's this view right now,

play13:14

which is that we're just gonna like,

play13:17

you know,

play13:17

models are gonna have to like train on every word humanity has ever produced or whatever.

play13:22

And I,

play13:23

I technically speaking,

play13:24

I don't think that's what's gonna be the long term path here,

play13:27

like we have existence proof with humans that that's,

play13:30

that's not the only way to become intelligent.

play13:32

Um And so I think the conversation gets a little bit um

play13:38

led astray by this because what,

play13:41

what really will matter in the future is like particularly valuable data.

play13:45

You know,

play13:45

people want people trust the Wall Street Journal and they want to see content from

play13:50

that.

play13:50

And the Wall Street Journal wants that too.

play13:51

And we find new models to make that work.

play13:54

But I think the,

play13:55

the conversation about data and the shape of all of this uh because of the technological

play14:00

progress we're making,

play14:01

it's about to,

play14:02

it's about to shift.

play14:04

Well,

play14:04

publishers like my mine who might be out there somewhere.

play14:08

They want money for that data is the future of this

play14:12

entire race about who can pay the most for the best data.

play14:16

Um No,

play14:18

that was sort of the point I was trying to make,

play14:20

I guess inelegantly,

play14:22

but you still need some,

play14:25

you will need some.

play14:26

But the core,

play14:27

like the thing that is the thing that people really like about a GP T model uh

play14:32

is not fundamentally that it has that it knows particular knowledge,

play14:36

there's better ways to find that it's that it has this larval reasoning capacity and that's

play14:41

gonna get better and better.

play14:42

But that's,

play14:42

that's really what this is gonna be about.

play14:44

And then there will be ways that you can set up all sorts of economic arrangements as a user or as a company

play14:49

making the model or whatever to say.

play14:50

All right.

play14:51

Now,

play14:51

you know,

play14:52

I,

play14:52

I understand that you would like me to go get this data from the Wall Street Journal.

play14:55

I can do that,

play14:56

but here's the deal that's in place.

play14:58

So there will be things like that.

play14:59

But,

play14:59

but the fundamental thing about these models is not that they memorize a lot of data.

play15:03

So sort of like the model where also right now you've got Bing integrated,

play15:07

it goes out looks for some of that data and can bring back some of that.

play15:10

And that's,

play15:10

you know,

play15:10

on the internet,

play15:11

we decided again,

play15:13

back in the early days the internet,

play15:14

there were a lot of conversations about the different models could be and we all kind of decided on,

play15:18

you know,

play15:18

here's the,

play15:19

the core framework and there's different pieces in there.

play15:21

Of course.

play15:21

And we're all gonna have to figure that out for AI,

play15:24

well,

play15:24

speaking of Bing,

play15:26

you and Satya Nadella,

play15:27

your $10 billion friends or frenemies? Friends.

play15:30

Yeah,

play15:32

I won't pretend that it's like a perfect relationship but nowhere near the frenemy category.

play15:37

It's really good.

play15:37

Like we have our squabbles.

play15:40

It just seems like increasingly as you guys are releasing more and more products that they,

play15:45

they seem to compete in some places.

play15:48

Um

play15:53

I mean,

play15:54

I think that that's,

play15:55

that there's something core about this language interface that is a big

play16:00

deal and so there's gonna be a lot of people doing things for that and,

play16:05

and then there's other places like,

play16:06

you know,

play16:07

we offer a version of API,

play16:08

they offer a version of API but like that just,

play16:11

that's like a very friendly thing and we all,

play16:14

we like,

play16:14

we work it out so that we all benefit and we're all happy and,

play16:16

and we just want like we jointly want as much usage of our models uh

play16:21

of our as,

play16:22

as possible.

play16:22

So we're super aligned there.

play16:24

Um But yeah,

play16:27

it's like it makes sense,

play16:30

friends,

play16:31

friends that text.

play16:32

We do a lot.

play16:34

OK.

play16:34

All right.

play16:35

All right.

play16:36

Um Mira,

play16:37

you,

play16:38

you over the last number of months have started to roll out a lot more

play16:42

personalization to the model,

play16:44

right?

play16:44

You can put in custom instructions.

play16:46

You get a,

play16:46

get the voice now and we're gonna bring out one of the,

play16:50

bring down the screen to hear one of these clips.

play16:52

But I wanted to play one of these conversations I had with ChatGPT

play16:56

just yesterday.

play17:00

It's a,

play17:00

it's a slow screen. ChatGPT responds quicker.

play17:05

That's really embarrassing.

play17:07

Yeah.

play17:08

We're gonna get that better though.

play17:11

We'll get it faster. That I can't help you with.

play17:13

But we'll get ChatGPT.

play17:15

No,

play17:15

I'm saying ChatGPT is faster than this.

play17:17

But,

play17:18

and I'm saying that's like a,

play17:19

it's not often that people say ChatGPT is faster than X.

play17:25

I've got a big interview with OpenAI executives and I'm nervous.

play17:28

Any advice.

play17:32

Absolutely.

play17:33

Given your background as a technology columnist,

play17:36

you're well equipped for this.

play17:38

Just be yourself.

play17:39

Do your homework on OpenAI's latest developments and trust in your expertise.

play17:44

Remember,

play17:45

they'll likely appreciate thoughtful and well researched questions and hey,

play17:49

everyone gets nervous.

play17:51

Take a deep breath and rock it.

play17:55

Am I rocking it?

play17:56

I'm rocking it.

play17:57

All right.

play17:58

Are we rocking it up here?

play18:00

Ok.

play18:02

But what's so striking about that?

play18:04

First of all I want every,

play18:05

that's,

play18:05

that's the voice of ChatGPT.

play18:07

They have one of five.

play18:09

Yeah.

play18:10

Yeah.

play18:10

That,

play18:10

I mean,

play18:10

that is the voice.

play18:11

It,

play18:11

it is,

play18:11

it sounds so human.

play18:13

It sounds so natural.

play18:15

It knows about me because I've already put into custom instructions.

play18:17

I'm a tech journalist.

play18:19

It also knows I'm allergic to avocado.

play18:20

It's always putting that in there.

play18:21

Don't eat avocado.

play18:22

I'm like,

play18:22

I'm not asking about avocado.

play18:26

We got some work to do.

play18:27

Is there,

play18:28

is there a,

play18:28

a future and this is what you're maybe trying to build here where we have deep

play18:33

relationships with this type of bot? It's going to be a

play18:37

significant relationship,

play18:39

right?

play18:39

Because,

play18:40

you know,

play18:40

we're,

play18:41

we're building the systems that are going to be everywhere in,

play18:44

at your home,

play18:45

in your educational environment,

play18:46

in your work environment.

play18:48

And maybe,

play18:49

you know,

play18:49

when you're having fun.

play18:50

And so that's why it's actually so important to get it right?

play18:55

And we have to be so careful about how we design this interaction so

play18:59

that ultimately,

play19:01

it's,

play19:01

you know,

play19:01

elevating and it's fun and it's uh it,

play19:04

it makes productivity better and it enhances creativity.

play19:08

Um And,

play19:09

you know,

play19:10

this is ultimately where we're trying to go.

play19:12

And as we increase the capabilities of the technology,

play19:15

we also want to make sure that,

play19:17

you know,

play19:17

on,

play19:18

on the product side,

play19:19

um we feel in control of this,

play19:24

these systems in the sense that we can steer them to do the things that we want

play19:29

them to do and the output is reliable,

play19:32

that's very important.

play19:33

And of course,

play19:34

we want it to be personalized,

play19:36

right?

play19:37

And as,

play19:38

as it has more information about your preferences,

play19:41

the things you like,

play19:42

the things you do um and the capabilities of the models

play19:47

increase and other features like memory and so on.

play19:50

It has,

play19:51

of course,

play19:52

it will become more personalized and that's,

play19:54

that's a goal,

play19:55

it will become more useful and it's,

play19:57

it's going to become uh more fun and more creative and it's not just one

play20:01

system,

play20:02

right?

play20:02

Like you can have many such systems personalized for specific

play20:07

domains and tasks.

play20:08

That's a big responsibility though.

play20:10

And you guys will be in the sort of control of people's friends,

play20:15

maybe people's,

play20:16

it gets to being people's lovers.

play20:18

Uh How do you,

play20:19

how do you guys think about that control?

play20:23

First of all,

play20:23

I think there's,

play20:25

we're not gonna be the only player here,

play20:27

like there's gonna be many people.

play20:28

So we have,

play20:29

we have,

play20:30

we get to put like our nudge on the trajectory of this technological development and we've got

play20:34

some opinions.

play20:35

Uh but a we really think that the decisions belong to sort of humanity,

play20:40

society as a whole,

play20:41

whatever you wanna call it.

play20:42

And b we will be one of many actors building sophisticated systems here.

play20:46

So it's gonna be a society wide discussion.

play20:50

It's,

play20:50

and,

play20:50

and there's gonna be all of the normal forces,

play20:52

there'll be competing products that offer different things,

play20:54

there will be different kind of like societal embraces and pushbacks,

play20:58

there'll be regulatory stuff.

play21:00

Uh It's gonna be like the same complicated mess that any new

play21:04

technological birthing process goes through and then we,

play21:07

we pretty soon will turn around and we'll all feel like we had smart A I in our lives forever.

play21:12

And,

play21:12

you know,

play21:12

that's just,

play21:13

that's,

play21:13

that's the way of progress and I think that's awesome.

play21:15

Um I personally have deep misgivings

play21:20

about this vision of the future where everyone is like super close to A I friends and not like more so than human

play21:25

friends or whatever.

play21:26

I personally don't want that.

play21:28

Uh I accept that other people are gonna want that.

play21:31

Um And you know,

play21:33

some people are gonna build that and if that's what the world wants and what we decide makes sense,

play21:38

we,

play21:38

we're gonna get that.

play21:40

I,

play21:40

I personally think that personalization is great.

play21:44

Personality is great,

play21:45

but it's important that it's not like person this and,

play21:50

and at least that,

play21:50

you know,

play21:51

when you're talking to A I and when you're not,

play21:53

uh you know,

play21:53

we named it ChatGPT and not,

play21:55

it's a long story behind that,

play21:56

but we named it ChatGPT and not a person's name very intentionally.

play22:00

And we do a bunch of subtle things in the way you use it to like,

play22:03

make it clear that you're not talking to a person.

play22:06

Um And I,

play22:07

I think what's gonna happen is that in the same way that people

play22:12

have a lot of relationships with people,

play22:14

they're gonna keep doing that.

play22:15

And then there will also be these like AIs in the world but you kind of know they're just a different thing

play22:22

when you're saying this is another question for you.

play22:24

What is the ideal device that we'll interact with these on?

play22:28

And I'm wondering if you,

play22:30

I hear you and Jony Ive have been talking,

play22:33

you bring something to show us.

play22:36

Um,

play22:38

I think,

play22:38

I think there is something great to do but I don't know what it is yet.

play22:42

You must have some idea,

play22:43

a lot of ideas.

play22:45

I mean,

play22:45

I'm interested in this topic.

play22:47

I think it is possible.

play22:48

I think most of the current thinking out there in the world is quite

play22:53

bad about what we can do with this new technology in terms of a new computing platform.

play22:58

And I do think every sufficiently big new technology uh

play23:03

it enables some new computing platform.

play23:06

Um but lots of ideas but like in the very nascent

play23:11

stage.

play23:12

So it doesn't,

play23:14

I guess the question for me is is there something about a smartphone or ear

play23:18

buds or a laptop or a speaker that doesn't quite work right now.

play23:23

Of course,

play23:24

so much smartphones are great.

play23:26

Like I have no interest in trying to go compete with a smartphone.

play23:31

Like it's a phenomenal thing uh at what it does.

play23:36

But I think the way what AI enables

play23:41

is so fundamentally new um that it is possible to and maybe

play23:45

we won't like,

play23:46

you know,

play23:47

maybe,

play23:48

maybe it's just like for a bunch of reasons doesn't happen.

play23:51

But I think it's like,

play23:51

well worth the effort of talking about or thinking about,

play23:55

you know,

play23:55

what can we make?

play23:56

Now that before we had computers that could think was,

play24:00

uh,

play24:01

or computers that could understand whatever you wanna call it was not possible.

play24:04

And if the answer is nothing,

play24:06

it would be like a little bit disappointed.

play24:10

Well,

play24:10

it sounds like it doesn't look like a humanoid robot,

play24:12

which is good.

play24:14

Definitely not.

play24:17

I don't think that quite works.

play24:18

Ok.

play24:19

Speaking of hardware,

play24:21

are you making your own chips?

play24:24

You want an answer now?

play24:26

Um Directed here.

play24:28

Uh Are we making our own chips?

play24:30

We are trying to figure out what it is going to take to

play24:35

scale to,

play24:36

to deliver at the scale that we think the world will demand.

play24:40

Um And at the model scale that we think that the research can support,

play24:43

um that might not require any custom hardware.

play24:49

Um And we have like wonderful partnerships right now with people who are doing amazing work.

play24:54

Um So the default path would certainly be not to,

play24:59

but I wouldn't,

play25:00

I would like,

play25:00

I would never rule it out.

play25:02

Are there any good alternatives to NVIDIA out there?

play25:06

Uh NVIDIA certainly has something amazing,

play25:09

amazing.

play25:10

Uh But,

play25:11

you know,

play25:11

I think like the magic of capitalism is doing its thing and a lot of other people are trying and

play25:16

we'll see where it all shakes out.

play25:17

We had Rene Haas here from Arm.

play25:19

I hear you guys have been talking. Is he a friend?

play25:24

Oh,

play25:24

you said hello?

play25:25

Not as close as Satya.

play25:26

You're not,

play25:27

you're not as close as,

play25:28

not as,

play25:28

ok.

play25:28

Got it,

play25:29

got it.

play25:29

Um um this is where we're getting.

play25:34

Yeah,

play25:34

we're getting to the hard,

play25:35

we actually we're about to get to the hard hitting.

play25:36

So um my colleagues recently reported you guys are,

play25:40

are,

play25:40

are,

play25:41

are actually looking at a valuation of 80 to 90 billion and that you're

play25:45

expected to reach a billion in revenue.

play25:48

Are you raising money?

play25:50

No.

play25:50

Well,

play25:51

I mean,

play25:51

always but not like this minute.

play25:54

Not right now,

play25:54

not,

play25:55

not right now.

play25:55

There's the people here with money.

play25:57

All right,

play25:57

let's talk.

play26:00

Um We,

play26:00

we will need huge amounts of capital to complete our mission and we have

play26:05

been extremely upfront about that.

play26:08

Um There has got to be something more interesting to talk about in our limited time

play26:13

here together than our future capital raising plans,

play26:16

but we will need a lot more money.

play26:18

We don't know exactly how much we don't know exactly how it's gonna be structured,

play26:21

what we're gonna do.

play26:21

But um you know,

play26:24

it shouldn't come as a surprise because we have said this all the way through.

play26:29

Like it's just a tremendously expensive endeavor where,

play26:32

which part of the business though right now is growing the most? Mira, you can also

play26:37

jump in.

play26:38

Definitely in the product side.

play26:39

Yeah,

play26:39

with,

play26:40

with the research team is very important to have,

play26:42

you know,

play26:43

density of talent,

play26:44

small teams that innovate quickly the product side,

play26:47

you know,

play26:48

we're doing a lot of things.

play26:49

We're trying to push great uses of AI out there both on the platform side and first

play26:54

party and work with customers.

play26:56

So that's certainly,

play26:58

and,

play26:58

and the revenue is coming mostly from that API

play27:03

the the revenue for the company revenue.

play27:05

Oh,

play27:06

I'd say both sides,

play27:07

both sides.

play27:08

Yeah.

play27:09

So my,

play27:10

my subscription to ChatGPT Plus.

play27:12

Is that?

play27:13

Yeah,

play27:13

yeah.

play27:14

How many people here actually are subscribers to ChatGPT Plus?

play27:17

Thank you all very much.

play27:19

Ok.

play27:19

You guys make a family plan.

play27:22

It's serious.

play27:24

It's serious because I'm spending on two and we'll talk about it.

play27:27

Ok.

play27:28

This is what we're really here for tonight.

play27:29

Um,

play27:31

moving out a little bit into policy and,

play27:33

and some of the fears. It's not like super cheap to run. If we had a way to like

play27:38

say like,

play27:38

hey,

play27:39

you know,

play27:39

you can have this for like we can give you like way more for the 20 bucks or whatever we would like to

play27:44

do that.

play27:45

And as we make the models more efficient,

play27:46

we'll be able to offer more,

play27:47

but it's,

play27:48

it's not for like lack of us wanting more people to use it that we don't do things like family,

play27:53

family plan for like $35 for two people that the kind of

play27:58

haggling,

play27:58

you know.

play27:59

Well,

play27:59

I gave you the sweatshirt.

play28:00

And so,

play28:01

you know,

play28:01

it's,

play28:02

there's,

play28:02

there's something we can do there.

play28:04

How do we go from the chat that we just heard that told me to rock it to one

play28:08

that I don't know,

play28:09

can rock the world and end the world.

play28:13

Well,

play28:13

I don't think we're gonna have like a chat bot that ends the world.

play28:16

But how do we go to this idea of?

play28:18

We have,

play28:18

uh,

play28:18

we,

play28:19

we've got simple chatbots. They're not simple.

play28:20

They're,

play28:21

they're advanced, what you guys are doing.

play28:22

But how do we go from that idea to this fear that is now

play28:26

pervading everywhere.

play28:31

If,

play28:32

if we are right about the trajectory,

play28:34

things are going to stay on and if we are right about,

play28:37

not only the kind of like scaling of the GPTs but new techniques that we're interested in that

play28:42

could help generate new knowledge and someone with access to a,

play28:46

a system like this can say,

play28:47

like help me hack into this computer system or help me design

play28:52

uh you know,

play28:53

like a new biological pathogen that's much worse than COVID or any number of other things.

play28:57

It seems to us like it doesn't take much imagination to think about

play29:02

scenarios that deserve great caution.

play29:05

And,

play29:05

and again,

play29:06

we,

play29:06

we,

play29:07

we all come and do this because we're so excited about the tremendous upside

play29:11

and that the incredibly positive impact.

play29:14

And I think it would be like a moral failing not to go pursue that for humanity,

play29:18

but we've got to address and this happens with like many other technologies,

play29:22

we've got to address the downsides that come along uh with this.

play29:27

And it doesn't mean you don't do it,

play29:28

it doesn't mean you just say like this AI thing.

play29:31

We,

play29:31

we're gonna like,

play29:32

you know,

play29:33

we're gonna like go like full Dune and like blow up,

play29:35

you know,

play29:35

and have not have computers or whatever.

play29:37

Um But it means that you like,

play29:39

are thoughtful about the risks.

play29:41

You try to measure what the capabilities are and you try to build your own

play29:45

technology in a way and that,

play29:49

that mitigates those risks.

play29:50

And then when you say like,

play29:51

hey,

play29:51

here's a new safety technique,

play29:52

you make that available to others.

play29:55

And as you guys are thinking about building in,

play29:59

in,

play29:59

in this direction,

play30:02

what are some of those specific safety risks you're looking to put in?

play30:06

I mean,

play30:07

like Sam said,

play30:09

you've got the capabilities and then there is always a downside whenever you have such

play30:14

immense and great capability,

play30:16

there's always a downside.

play30:17

So we've got a fierce task ahead of us to figure

play30:22

out what are these downsides,

play30:24

discover,

play30:25

understand them,

play30:26

build the tools to mitigate them.

play30:29

And it's not,

play30:29

you know,

play30:30

like a single fix,

play30:32

you usually have to intervene everywhere from the data to the

play30:36

model to um the tools in the product.

play30:40

And of course,

play30:41

policy.

play30:42

And then thinking about the entire regulatory and um um

play30:46

societal infrastructure that can kind of keep up with these technologies that

play30:51

we're building.

play30:52

Because ultimately,

play30:53

what we want is to slowly roll out these capabilities

play30:58

in a way that makes sense and allow society to adapt.

play31:02

Um because,

play31:03

you know,

play31:04

the the progress is incredibly rapid and we

play31:08

want to allow for adaptation and for the whole

play31:13

infrastructure that's needed for these technologies to actually be absorbed

play31:18

productively to exist and be there.

play31:20

So,

play31:21

you know,

play31:21

when you think about what are sort of the concrete safety

play31:26

um uh measures along the way,

play31:30

I would say,

play31:31

number one is actually rolling out the technology um

play31:36

and slowly making contact with reality,

play31:38

understanding how it affects um uh certain use cases

play31:43

and industries and actually dealing with the implications of that,

play31:47

whether it's regulatory copyrights,

play31:49

um you know,

play31:50

whatever the impact is actually absorbing that and dealing with that

play31:55

and moving on to more and more capabilities.

play31:58

I don't think that building the technology in a lab in a vacuum without contact with the

play32:03

real world and with the friction that you see with reality is a

play32:07

good way to actually deploy it safely and this might be where you're

play32:12

going.

play32:12

But it,

play32:13

it seems like right now you're also policing yourself,

play32:16

right?

play32:16

You're setting this better and,

play32:17

and Sam,

play32:18

that's where I was gonna ask you.

play32:19

I mean,

play32:19

you seem to spend more time in Washington than Joe Biden's dogs right now and I'm sure

play32:24

I've only been twice this year.

play32:26

Really,

play32:26

that's,

play32:26

I think his dog like three days or so.

play32:28

Anyway.

play32:29

Um,

play32:29

but what is it specifically that you would rather the government and our

play32:33

regulators do versus you have to do?

play32:36

First?

play32:37

The point I was making,

play32:38

I think is,

play32:38

is really important that,

play32:40

that it's very difficult to make a technology safe in the lab.

play32:45

Um,

play32:46

society uses things in different ways and adapts in different ways.

play32:49

And I think the more we deploy AI,

play32:52

the more AI is used in the world,

play32:53

the safer AI gets and the more we kind of like,

play32:55

collectively decide,

play32:56

hey,

play32:56

here's a thing that is not an acceptable risk tolerance and this other thing that people are worried about,

play33:01

that's,

play33:01

that's totally ok.

play33:02

Um,

play33:03

and,

play33:04

you know,

play33:05

like we see this with many other technologies,

play33:08

airplanes have gotten unbelievably safe.

play33:10

Um,

play33:11

even though they didn't start that way and it was,

play33:13

uh,

play33:14

it was like careful,

play33:15

thoughtful engineering and,

play33:17

um,

play33:17

understanding why when something went wrong it went wrong and how to address it.

play33:21

And,

play33:22

you know,

play33:22

the shared best practices there,

play33:24

I think we're gonna see in all sorts of ways that the things that we worry about with AI in theory don't

play33:29

quite play out in practice.

play33:31

Um,

play33:32

There's just like a ton of talk right now about deep fakes and,

play33:36

you know,

play33:36

the,

play33:37

the,

play33:37

the impact that's gonna have on uh,

play33:40

society in all these different ways.

play33:43

I think that's an example of where we were thinking about the last generation too much and

play33:48

AI will disrupt society in all of these ways.

play33:51

But,

play33:51

you know,

play33:51

we all kind of are like they're like,

play33:53

oh,

play33:53

that's a deep fake or oh,

play33:54

it might be a deep fake.

play33:55

Oh,

play33:55

that picture or video or audio like we,

play33:58

we learn quickly but,

play33:59

but maybe the real problem,

play34:01

this is like speculation.

play34:02

this is hard to know in advance, is not the deep fake ability,

play34:06

but the sort of customized one on one persuasion.

play34:09

And that's where the influence happens.

play34:10

It's not,

play34:11

it's not like the fake image.

play34:12

It's the this thing has a subtle ability,

play34:15

these things have a subtle ability to influence people and then we learn that that's the problem and we,

play34:19

we adapt.

play34:20

Uh So in terms of what we'd like to see from governments,

play34:24

uh I think we've been like very mischaracterized here.

play34:27

We do think that international regulation is gonna be important for the most

play34:32

powerful models.

play34:33

Nothing that exists today,

play34:34

nothing that will exist next year.

play34:36

Uh But as we get towards a real super intelligence,

play34:39

as we get towards a system that is like more capable uh than like

play34:44

any humans.

play34:45

Um I think it's very reasonable to say we need to treat that with like caution

play34:50

and uh and a coordinated approach.

play34:52

But like we think what's happening with open source is great.

play34:55

We think start ups need to be able to train their own models and deploy them into the world and a regulatory

play35:00

response on that would be a disastrous mistake for this country or others.

play35:05

Um So the message we're trying to get across is you gotta embrace what's happening

play35:10

here.

play35:10

You gotta like make sure that we get the economic benefits and the societal benefits of it.

play35:15

But let's like,

play35:17

look forward at where this,

play35:18

where we believe this might go and let's not be caught flat footed if that happens.

play35:24

You mentioned deep fakes and I,

play35:25

I wanna talk about A I generated content that's all over the internet.

play35:29

Now,

play35:30

who do you guys think is responsible or,

play35:33

or should be responsible for policing some of this or not policing but

play35:38

detection of some of this is this on the social media companies?

play35:41

Is this on OpenAI and all the other AI companies,

play35:46

we're definitely responsible for the technologies that we develop and put out there and

play35:51

uh you know,

play35:51

misinformation and that's,

play35:53

that's clearly a big issue as we create more and more capable models.

play35:58

And we've been developing technologies to deal with um uh the

play36:02

provenance of an image or a text and detect output,

play36:07

but it's a bit complicated because,

play36:09

you know,

play36:09

you want to give the user sort of flexibility and

play36:14

they,

play36:14

you also don't want them to feel monitored.

play36:16

And so you have to consider the user and you also have to consider people that are impacted by the

play36:21

system that are not users.

play36:23

And so these are quite nuanced issues that require um a

play36:28

lot of interaction and input from not just your users of the product but also

play36:33

of society more broadly and figuring out,

play36:37

you know,

play36:37

also with partners um that,

play36:39

that bring on this technology and integrate it,

play36:42

what are the best ways to,

play36:44

to deal with these issues?

play36:45

Because right now there's no way or no tool from OpenAI,

play36:49

at least that I,

play36:50

that I can put in an image or some of the text.

play36:53

And ask,

play36:54

is this AI generated? For image,

play36:56

We have actually technology that's uh really good almost,

play37:01

you know,

play37:02

99% reliable,

play37:04

but we're still testing it.

play37:05

It's early and we want to be sure that it's going to work.

play37:09

And even then it's not just a technology problem,

play37:12

misinformation is such a nuanced and broad problem.

play37:15

So you still have to be careful about how you roll it out where you integrate

play37:20

it.

play37:20

Um But we're certainly working on the research side and for,

play37:24

for image,

play37:25

at least we have a very reliable tool in,

play37:28

in the early stages.

play37:29

Yeah,

play37:30

and say it's worth,

play37:32

when might you release this?

play37:35

You said you,

play37:35

you said you're,

play37:36

you're working on this right now.

play37:37

Is this something you plan to release?

play37:39

Oh,

play37:40

yes,

play37:41

yes.

play37:41

For both images and text,

play37:43

for text,

play37:44

we're trying to figure out what actually makes sense.

play37:47

Um For,

play37:48

for images,

play37:49

it's a bit more straight,

play37:51

straightforward problem.

play37:53

Um But in either case,

play37:55

we definitely test it out because we don't have all the answers,

play37:58

right?

play37:58

Like we're building these technologies first,

play38:00

we don't have all the answers.

play38:01

So often we will experiment,

play38:04

we will put out something,

play38:05

we will get feedback,

play38:06

but we want to do it in a controlled way,

play38:08

right?

play38:09

Um And sometimes we'll take it back and we'll make it better and

play38:14

roll it out again.

play38:15

I,

play38:15

I'll also add that.

play38:16

I think this idea of watermarking content is not something that everybody has the

play38:21

same opinion about what is good and what is bad.

play38:23

There's a lot of people who really don't want their generated content watermarked and that's understandable in many

play38:28

cases.

play38:29

Uh Also it's not,

play38:30

it's not gonna be super robust to everything.

play38:32

Like maybe you could do it for images,

play38:34

maybe for longer text,

play38:35

maybe not for short text.

play38:37

But over time there will be systems that don't put the watermarks in.

play38:40

And also there will be people who really like,

play38:44

you know,

play38:44

this is like a tool and up to the human user,

play38:47

how you use the tool.

play38:48

And I don't like this is why we want to engage in the conversation.

play38:52

Like we,

play38:52

we are willing to sort of like follow the collective wishes of

play38:57

society on this point.

play38:59

And I don't think it's a black and white issue.

play39:02

Uh at least I think people are still evolving as they understand all the different ways we're gonna use these tools,

play39:07

they're still evolving,

play39:08

their thoughts about what they're gonna want here. Also, to Sam's earlier point,

play39:12

It's not,

play39:12

you know,

play39:14

um it's not just about truthfulness,

play39:17

right?

play39:17

And what's,

play39:18

what's real and what's not real.

play39:21

Actually,

play39:21

I think in the world that we're going towards marching towards the,

play39:25

the bigger risk is really this individualized pers uh persuasion

play39:30

and,

play39:30

and how to deal that and that's going to be a very tricky problem to deal with,

play39:35

right?

play39:35
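
To make the robustness point concrete, here is a minimal, hypothetical sketch in the style of published statistical text-watermarking schemes (a pseudo-random "green list" of tokens plus a z-score test). It is not OpenAI's method; every function and number below is an illustrative assumption.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    # Pseudo-randomly place `token` on a "green list" seeded by the previous
    # token, so a generator (which favors green tokens) and a detector can
    # agree on the list without sharing the text in advance.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < gamma

def max_z_score(n_tokens: int, gamma: float = 0.5) -> float:
    # Detector's best case: every scored token is green. The z-score measures
    # how far the observed green fraction sits above the chance level gamma.
    n = n_tokens - 1
    return n * (1 - gamma) / math.sqrt(gamma * (1 - gamma) * n)

sample = "watermarks are easier to detect in longer passages of text".split()
greens = sum(is_green(a, b) for a, b in zip(sample, sample[1:]))
print(f"unwatermarked sample: {greens}/{len(sample) - 1} tokens green (near chance)")

for n in (5, 25, 200):
    print(f"{n:>3} tokens -> best possible z-score {max_z_score(n):.1f}")
```

With only a handful of tokens, the z-score cannot clear a cautious detection threshold even if every token lands on the green list, which matches the caveat that this kind of scheme may work for longer text but not for short text, and it says nothing about systems that simply never embed a watermark.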

I realize I have five minutes left, and we were going to do some audience questions, so we can get to one or two audience questions. I'm going to finish with one last thought here. I actually cannot see a thing out there, so I will ask one last question, and then we'll hopefully have time for one or two. So, ten years: you were here ten years ago, and we touched on this as we were starting. What is your biggest fear about the future, and what is your biggest hope, with this technology?

I think the future is going to be amazingly great; we wouldn't come work so hard on this if we didn't. I think this is one of the most significant inventions humanity has yet done, so I'm super excited to see it all play out. I think things can get so much better for people than they are right now, and I feel very hopeful about that. We covered a lot of the fears. Again, we're clearly dealing with something very powerful that's going to impact all of us in ways we can't perfectly foresee. But what a time to be alive and get to witness this.

You're not so fearful that... I was actually going to ask this earlier, but I'll ask it now: do you have a bunker? This is the question. I'm going to let that clock run and not pay attention to it. But as we're thinking about fears, I'm just wondering whether you have a bunker, and what you would say if you do.

I would say that I have, like, structures, but I wouldn't say a bunker. None of this is going to help if AGI goes wrong. It's a ridiculous question, to be honest.

OK, good, good, good. Mira, what's your hope and fear?

I mean, the hope is definitely to push our civilization ahead by augmenting our collective intelligence. And the fears: we talked a lot about the fears, but, you know, we've got this opportunity right now. We've had summers and winters in AI and so on, but when we look back ten years from now, I hope that we get this right. I think there are many ways to mess it up, and we've seen that with many technologies, so I hope we get it right.

All right. We've got time right here.

Hi, Pam Dylan, Preferabli, sensory consumer products AI. My question has to do with the inflection point. We are where we are with respect to AI and AGI. What is the inflection point? How do you define that moment where we go from where we are now to however you would choose to define AGI?

I think it's going to be much more continuous than that. We're just on this beautiful exponential curve, and whenever you're on a curve like that, you look forward and it looks vertical, you look back and it looks horizontal. That's true at any point on the curve. So a year from now we'll be in a dramatically more impressive place than a year ago, when we were in a dramatically less impressive place, but it will be hard to point to the moment. People will try to say, oh, it was AlphaGo that did it, it was GPT-3 that did it, it was GPT-4 that did it, but it's just brick by brick, one foot in front of the other, climbing up this exponential curve.
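
A small sketch of the "looks vertical ahead, horizontal behind" remark, under the simplifying assumption that capability grows as a pure exponential: the relative jump over the next interval is the same wherever you stand, so no single point on the curve singles itself out as the inflection. The growth rate and window below are arbitrary, for illustration only.

```python
import math

k, delta = 0.7, 1.0  # assumed growth rate and look-ahead window
for t in (0, 5, 10, 50):
    ratio = math.exp(k * (t + delta)) / math.exp(k * t)
    print(f"t = {t:>2}: next-step growth factor = {ratio:.3f}")
# The factor (about 2.01 here) is identical at every t: ahead of you the curve
# keeps compounding, behind you it is dwarfed by what's ahead, at any point you pick.
```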

Right here in the front.

Thank you. My name is Mariana Michael. I'm the chief information officer at the Port of Long Beach, but I'm also a computer scientist by training, from a few decades ago; I'm older than you, and I remember working with some of the early AI people. I have a general question. I agree with you that this is one of the most significant innovations to happen. One of the things I've struggled with over the last 20 years in thinking about this is that we're about to change the nature of work. It is that significant, and I feel that people are not talking about it. There will be a transition, a time period where a significant part of the population in the world and in this country will not have had the kinds of discussions, or the sense of what's coming, that we have. Like you mentioned, society needs to be a part of it, but there's a large portion of society that's not even in this discussion.

So the nature of work will change. It used to be that only certain things were going to be automated; there will be a time when people who have defined themselves by work for thousands of years will not have that, and we're hurtling towards it. What can we do to make sure that we take that into account? Because when we talk about society, it's not as if everyone is together and ready to discuss this; some of the effects of the technologies that we brought into the world have actually made people separate from each other. How do we come up with some of those frameworks, not regulations, and voluntarily bring things about that will actually result in a better world that doesn't leave everybody else behind? Thank you.

OK, I'll give you my perspective. I completely agree with you: it's the ultimate technology that could really increase inequality and make things so much worse for us as human beings and as a civilization. Or it could be really amazing, and it could bring along a lot of creativity and productivity and enhance us. And, you know, maybe a lot of people don't want to work eight hours or 100 hours a week; maybe they want to work four hours a day and do a bunch of other things. I think it's certainly going to lead to a lot of disruption in the workforce, and we don't know exactly the scale of that, or the trajectory along the way, but that much is for sure.

And one of the things that, in retrospect, I'm happy about, not that we specifically planned it, is that with the release of ChatGPT we brought AI into the collective consciousness, and people are paying attention because they're not just reading about it in the press or being told about it: they can play with it, interact with it, and get a sense of the capabilities. So I think it's actually really important to bring these technologies into the world and make them as widely accessible as possible. Sam mentioned earlier that we're working really hard to make these models cheaper and faster so they're accessible very broadly. I think that's key: for people themselves to actually interact with the technology and experience it, to visualize how it might change their way of life and their way of being, and to participate by providing product feedback. But also, institutions need to actually prepare for these changes in the workforce and the economy.

I'll give you the last word.

Yes, I think it's a super important question. Every technological revolution affects the job market, and over human history, every maybe 100 years, you hear different numbers for this, maybe 150, half the kinds of jobs go away or totally change, whatever. I'm not afraid of that at all; in fact, I think that's good. I think that's the way of progress, and we'll find new and better jobs. The thing that I think we do need to confront as a society is the speed at which this is going to happen. It seems like over two, maximum three, probably two generations, society can adapt to almost any amount of job-market change. But a lot of people like their jobs, or they dislike change, and going to someone and saying, hey, the future will be better, I promise you, and society is going to win but you're going to lose here, that doesn't work. That's not cool, and it's not an easy message to get across.

And although I tremendously believe that we're not going to run out of things to do, people who want to work less, fine, they'll be able to work less. But, you know, probably many people here don't need to keep working, and we all do, because there's great satisfaction in expressing yourselves, in being useful and contributing back to society. That's not going away; it is such an innate human desire, and evolution doesn't work that fast. Also, the ability to creatively express yourself and to leave something, to add something back to the trajectory of the species, is a wonderful part of the human experience. So we're going to keep finding things to do, and we'll probably think some of the things the people of the future do are very silly and not real work, in a way that a hunter-gatherer probably wouldn't think this is real work either; you know, we're just trying to entertain ourselves with some silly status game. That's fine with me; that's how it goes.

But we are going to have to really do something about this transition. It is not enough to just give people a universal basic income. People need to have agency, the ability to influence this; we need to jointly be architects of the future. And one of the reasons that we feel so strongly about deploying this technology as we do is that, as you said, not everybody is in these discussions, but more and more are every year. By putting this out in people's hands, making it super widely available, and getting billions of people to use ChatGPT, not only do people have the opportunity to think about what's coming and participate in that conversation, but people use the tool to push the future forward. And that's really important to us.