Sam Altman talks with MIT President about AI (May 2024)

Nero Bird
2 May 2024 · 52:35

Summary

TLDR: Sally Kornbluth, MIT's 18th president, talks with Sam Altman, co-founder and CEO of OpenAI, about the future of AI and its social impact. Kornbluth describes MIT's efforts to ensure that AI broadly benefits society, while Altman discusses AI's evolution, the jobs it will eliminate, and the new jobs it will create. The two also share insights on how AI will transform education, accelerate scientific discovery, and how to balance personal privacy against AI's capabilities. The conversation offers a great deal for anyone interested in the opportunities and challenges of advancing AI technology.

Takeaways

  • 😀 Sally Kornbluth became MIT's 18th president in January 2023 and is known as an outstanding administrator.
  • 🧠 Sam Altman is co-founder and CEO of OpenAI, whose mission is to ensure that artificial intelligence benefits all of humanity.
  • 🤖 Asked about AI's "probability of doom," Sam answers that it is not zero, but believes we are still at a very early stage.
  • 🔍 On AI's evolution, Sam says his view has become more grounded over the past decade: he now sees AI as one more technological revolution.
  • 🛠️ To remove bias from AI, Sam believes systems must be aligned to behave according to a given set of values.
  • 🏥 In specific fields such as medicine, Sam says, AI has the potential to be less biased than humans.
  • 🔒 Sam points out that balancing AI's progress against personal privacy is a critical challenge.
  • 🌐 OpenAI is not fully open source, but it makes its tools widely available in other ways.
  • 💡 AI's development will play a key role in scientific and technological progress and will drive sustainable economic growth.
  • 🚀 AI's progress will have a major impact on how people live and work, but humans will continue to pursue creativity and usefulness.

Q & A

  • What is Sally Kornbluth's background as MIT's 18th president?

    -Sally Kornbluth is a cell biologist whose eight-year tenure as Provost at Duke University earned her a reputation as a brilliant administrator, a creative problem solver, and a leading advocate for faculty excellence and student well-being. Among her many initiatives at MIT, she has focused on artificial intelligence (AI), advancing MIT's efforts to make AI broadly beneficial for society.

  • How does Sam Altman view AI's potential?

    -Sam Altman sees AI as a vital tool for having a positive impact on society and wants its development to drive humanity's technological progress. He also says that the balance between safety and usability must be weighed carefully to make the most of AI's potential.

  • What is needed to remove the skews and biases that arise as AI evolves?

    -As AI evolves, systems must be aligned to behave according to a given set of values. Society, however, needs to reach agreement on who decides what bias and values mean and how systems should behave. AI systems may also be able to hold less bias than humans, and striving for that is important.

  • How does he think about the relationship between AI's development and personal privacy?

    -AI's development is closely tied to personal privacy, and the trade-offs among privacy, utility, and safety must be navigated carefully. In the future, services may emerge in which an AI has access to a person's entire life, and that will require new definitions of, and agreements about, privacy.

  • How does OpenAI deliver AI's benefits to others?

    -OpenAI provides an excellent free AI tool that hundreds of millions of people, and in the future perhaps billions, can use. It runs no ads and offers the tool as a public good so that people can use it easily, aiming to help people live better, richer lives through AI.

  • How is the tension resolved between AI's growing energy consumption and its contribution to the fight against climate change?

    -AI requires a great deal of energy, but if AI can be used to develop non-carbon-based energy or better carbon capture technology, that energy consumption would be a massive win. AI also enables remote work, which itself helps save energy.

  • How will AI contribute to science and engineering?

    -OpenAI is interested in using AI to increase the rate of scientific discovery and to support the core engine of human progress. AI can also contribute to education, business, and consumer applications, but its impact on science is especially anticipated.

  • What advice does he have for young researchers and entrepreneurs?

    -Now is an extraordinarily exciting time to start a career, and doing something related to AI is strongly recommended. It is also important to figure out early what interests you most in your career and move toward it, and to contribute to improving life for all of humanity through a technologically rich future.

  • What impact is AI expected to have on finance?

    -AI is expected to affect finance as much as every other field, though specifics were not discussed. In particular fields such as education and healthcare, many possibilities for AI are already foreseen.

  • How should removing bias from AI be balanced against protecting privacy?

    -Removing bias requires aligning systems to a given set of values, but those decisions must rest on social consensus. For privacy, the trade-offs with utility and safety must be managed appropriately so that people can enjoy AI's benefits while their privacy is protected.

  • In what direction does OpenAI plan to develop going forward?

    -OpenAI aims to keep dramatically improving AI's capabilities and making them easily available to everyone. Altman finds the term AGI (artificial general intelligence) awkward and prefers to think about how long it will take to reach specific capabilities X, Y, and Z. OpenAI plans to keep developing systems that deliver AI's benefits widely and create economic value.

Outlines

00:00

😀 A conversation at the top

Sally Kornbluth, biologist and MIT's 18th president, and Sam Altman, OpenAI co-founder and CEO, sit down for a conversation. Kornbluth took office at MIT in 2023 and, among many initiatives, has put particular effort into AI's contribution to society. Altman, for his part, aims to benefit all of humanity through technological innovation. The conversation proceeds through questions submitted by students.

05:01

🤔 AI risk and the problem of bias

The risks AI poses and the related problem of bias are discussed. Altman takes a critical stance toward the concept of "P(doom)" and reflects on AI's safety and future. He also touches on removing bias from AI systems and the ethical questions involved.

10:02

🔍 Balancing privacy and AI

AI's progress and the privacy issues that accompany it are discussed. How society should respond to the handling of personal data and its use in AI training is raised, and Altman reflects on the trade-off between privacy and utility.

15:02

🌐 AI's energy consumption and environmental impact

The environmental impact of AI's energy consumption comes up. The scale of the energy AI training requires and its environmental effects are discussed, along with AI's potential to help solve environmental problems.

20:03

🚀 AI's contribution to science, and expectations

The impact AI may have on science is discussed: its potential to accelerate scientific progress, and the contribution to economic growth that would bring. AI is expected to serve as a support tool for scientists and researchers.

25:04

💡 AI and human creativity

How AI can support creativity and human ideas is discussed. AI may work alongside human creativity and help generate better ideas.

30:04

🛠️ AI's impact on business and startups

AI's impact on business and startups is discussed: how advances in AI technology will affect business models, and how new startups can leverage AI to succeed.

35:05

🏛️ AI and the future of democracy

Concerns are raised about AI's impact on democracy and electoral processes: how AI technology may change elections, and the associated risks and countermeasures.

40:08

🤖 AI and the transformation of education

AI's impact on education is discussed, with hopes for AI-driven customization of education and its contribution to the learning process.

45:09

🔮 AI's future and the human role

AI's future and the human role that comes with it are discussed: as AI evolves, how human creativity, engagement, and uniquely human abilities will change.

50:11

👟 A founder's journey and passion for AI

The background of OpenAI co-founder Sam Altman is introduced, along with his past experience and his passion and expectations for AI technology.

🤝 Hiring and the path to an AI company

OpenAI's hiring criteria and the path into AI companies are discussed. Creative thinking, passion, and other key skills are said to be required.

🚧 AI development and human collaboration

As AI development advances, human collaboration remains essential. The complementary relationship between AI and humans, and how it will shape the future, is discussed.

🛑 AI's limits and human uniqueness

AI's limits and human uniqueness are considered: abilities AI cannot replicate, and how they may hold value in the future.

💡 Creating solutions and new ideas

Idea creation and new approaches to problem solving are discussed: the importance of changing context, and how thinking shifts in new environments.

Keywords

💡Artificial Intelligence (AI)

Artificial intelligence refers to technology that makes human-like thinking and learning possible. The video discusses AI's impact on society, MIT's approach, and OpenAI's mission; for example, it is noted that MIT is working to make AI broadly beneficial.

💡MIT President

The MIT president is the chief executive of the Massachusetts Institute of Technology. The video introduces Sally Kornbluth, who became MIT's 18th president in 2023 and, among many initiatives, focuses on AI's contribution to society.

💡OpenAI

OpenAI is an AI research and deployment company whose mission is to benefit all of humanity. In the video, its co-founder and CEO Sam Altman discusses AI's future and its social impact.

💡Technological revolution

A technological revolution is the process by which society is transformed through the development and spread of new technology. The video discusses the potential power of AI and how it can be used to make the future better.

💡Bias

Bias refers to a tilt toward a particular direction or point of view. The video discusses what is needed to remove bias from AI systems and the bias problems present in today's systems.

💡Personal privacy

Personal privacy refers to the confidentiality and proper handling of an individual's information. The video explores the balance between the data needed to train AI models and personal privacy.

💡Environmental impact

Environmental impact refers to the effects of human activity on the natural environment. The video discusses the energy consumption of AI and data centers and its environmental effects, and touches on AI's potential to help solve environmental problems.

💡Scientific progress

Scientific progress refers to the benefits and advances humanity gains by deepening scientific knowledge. The video discusses how AI could accelerate the rate of scientific discovery and thereby drive sustainable economic growth.

💡Education

Education is the process of learning knowledge, skills, and attitudes. The video discusses AI's impact on education and the possibility of highly adaptive teaching tailored to individual learners.

💡Economic value

Economic value refers to the economic importance or worth of goods and services. The video discusses AI's impact on the economic system and the creation of economic value.

Highlights

Sally Kornbluth became MIT's 18th president in 2023, known for her administrative skills and advocacy for faculty and student welfare.

Sam Altman, CEO of OpenAI, discusses the company's mission to ensure AI benefits humanity.

Altman believes the 'probability of Doom' question is poorly formed and emphasizes the importance of creating a positive future with AI.

AI has evolved from a naive conception to a transformative technological revolution, according to Altman.

OpenAI has made progress in aligning AI systems with values and reducing bias.

The challenge of deciding what values and biases AI systems should have and how society defines these.

AI systems like GPT can potentially be less biased than humans and trained to behave better.

Privacy concerns are raised with personalized AI that has access to extensive personal data.

Altman discusses the balance between personal privacy and the utility of AI.

OpenAI aims to keep its AI tool widely available as a public good without ads.

AI's environmental impact is a concern, but it also has potential to assist in decarbonization efforts.

Altman emphasizes the importance of maintaining a positive outlook and striving for progress despite challenges.

Advice for young researchers to take risks, work hard, and trust their ability to figure things out.

Importance of having a guiding principle or mission statement for career decisions.

Altman's personal interest in using AI to increase the rate of scientific discovery.

Startups can leverage the current platform shift in AI to their advantage.

Warning against creating startups that rely on short-term growth without building a lasting business.

AI's impact on jobs will be significant, with some being eliminated, others transformed, and new ones created.

The need for a balanced approach to AI regulation that supports innovation while ensuring safety.

Concerns about AI's potential to influence democratic processes and the importance of preparing for new challenges.

MIT's focus on training leaders who are fluent in computer science and AI to advance their fields.

AI's potential to transform the financial sector, though specifics are not detailed.

The importance of human interaction and empathy in a world increasingly influenced by AI.

OpenAI's focus on improving cognitive capabilities and delivering them broadly and inexpensively.

Altman's personal interest in fusion energy as an alternative field of study.

Strategies for dealing with being stuck on a problem, such as changing context or seeking new perspectives.

Transcripts

[00:05] Moderator: Thank you both for being here. Neither of them probably needs an introduction, but for the record: Sally Kornbluth became MIT's 18th president on January 1st, 2023. A cell biologist whose eight-year tenure at Duke University as Provost earned her a reputation as a brilliant administrator, a creative problem solver, and a leading advocate for faculty excellence and student well-being, in her first year at MIT, amongst many initiatives, she has focused as well on AI, and has testified about MIT's efforts to make sure that AI is broadly beneficial for society. Sam Altman is an entrepreneur, an investor, and a programmer. He co-founded OpenAI in 2015 and is their CEO; OpenAI is an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity. Sam and Sally, thank you for being here. I'm going to turn it over to both of you.

Kornbluth: Great, thanks so much, Mark.

[Applause]

[01:11] Kornbluth: So, welcome. I will say, I think this is the most popular event I've seen since I arrived at MIT. And I should say that the questions I'm asking were submitted by students and then curated, but this really reflects the curiosity of our community here and the enthusiasm for you being here. So let me start with, oh, sorry.

Altman: Thank you for having me.

Kornbluth: Absolutely. Let me just dive in. According to columnist Kevin Roose and others, in some circles the question "what is your P(doom)?" is a common icebreaker, or so I'm told. Most of you probably know that P(doom), or probability of doom, is calculated on a scale from 1 to 100, and the higher the score you give, the more strongly you believe we end up in a doomsday scenario where AI eliminates all human life. So Sam, just to break the ice: what is your P(doom)?

Altman: I think it's sort of a badly formed question. I think it's a great way to sound smart and important, and I have as much fun pontificating on numbers as anybody else, but whether you say it's 2 or 10 or 20 or 90, the point is it's not zero. Another reason I think it's a badly formed question is that it sort of assumes a static system. I think what we need to do is find a way to make the future great, make the future exist, not tolerate any branches of it where the doom comes into play. But society always holds space for doomsayers, and there's value to that. I'm happy that they exist; I think it makes us think harder about what we're doing. But I think the better question is: what needs to happen to navigate safety sufficiently well?

Kornbluth: Right, so be aware, as you're developing things, of that possibility, but not indulge in it too much.

Altman: More than aware: really confront it and take it extremely seriously.

Kornbluth: Yes, fair enough.

[03:26] Kornbluth: So, you've been in this business for a little while now. How have your views of AI changed over the past decade? Five or ten years ago, did you expect AI, GPT, would become as powerful as it has?

Altman: Well, I honestly still think it's not very good. I think we will make it very good, but we have a ton of work in front of us. Ten years ago I probably had a more naive conception of AI as this creature that was going to be off doing stuff, or this magic superintelligence in the sky that would figure things out and rain money on us while we tried to figure out how to live our lives, but it was all sort of confusing. Now I think of it much more like any other technological revolution: hopefully the biggest and the best and the most important, with the greatest benefits. We have a new tool in the tech tree of humanity, and people are using it to create amazing things. I think it will continue to get way more capable and way more autonomous over time, but even then it's just going to integrate into society in an important and transformative way. If AGI got built tomorrow and you asked me what would happen the next day, ten years ago I would have said I can't really imagine it; it should be this absolute transformation, a singularity, everything different all at once. I now think it won't really be like that at all.

Kornbluth: So to some extent you were optimistic, but you couldn't have placed yourself in the current moment either way?

Altman: I don't know about optimistic or pessimistic; I think I was just somewhat wrong. There were ways in which it was too optimistic and too pessimistic at the same time. If we make something that is as smart as all of the super smart students here, that's a great accomplishment in some sense, but there are already a lot of smart people in the world. So maybe things go faster, maybe quality of life goes up, maybe the economy cycles a little bit faster. But if the rate of scientific discovery becomes ten times faster than it is today, I don't know how different that will feel to us living through it at that time.

Kornbluth: Oh, that's interesting.

[05:49] Kornbluth: So, stepping back a little bit and looking at the current models, thinking about AI systems like ChatGPT: what do you think is necessary to remove bias from these systems? And can you offer an example of bias in today's AI systems and how we might think about it going forward?

Altman: I think we've made surprisingly good progress on how we can align a system to behave according to a certain set of values. For as much as people love to talk about this and say you can't use these things because they're just spewing toxic waste all the time: if you use GPT-4, it behaves kind of the way you want it to, reasonably well. We're able to get it to follow a given set of values, not perfectly, but better than I at least thought would be possible by this point. But that gets to a harder question, which is: who decides what bias means and what values mean? How do we decide what the system is supposed to do? How much does society define broad bounds around the edges, versus how much do we say: you as a user, we trust you to use the tool? Not everybody will use it in a way we like, but that's the case with tools. I think it's important to give people a lot of control over how they use these tools, even if that means they may use them in ways that you or I don't always like. But there are some things that a system just shouldn't do, and we'll have to collectively negotiate what those are.

Kornbluth: It's interesting to think about whether you can make the model less biased than we are as human beings. In medicine, for instance, there is bias against certain demographic groups, and the models are actually trained on the way our human doctors behave, right?

Altman: They are, but then we do this RLHF step, where we can exert quite a lot of influence. Humans are clearly very biased creatures and often unaware of it, and I don't think that GPT-4 or 5 shares our same psychological flaws; probably it has its own different ones. But yes, I think these systems can be way less biased than humans.

Kornbluth: Something to strive for.

[08:19] Kornbluth: Aside from bias, another concern in the public consciousness is privacy, which looms large for a lot of people when considering the future of LLMs. How do we navigate the balance between personal privacy and the need for shared data to train AI models?

Altman: I can imagine a future in which, if you want, you have a personalized AI that has read every email, every text, every message you've ever sent or received, has access to a full recording of your life, knows every document you've ever looked at, every TV show you've ever watched, everything you've ever said or heard or seen: all of your bits of input in and out. You can imagine that would be a super helpful thing to have. You can also imagine the privacy concerns it would present. I think we should stick on that frame and not ask whether AI should be able to train on this data or that data, but rather: how are we going to navigate the privacy versus utility versus safety trade-offs, or security trade-offs, that come with it? What does it even mean? Do we need a new definition of privileged information, so that your AI companion never has to testify against you or can't be subpoenaed by a court? I don't even know what the problems are going to be. But this question of where we all will individually set the privacy versus utility trade-offs, and the advantages that will be possible for someone who says "I am going to let this thing train on my entire life": that's a new thing for society to navigate. I don't know what the answers will be, where most people will make the trade-offs, or what we'll say is even permissible within the bounds. We faced a little bit of this before, when we traded off some privacy for utility with the services we all use, but it can go so incredibly far with AI. There are all these things we've had to negotiate with the internet, about how we think about privacy and about online ads, that become much higher stakes and much bigger trade-offs when you intersect them with AI. I think we're going to start really facing them.

Kornbluth: That's interesting in terms of how much individual control is exerted. When you talk about aggregated data, a good example of higher stakes is health record data, and how much we can build in some personal ability to set a sliding scale for how much of your information becomes part of that training.

Altman: I think in that case we are a little bit off in how we have the conversation. What you want out of GPT-5 or 6 or whatever is for it to be the best reasoning engine possible. It is true that right now the only way we know how to do that is by training on tons and tons of data, and in that process it is learning something about how to do very, very limited reasoning or cognition or whatever you want to call it. But the fact that it can memorize data, or that it's storing data at all in its parameter space: I think we'll look back and say that was kind of a weird waste of resources. It is true that GPT-4 can kind of act like a database, but barely; it's slow, it's expensive, and it doesn't work very well. It's not really what you want. It's just a side effect of the only way we currently know how to make a model that is a reasoning engine. I assume at some point we'll figure out how to separate the reasoning engine from the need for tons of data, or from storing the data in there, and we'll be able to treat them as separate things. I think that will make some of these privacy issues easier.

Kornbluth: That makes a lot of sense.

[12:28] Kornbluth: Speaking of openness, there's been considerable discussion of whether OpenAI is actually open. You've said that while it may not be completely open source, it's open in other ways. Can you say a little more about how you think about that?

Altman: We make a great free AI tool available. Hundreds of millions of people, hopefully billions of people, will use it in the future. We don't run ads; we just do this as a public good, because we think it's important to put the tool in people's hands, and we want it to be very widely available, very easy to use, and very helpful. I think that's just a cool thing.

Kornbluth: It is a cool thing. It's funny, as you're talking about how AI gets built into our normal lives and how that will evolve: I think most of us remember the first time we saw ChatGPT and thought, oh my God, that is so cool. Now we're trying to think about what the next generation of coolness is going to be.

Altman: By the way, I think that's great. I think it's awesome that for two weeks everybody was freaking out about GPT-4 and thought it was super cool, and by the third week everyone was saying, come on, where's GPT-5? I'm tired of waiting.

Kornbluth: Exactly.

Altman: I actually think that says something legitimately great about human expectation and striving, and about why we all have to continue to make things better. I think it's great that a baby born today will never know a world in which the products and services they use are not intelligent, will never know a world in which cognition is not abundant and part of everything they use. This human discontent with the state of things, and the expectation that the world should get better every year: I think that's awesome.

Kornbluth: Great, I agree.

[14:23] Kornbluth: So, in the past year, speaking of the less exciting edge of things, the new electricity demand from AI and data centers has been cited as an environmental concern. At the same time, many talk about AI assisting in decarbonization. What are your thoughts on this tension between AI's effect on the climate and its potential to help us fight the impact of climate change?

Altman: I'll answer that specifically, and then make a more general observation. It is true that AI needs a huge amount of energy, but not huge relative to what the rest of the world needs. If we spent 1% of the world's electricity training powerful AI, and that AI helped us figure out how to get to non-carbon-based energy or do carbon capture better, that would be a massive win. And even if we didn't do that, if that 1% of compute we spent on AI let people live their lives better, it would still be worthwhile. I read this comparison of the compute Google used to the amount of carbon people once spent driving their cars places to get information. People were saying Google is so horrible, we should shut it down, it's spending all this energy, and it was intellectually a very dishonest thing to say, because it was a net savings in energy. Pretty clearly the internet in general, and what it lets us do with telecommuting, is probably also a savings. So yes, AI is going to need a lot of energy. We're going to keep figuring out way more efficient algorithms and way more efficient chips, we're going to get fusion, and we're going to power this stuff that way. I think it is important to address this issue, but we will, in all of these fantastic ways.

But I think this points to something else, which is that you opened by asking about P(doom) and the level of doomerism in society right now. The way we are teaching our young people that the world is totally screwed, that it's hopeless to try to solve problems, that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak. I hope MIT is different from a lot of other college campuses; I assume it is. But you all need to make it part of your life mission to fight against this. Prosperity, abundance, a better life next year, a better life for our children: that is the only path forward, the only way to have a functioning society. There will always be people who want to sit around and say we shouldn't do AI because we may burn a little more carbon, or because we haven't fully addressed bias, and it turns out a couple of years later we made a lot of progress on both of those things. The anti-progress streak, the streak against people deserving a great life, which usually comes from people who have quite a lot of privilege in the first place, is something I hope you all fight against.

[Applause]

Kornbluth: Coming to MIT fairly fresh from the outside, I think this is core to the MIT ethos: naming the problems and figuring out a way to solve them. I'm very happy to be here, and it's fantastic; we're very happy you are here.

play17:43

The other thing you said that really struck me: as we think about the costs of AI, and I don't just mean monetary costs, whether it's climate or anything else, there is this notion of deducting what AI can contribute to the problem, a sort of long-term balancing of the ledger. I think that is important.

Yeah. So that's interesting. Does OpenAI intend to build tools that will specifically impact science and engineering, or will you be more focused on business and consumer applications?

For sure, we intend to do that. Personally, the thing I am most interested in is how we use AI to increase the rate of scientific discovery. I believe that is the core engine of human progress, and that it is the only way we drive the sustainable economic growth we were talking about earlier. People aren't content with GPT-4; they want GPT-5. They want things to get better. Everyone wants more and better and faster, and science is how we get there. So of all the great things that AI will do, I am personally most passionate about the impact that I hope, and expect, it will have on science. That said, this may all be more one-dimensional than we think. If we make a great AI tool that can help people solve any kind of problem in front of them, that can help people reason in new ways, that's great for consumers, great for scientists, great for businesses, great for education. The G of AGI, the "general," is sort of the surprising piece, yeah.

I know, that's really interesting. And now I come back to your comment about getting in your old car and driving to the library. A lot of the creativity part is still human, but a lot of the work of aggregating all of the knowledge you can use as a launching point can really be expedited by asking a few key questions of AI up front.

Totally. Again, what any one individual, and certainly what any group of us, will be capable of: if we could go see what each of us can do 10 or 20 years in the future, I think it would astonish us today. Maybe in a few years, each of us has a great chief of staff, or a PhD student, or whatever analogy you want, helping us optimize ourselves and do our best work and our best ideas. And then maybe at some point, each of us has a full company of brilliant experts in anything, just working super productively together.

Cool.

play20:26

So, what advice do you have? We have a lot of young researchers, or people aspiring to be young researchers, in the audience. What's your general advice for making a real impact in the world? You alluded to it in terms of thinking about possibilities and not sitting in your bedroom in the dark, and I think that's a good base recommendation, but I'm just wondering what else you might want to say to this audience.

First of all, I think this is probably the most exciting time to be launching your career in many decades, maybe ever, I don't know. Whatever it is, it's a really big deal, and the fact that you have this huge tailwind means I think you can take more risk than usual. If you do something that doesn't work out, there are just going to be phenomenal opportunities for a long time. You can have more impact than normal, and so there's a premium on having this be a period where you work really hard. I certainly would be biased to do something with AI, but of course I'm going to say that, so maybe it's wrong.

I think in general the most important lesson to learn early on in your career is that you can kind of figure anything out, and that no one has all of the answers when they start out. You just sort of stumble your way through it: have a fast iteration speed, try to drift towards the problems most interesting to you, be around the most impressive people, and have this trust that you'll successively iterate to the right thing. You can do more than you think, faster than you think. It takes people a while to learn that lesson, but you see it work a few times and you really start to trust it. "You can just do stuff" sounds like not-real or very empty advice, but I think it's much more profound than it sounds on the surface.

The other thing I would say is figuring out relatively early on, and this takes some practice, your own personal, I don't even know what to call it, passion or mission statement: the way you want to spend your time, or what you really care about. We talked about this concept of techno-abundance as a way to drive prosperity and better lives for people; that's something that has always really resonated with me, and I've always tried to figure out how to work on it. Letting yourself develop some sort of guiding principle for how you make decisions about allocating your time, and where to try to steer things, has been very helpful to me.

Yeah. I think this is "follow your passion," and also, you're painting a picture of a world where there are sort of infinite possibilities. You can't always be so strategic about what you think is good for you to do; you want to do something that imagines all those possibilities and follows those passions.

Yeah. For me, passion is not quite the right word; it's something closer to "what is the moral obligation for me to work on?" And on the really bad days, when I'm not having fun, that's somehow much more motivating than just the thing I like doing the most.

That's interesting. Another element of this, and this is actually a really core part of MIT's culture, is entrepreneurship, so we have a lot of aspiring entrepreneurs here. Beyond developing the underlying ideas, how do you think about building successful companies in today's ecosystem, and where in the value chain should new startups focus their effort?

play24:32

Again, I think this is the best time for new startups, in particular, in a very long time. Startups tend to succeed right around the time of big platform shifts. Big companies are slower and less innovative than startups, but they have a lot of other advantages; when you get the most out of the speed, iteration, and cycle-time advantage is when the ground is shaking. There was a moment like this right when the internet happened; there was a moment like this, although smaller, after mobile; there was another moment, also smaller, after AWS and this idea of cloud services. And then for a very long time, more than a decade, we've just been sort of waiting, and I think now we finally have a new platform. So if history is a guide, which it usually is, and I suspect it will be this time, it's an amazing time to start a company. The advantages you have as a startup are that you can move much faster, and you can live in the future more than big companies with their quarterly or annual planning cycles. That's how you win, and I think this is a great time to do it.

Excellent. I think there are a lot of people here who will take that to heart.

Can I say one more thing about that? That was all the positive; here's a warning. With any new tech platform you can always drive phenomenal short-term growth, and so you have this class of AI startups, like you used to have a class of mobile startups, and before that a class of internet startups, that were not building an enduring business but instead building this sort of novelty thing. You kind of delude yourself, because you get amazing fast growth, and because there's this magic new technology and the dust hasn't settled yet. But just because there is a magic new technology, it does not excuse you from the laws of physics of a business. You still have to figure out a way to build some sort of switching cost, some sort of relationship with customers, some sort of compounding advantage over time. In the gold-rush moments, I think startups, at their peril, often forget that. You still have to do all the things a business always has to do.

Yeah.

play27:04

That's really important advice. The other thing: it's interesting how this question was phrased, and I'll read it in a minute, but now that I've heard you talk a little bit, I might phrase it a little differently. The question was phrased as, in what ways might technology like ChatGPT threaten versus help the future of work? It sounds like you tilt much more towards "help," but thinking about what that means in real terms, how does it help people in their future employment?

play27:35

One of the things that annoys me most about people who work on AI is when they stand up and with a straight face say, "Oh, this will never cause any job elimination; this is just an additive thing; it's all going to be great." This is going to eliminate a lot of current jobs, this is going to change the way a lot of current jobs function, and this is going to create entirely new jobs. That always happens with technology. It's probably never happened this fast, although again, we may be drinking the Kool-Aid too much, and the inertia of society may be such that it's slower than we think. But I kind of expect we're only a generation or two away from models that, for the first time, show some degree of real economic impact, good and bad, but something you can measure. There will be classes of jobs that totally go away; there will be classes of jobs where you have to change what you do a lot; there will be classes of jobs where the productivity, or compensation, or whatever measure you want, goes up by a giant factor; and then there will be things that feel like jobs to the people of the future that to us today look like a complete indulgence and waste of time, just as what many of us do today would look like to people from hundreds of years ago.

As long as you believe that humans very deeply want to create, to be useful, and to feel like they're making relative, differential progress, all of which are things I would bet on hard, we're not going to run out of things to do. I love reading contemporaneous accounts from people living through previous technological revolutions, and what they say: "Man, we're all going to only work four hours a week, if we work at all." You see it said every time. In some way it does feel to me like this time is different; as a matter of degree it might be, and as a matter of speed I really think it will be. I have some concern about how quickly we can adapt to this kind of change, but I have no real concern that we can eventually adapt to it. I'm sure the social contract will change; I'm sure most jobs will be different in the future than they are today. But the deep human drivers don't seem to me likely to go anywhere.

Interesting. And obviously different categories of jobs are going to be really differentially affected.

For sure.

play30:19

With President Biden's recent executive order on AI, as well as the Congressional hearings on AI regulation, there is a concern that regulatory frameworks might solidify the position of established players and might stifle innovation, competition, and accessibility. How do you envision AI regulation, because we really are at a critical moment, being designed to uphold innovation and competition while ensuring that the field remains accessible for emerging players to pioneer the next transformative technologies?

I think we've faced versions of this with other kinds of regulation. You want to know that the food you buy in a grocery store is unlikely to make you sick, and we kind of all agree that regulation there is good, but you also want to be able to grow food in your backyard without having to go through a bunch of hoops, and you get to do that too. I think for AI systems there will be some threshold above which we say, okay, the system presents a level of risk that we don't want to take without reasonable safety precautions. And then I think there will be a level of AI systems where we say, even though there's going to be misuse, we should open-source these and let people use them, and there should be no regulatory burden on companies developing them, because we're willing to make the innovation-and-freedom tradeoff for the negative safety consequences at level X. And then level Y can be totally different. I totally get the impulse to say any regulatory action is unacceptable because it's just big companies that are going to use it for regulatory capture. And, you know, if society decides we don't want to regulate AI at all and we'll just take our chances, I'll accept the outcome of a democratic process. But it seems to me good to have some voices saying, well...

You opened with a p(doom) question.

Exactly. If the framing of that question is correct, then it seems to me useful to have some voices saying, let's not act out of fear but proceed with some reasonable caution.

No, that makes sense. Speaking of which:

play32:43

you've said you believe that the upcoming presidential election won't be the same as the last one, and I think we all think that, but there are lessons to be learned from 2020. What are these lessons, and how can we mitigate the risk AI poses to the democratic process and to the future of democracy in America?

You know, maybe the use of advanced AI will be the least interesting thing about this election, the way it's shaping up. I do think there will be better deepfakes, of course, and there will be better troll farms, of course. What I think is more interesting is trying to get ahead, to the degree that we can, and this is easier said than done, of the new things that just weren't possible before. For example, customized one-on-one persuasion, where an AI system reads all of your social media posts and targets something just at you. That wasn't really possible with all of the online disinformation and trolling of the last election, and that's the kind of new thing that I wish were taken more seriously.

Yeah. That's the logical extension, as you mentioned, of advertisements coming up when you're surfing the web because people know you buy certain kinds of shoes, or how you think about things.

Absolutely.

A more local question:

play34:15

one of MIT's educational priorities is to train tomorrow's leaders to be, in essence, computing bilingual, meaning that regardless of their chosen field, they will need to be fluent in computer science and AI to advance their work. Can you comment on the impact of that way of thinking? I don't know if people have talked with you about blended computing: how we train these bilinguals, how they can start thinking about future careers, really learning two different fields and using computing as a way into other areas.

One of the general observations I make about the history of computing is that it has gotten increasingly more accessible and more natural over time. I heard these stories of people with punch cards who had these elaborate systems for how they sorted them, and they would drop them, and that was a big mess; that was not a natural thing to use.

Undergraduate education sounds tough.

Low-level programming languages were a step forward, but still not a thing most of the world knew how to use, or a very natural tool in some sense. Then you get to current programming languages, and they're way more accessible and way easier to use, and also way more expressive and more powerful. Along with that evolution you go from the command line, to a GUI with a mouse and a keyboard, to just touching your phone, which was pretty natural. And then you go to language, which is super natural: people are very good at language as a way to use the computer. You can ask ChatGPT something, but it's also a way to program the computer, so you saw these things converge to this one interface. And you don't have to be that bilingual anymore. The same way you talk to a friend or a colleague, you can talk to a computer, and this is, I think, a more profound thing than it sounds like on the surface: the degree to which we can push AI and people to have the same kind of interface. So, I'm more excited about humanoid robots than I am about other forms, because I think the world is just very much designed for humans, and we should keep it that way, but we want the benefits of robots that can help us. I think we want AI systems to do their cognition in language and to communicate with us in language; it's a very human-focused thing. So my hope is we don't all have to be bilingual.

That's interesting, because an overwhelming number of our students are obviously interested in CS, and now that we've rolled out these blended areas, this may be the first generation of what you're talking about; in other words, for the next generations it will be completely intuitive.

Totally, yeah.

play37:18

That's really interesting. Let's see, there are some backup questions here that other folks submitted. How do you think AI will impact the financial sector, thinking about banking and equity? Have you thought about that at all?

First of all, if we're on the backup questions and we want to let people shout out, I'm totally down to do that; otherwise we can do them either way.

Let me see. Well, let me ask you that one, and I have a couple of other things, and then, if people really feel compelled to yell out some questions related to this interview, we can do that. But go ahead.

I haven't thought as much as I would like to about any specific area, because figuring out how to get to general-purpose intelligence, and what that means, has been pretty all-consuming. Education and healthcare are maybe the two specifics that I've thought about the most. On something like the financial system, I expect AI to impact it about as much as everything else, but I don't think I have a deeply insightful, specific thing to say about how this really transforms it.

Let's think a little bit more, then, about the educational sector. We think a lot about how people worry about AI in the classroom, but I really think there's just a huge potential for how we teach and how we tailor things to individuals, and so on. So if you could say a word about that, I think that would be great.

play38:58

With what you're seeing people do already, just with regular GPT-4 in ChatGPT, when you say, "please pretend you're a tutor and help me learn this thing," if that can work so well, then as people start to take next-generation models and customize them for learning experiences, we're going to be in a very good place. It's pretty awesome to see what people are building already. It's great to hear from teachers about the impact it's had on their students, and great to hear from students about what they're learning on their own. This one seems like a kind of slam-dunk good use.
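[Editor's note: the "please pretend you're a tutor" pattern Altman describes can be sketched with the OpenAI chat API. This is a minimal illustration, not anything prescribed in the talk; the system prompt, topic, and model name are assumptions for the example.]

```python
# Sketch of the "pretend you're a tutor" pattern from the conversation,
# shaped for the OpenAI chat-completions API. The tutoring prompt and the
# model name below are illustrative assumptions, not from the talk.

def build_tutor_messages(topic: str, question: str) -> list:
    """Assemble a chat payload that casts the model as a patient tutor."""
    system_prompt = (
        f"You are a patient tutor helping a student learn {topic}. "
        "Ask guiding questions and check understanding before moving on; "
        "do not simply hand over the final answer."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_tutor_messages(
    "linear algebra",
    "Why is the determinant of a singular matrix zero?",
)

# With the `openai` package installed and OPENAI_API_KEY set, the call
# would look something like:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
#   print(reply.choices[0].message.content)
```

The customization Altman mentions, taking next-generation models and tailoring them for learning experiences, is essentially iterating on that system prompt plus whatever course material is attached.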

play39:37

Absolutely. So, this isn't one of the pre-submitted questions, but I'm always really interested, when I meet people who are doing really interesting things, in how you came to this. In other words, what was your path, and how did you think about and make choices as you came to what you're doing?

Well, I was a very nerdy kid. I spent a lot of time reading sci-fi or watching Star Trek or whatever. That was already a time when all contemporary sci-fi was pretty dystopic, but the older stuff, the older Star Trek episodes, those were still kind of optimistic and cool, and you sort of saw how AI was going to be great and how that was obviously the future. I kind of abstractly thought I would love to work on that someday, that it would be so cool; I never thought it would actually happen, but I was always interested. Then life went on, I got a little bit older, I went to school, and I decided to study in the AI lab. I also got really interested in energy, and this was around when I started to believe that if we could use technology to deliver abundance, it would not solve every problem, people would still find all sorts of ways to be unhappy, but it was something I really wanted to do. I worked in the AI lab, and it was clear that AI was nowhere close to working; this was back in 2004. I got sidetracked a bunch, went off and did a bunch of other stuff, did a startup, became a startup investor, and then in 2012 noticed the AlexNet paper. It took me a couple of years to really internalize that maybe this thing was finally working, and then I thought, I should really do something here.

Excellent. Now, I think there's

play41:33

probably one question on the minds of some of the people here: what do you look for when you hire somebody? I mean, there will be people sitting in this audience who are wondering that intensely.

First of all, we would really love to hear from you, and I think this is probably the most, or second most, exciting time in OpenAI's history right now, so this would be a good time to give us a call. Number up on the screen. What do we look for? Kind of all the obvious stuff: original thinkers; smart, driven, dedicated people; people who are particularly driven to work on AI. There's always a set of people who will go work on whatever the hot area is, and then there's a set of people who say, no, this is the thing, and we like people for whom this is their thing. But that's kind of it: all of the obvious things, and nothing that unusual.

Interesting. If you were doing something else, what would it be?

I would go work on fusion full-time. Fusion was going to be the something else.

Yeah, there are definitely people here who are interested in that as well.

play42:43

we do have a few minutes um yeah so we

play42:46

don't have any mics I see monus's hand

play42:49

up right away uh but you got to shout

play42:52

I'll repeat the question hi first of all

play42:54

thanks for all all you're doing to the

play42:56

world

play43:00

andit um so I have some of the smartest

play43:04

kids in the world even smartest myam are

play43:06

coming to me and saying um how do I how

play43:10

do I want to plan my time for the next 5

play43:13

years while humans are still helpful and

play43:16

useful and it it's it's striking because

play43:19

you know I see AI as an extraordinary

play43:22

tool and I am so looking forward to the

play43:26

future of work where I can do amazing

play43:28

things that I can't do right now where

play43:30

all the mundane stuff that I'm doing now

play43:31

is replaced and I can do super awesome

play43:34

new things but there will come a time

play43:37

given the two curves of how fast human

play43:39

intelligence has progressed through the

play43:41

old school Evolution and nurture of

play43:44

course and the curve with which AI is

play43:48

progressing which is just ridiculous so

play43:50

these curves are going to cross very

play43:52

soon and I'm just curious what are some

play43:56

unique capabilities that humans have

play43:59

that you think we will not be able to

play44:01

replicate with AI what are some of the

play44:03

architectures that will help us maybe

play44:06

replicate those things and push beyond

play44:08

that and how do you see the future of

play44:10

human versus machine intelligence more

play44:12

broadly?

SAM ALTMAN: Two conflicting answers to that. Number one, I kind of suspect that forever onwards from now, it's always going to feel like, man, this next five-year period is so critical, this is when I can really contribute, and after that, who knows what happens. But in practice, it's always going to feel like this curve of intelligence is rising so fast, right now I can use these tools to out-achieve, but eventually they outrun it. At any point on that exponential, you'll always be able to use the tools to do amazing things, and it'll always feel like you're about to get totally outrun, but you never quite do, because we just become more capable. I didn't used to really think this way, and I'm still not sure I'm right, but it does seem to me like we will just be able to accomplish more, do more things. The expectations will go up too, to participate in the economy in some sense, but in terms of what we can get done before we feel like this wave is going to crash over us, we'll always feel like, man, these five years are my window, but it's going to be a rolling five years forever. And I think we'll find that humans are just so wired to care about other humans, so wired to focus our energy, to know what other people want, to be focused on delivering value for others, that, you know, I can see very extreme worlds where human money and machine money are just different things, but there's an increasing premium on the human-money part of that world. I don't know exactly what it's going to look like, but I do kind of believe in the biological drivers of humanity not changing that much. And then the answer on the other

side is, and this is just speaking personally, this is not a well-reasoned or logically defended thing at all, this is just what I feel: I do feel like I know I'm going to be nostalgic for this time, and it's sort of a strange thing to feel that while you're living through it.

MODERATOR: That's really interesting. I see another hand back there. Well, I'm going to have to ask you to just hand the microphone to whoever you see.

AUDIENCE MEMBER: Can you hear me here? [Are you] more bullish on startups building end-user applications or infrastructure?

SAM ALTMAN: Probably, at this current phase of where we are, on end-user applications. You've got to pick the right ones; you've got to pick the ones that benefit from the models getting better, not the ones that are betting the model doesn't actually get better and are kind of fixing the current generation's problems. But I think there's a lot of value to unlock there. Building infrastructure can be great too; I think you can really succeed both ways. But the number of what feel like hundred-billion-dollar application-layer companies right now, where you can really do something incredibly useful for people quickly, seems pretty exciting to me.

AUDIENCE MEMBER: Thank you so much for giving this talk. I just had a question in terms of the future that you see specifically for OpenAI. You can come out with GPT-6, 7, 8, 9, 10, 20, 30, 40, 50, 60; you have your creation of Sora; a lot of companies are creating life-size robots that are utilizing AI as the backend, and a lot of technology companies are also using ChatGPT as a foundation for their own LLMs. So I was just wondering, for you guys specifically at OpenAI, what do you foresee as your niche in the future once you start, I guess not perfecting, but getting to near perfection with your LLM models?

SAM ALTMAN: I'm going to go over there. I think we're so far away from when we start to level off that that's not currently on our mind. I think, at least for the next three or four model generations, I believe we can make it so incredibly much better every time that we should focus on that, and if we can do that, everything else will kind of work out. I think our niche, or whatever, is that we want to deliver great, useful, impressive cognitive capability as abundantly and inexpensively as we can. There are other good things we can do, but if we can just focus on that, I think it'll be a great service to the world, and we can go on that for a

while.

AUDIENCE MEMBER: I have two questions. One: where did you get your shoes?

SAM ALTMAN: Adidas did a Lego collaboration. I love Legos.

AUDIENCE MEMBER: Amazing, cool. My serious

question is: how soon do you predict that we'll get to artificial general intelligence, and what is OpenAI's role in getting us there?

SAM ALTMAN: You know, I no longer think there will be a time where the world agrees, okay, this was the year we crossed the AGI threshold. I think the phrase has become so overloaded as a definition. I think there are people who would say we'll get there soon; I think there are a lot of people who will say by 2040; and when we have these unbelievably capable systems, people will say, ah, it's not quite AGI yet, it can't do this one thing. So the only way I know how to form the question well at this point is: what is the range of time in which we get to capability X, Y, and Z? I can't quite make myself stop, because the term is too deep in the OS at this point, but I try not to use the word AGI anymore. I'm never going to fully succeed at that, but it's more like: when can we make that new scientific discovery in some areas? When can we add a lot of economic value? I don't know, but I expect that by the end of this decade we'll have systems that create really significant economic value.

MODERATOR: Excellent. So maybe a last question

from the audience, whoever's last over here maybe, and then I'll close this out.

AUDIENCE MEMBER: Thank you so much, Sam, for the talk. I guess I have a more general question: what do you do when you have a problem to solve and you're stuck and you don't know the way forward? What's your thought process for trying to get unstuck?

SAM ALTMAN: I

somehow try to change context. I try to talk to different people about it. In an extreme case, I'll go travel somewhere to really change things; the one time I like jet lag is when I'm really stuck on a problem and I wake up in the middle of the night in some new context, and it seems to be helpful. But some way or other, I try to change

context.

MODERATOR: Well, with that, I really want to thank you. That was fascinating; I think everybody greatly enjoyed the conversation. Thank you all very much.

[Applause]

SAM ALTMAN: Thank you, thank you. Awesome, sure. Can I sign this real quick?

[Applause]
