The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI)

Stanford eCorner
1 May 2024 · 45:48

Summary

TLDR: At Stanford's Entrepreneurial Thought Leaders seminar, Sam Altman, co-founder and CEO of OpenAI, shared his views on the future of artificial intelligence. Altman believes we are living in the best time to start a company, and that the advance of AI will bring unprecedented opportunities. He stressed the importance of iterative deployment, arguing that the co-evolution of society and technology is essential to shaping beneficial products. He also spoke about the responsible use of AI's powerful capabilities and his concerns about its potential risks, and discussed OpenAI's organizational structure and mission and how the company adapts to a changing environment. He closed on an optimistic note about AI's future progress: despite the challenges, AI will bring enormous positive impact to humanity.

Takeaways

  • 🎓 Sam Altman is the co-founder and CEO of OpenAI, the research and deployment company behind ChatGPT, DALL·E, and Sora.
  • 🌟 Sam Altman's life is a pattern of breaking boundaries and transcending what's possible, both for himself and for the world.
  • 🚀 Sam believes that, given how much the world is changing and the opportunity to shape that change, now may be the best time in several centuries to start a company.
  • 🤖 He expects AI to become more remarkable every year, and that the greatest and most impactful companies and products will be created in an era like this.
  • 🧠 Sam stresses that for anyone who wants to start a company, trusting your own intuition and pursuing non-consensus ideas is essential.
  • 💡 OpenAI deploys its products iteratively, letting society and technology co-evolve so it can learn and improve.
  • 🌐 Sam discussed AI's possible effects on the global political landscape and balance of power, though this is not his main focus.
  • 💰 On OpenAI's economic model, Sam said he does not worry about burning cash, as long as the value eventually created for society far exceeds the cost.
  • 📈 Sam believes society will need to adapt to new forms as the technology develops, and he worries about the rate at which society can adapt to change.
  • 🔍 He noted that while people tend to focus on AI's catastrophic risks, the subtle dangers deserve more attention, because they are the ones most easily overlooked.
  • 🎉 Finally, Sam highlighted the team cohesion in OpenAI's culture and its loyalty to the shared mission of achieving AGI (artificial general intelligence).

Q & A

  • How did Sam Altman describe how he felt as a Stanford undergraduate?

    - Sam Altman described his feelings as a Stanford undergraduate with three words: excited, optimistic, and curious.

  • Does Sam Altman think now is a good time to start a company?

    - Yes. Sam Altman believes now may be the best time in several centuries to start a company. The world is changing enormously, and there is a real opportunity to shape that change — whether by founding a company or doing AI research, the opportunities are remarkable.

  • What advice does Sam Altman have for Stanford undergraduates who want to enter the AI field?

    - Sam Altman advises that if a student is sure they want to start a company, the best way to learn is to actually run a startup. While you can learn a lot by joining an existing company, starting one directly lets you learn and grow faster.

  • What does Sam Altman think will be the biggest challenge in AI in the coming years?

    - Sam Altman declined to name one. He advised the audience not to take startup-idea advice from others, but to discover the non-obvious ideas themselves and trust their own intuition and thought process.

  • What is Sam Altman's vision for the future development of AI?

    - Sam Altman believes we will have systems far more powerful than today's within a few years. He has given up trying to give a concrete timeline for artificial general intelligence (AGI), but stresses that dramatically more capable systems will arrive every year.

  • How does Sam Altman view the dangers AI might bring?

    - Sam Altman is more worried about AI's subtle dangers than about catastrophic events: the catastrophic risks already receive a great deal of attention and discussion, while the subtle dangers are easily overlooked.

  • What does Sam Altman think AI's future development will mean for society?

    - Sam Altman believes that even with AI smarter than humans in many domains, daily human life may not feel very different — but in some ways, such as having abundant intelligence as a tool, it will be very different.

  • What is Sam Altman's view on AI's role in space exploration or colonization?

    - Sam Altman believes that because space is not very hospitable to biological life, sending robots to explore or colonize may be easier.

  • How does Sam Altman evaluate whether a startup idea is non-consensus?

    - Sam Altman thinks judging whether an idea is non-consensus is complicated, because different groups view a technology differently. What matters most is trusting your own intuition and thought process, and that ability gets easier with time.

  • What is Sam Altman's view on future energy demand and achieving widespread renewable energy?

    - Sam Altman believes energy demand will rise, and he hopes we reach a standard of living high enough that it does rise. He predicts that nuclear fusion, or solar plus storage, will eventually become the dominant source of electricity on Earth.

  • What did Sam Altman learn from leaving and then returning to OpenAI?

    - Sam Altman learned about the team's resilience and capability — he realized the team could run the company even without him. He also recognized his own love for OpenAI, its team, culture, and mission, which led him to return and keep building the company together.

  • How does Sam Altman view OpenAI's structure, in particular the nonprofit owning a for-profit company?

    - Sam Altman said the structure evolved gradually over time: they did not foresee needing so much capital for compute, nor having such a good business model. While this is not the structure they would choose if starting over, they got the important things right, and they will continue to adjust the structure as needed.

Outlines

00:00

🎓 Introduction to Stanford's Entrepreneurial Thought Leaders seminar

This segment introduces the background of Stanford's Entrepreneurial Thought Leaders (ETL) seminar, brought to you by STVP (the Stanford entrepreneurship engineering center) and BASES (the Business Association of Stanford Entrepreneurial Students). Lecturer Ravi Belani introduces Sam Altman, co-founder and CEO of OpenAI, the research and deployment company pioneering general-purpose artificial intelligence (AI) intended to benefit all humanity. Sam's background and accomplishments are recounted in detail, including his time at Stanford, his participation in Y Combinator, and his startup Loopt.

05:02

🚀 Sam Altman's outlook on future AI research and entrepreneurship

Sam Altman shares his views on the future of AI: if he were 19 again, he would go into AI research, and would likely do that research in industry rather than academia. He stresses the importance of independent thinking and pursuing non-consensus ideas as the key to groundbreaking work. He also mentions challenges facing OpenAI, such as building much larger computing systems, and how to turn advanced intelligence into products that have a positive impact on society.

10:04

💰 OpenAI's finances and Sam's predictions for AI

Sam discusses OpenAI's finances, including the enormous investment in AI model development and his optimism about future technological progress. He mentions the growth in compute costs and how giving people powerful tools can unleash their creativity. He also emphasizes gradual, iterative deployment of AI and his vision for its future, including what the world might look like by 2030.

15:06

🤖 Defining AGI, its risks, and its impact on human life

In this segment, Sam discusses the definition of artificial general intelligence (AGI) and explores its potential risks and possible effects on human life. He voices concern about the subtle changes AI may bring and stresses the question of how fast society can adapt to new technology. He also notes the uncertainty around the pace of AI progress and reflects on how AI could affect the global economy and daily life.

20:08

🧘♂️ Self-awareness and inner drive

Sam reflects on his understanding of his own motivations, including his strengths, potential weaknesses, and inner drives. He discusses recognizing and using a broad set of skills, and how he balances optimism about technology with awareness of its risks. He also explores how motivation shifts across career stages and his expectations for future AI progress.

25:09

🌐 Building global AI infrastructure and AGI's geopolitical impact

Sam discusses the importance of global AI infrastructure and how to achieve equitable access to AI worldwide. He mentions different countries' growing awareness of the need to build local AI infrastructure and the role OpenAI might play. He also touches on AI's possible effects on geopolitics and the global balance of power.

30:11

🤔 AI that recognizes its own uncertainty, and the challenges of future work

Sam stresses the importance of building AI systems that can recognize their own uncertainty and flaws. As models grow more powerful, he argues, we need to deploy AI more carefully and iteratively and build tighter feedback loops. He also discusses responsible deployment and AI's potential applications in future areas such as space exploration.

35:13

🔥 How OpenAI's structure and culture shaped its success

Sam describes OpenAI's unusual structure, a nonprofit owning a for-profit company, and how that structure has adapted as the company grew. He highlights the team's resilience and loyalty to the mission, and how those factors shaped the company's culture and drove its success. He also mentions concerns about potential misuse of AI and how feedback loops and collaboration with society can mitigate those risks.

40:14

🎉 Sam Altman's birthday and a look at AI's future

In the final segment, Sam reflects on the transformations AI may bring, including its effects on how we work and on human capability. He discusses how AI can become part of society's collective intelligence, offering new tools and possibilities to future generations. He also mentions the challenges AI may bring and how building feedback loops can help ensure its impact is positive.

Keywords

💡Entrepreneurial Thought Leaders seminar (ETL)

The Entrepreneurial Thought Leaders seminar (ETL) is a lecture series Stanford University runs for students interested in entrepreneurship. In this video, ETL's guest is Sam Altman, co-founder and CEO of OpenAI. ETL is brought to you by STVP (the Stanford entrepreneurship engineering center) and BASES (the Business Association of Stanford Entrepreneurial Students), and it provides the setting and starting point for the discussion.

💡OpenAI

OpenAI is a research and deployment company known for its general-purpose artificial intelligence (AI) technology, including ChatGPT, DALL·E, and Sora. Sam Altman co-founded OpenAI in 2015 with the aim of building general-purpose artificial intelligence that benefits all humanity. In the video, Sam discusses OpenAI's mission and achievements and the rapid progress of AI.

💡Artificial intelligence (AI)

Artificial intelligence (AI) refers to technology in which computer systems or machines imitate, extend, and perform functions associated with human intelligence. AI is the video's central theme: how it is changing the world, affecting the economy and daily life, and where the technology is headed.

💡Iterative deployment

Iterative deployment is a product-development approach in which a product is improved and released through a series of small steps or iterations rather than held back until a final version. In the video, Sam Altman explains that OpenAI uses iterative deployment to release AI models such as ChatGPT, so that society and the technology can co-evolve.

💡Artificial general intelligence (AGI)

Artificial general intelligence (AGI) refers to an AI system able to perform any intellectual task, comparable to the broad capabilities of human intelligence. In the video, Sam Altman discusses the development of AGI and its potential impact on society and the economy.

💡Compute

Compute refers to the capacity to process data and execute algorithms. In AI, demand for compute grows as models become more complex. The video notes that as AI advances, so does the need for bigger, faster computers and data centers.

💡Non-consensus thinking

Non-consensus thinking means holding views or ideas that differ from the mainstream and may not be widely accepted or understood. In the video, Sam Altman stresses independent thinking and the pursuit of non-consensus ideas, which he sees as the key to innovation and startup success.

💡Self-awareness

Self-awareness is a person's understanding of their own inner thoughts, feelings, and motivations. In the video, Sam Altman is asked about his self-awareness and how he recognizes his strengths, weaknesses, and drives — key components of personal growth and leadership development.

💡Resilience

Resilience is the ability to adapt and recover in the face of challenges and adversity. In the video it is named as one of the key life skills individuals and organizations will need to cultivate in the coming decades as technology and society change rapidly.

💡Energy demand

Energy demand is the energy consumed by social and economic activity. The video discusses how energy demand will change as AI and other technologies develop, and how a sustainable, renewable energy future might be achieved.

💡Global innovation

Global innovation refers to innovation and collaboration around the world to advance technologies and solutions. In the video, Sam Altman discusses its importance, especially in AI, and how to ensure equitable global access to advanced computing resources.

Highlights

Sam Altman, co-founder and CEO of OpenAI, discusses the future of AI and its impact on human society.

OpenAI is a company researching and deploying general-purpose artificial intelligence, with the goal of benefiting all humanity.

Sam Altman's personal journey: growing up in St. Louis, studying at Stanford, and becoming president of Y Combinator.

OpenAI created the fastest-growing app in history: ChatGPT reached 100 million active users within two months of launch.

Sam Altman's views on AI research and entrepreneurship: now is the best time to start a company.

For founders who want to enter AI, Sam advises pursuing non-consensus ideas.

Sam discusses the pace of AI progress and how to deploy AI responsibly.

The importance of AI infrastructure, and how OpenAI thinks about the entire ecosystem.

Sam's view on the rising cost of AI, and how OpenAI weighs R&D cost against value to society.

A discussion of iterative deployment of AI, and how to let society progress together with the technology.

Sam discusses the definition of AGI (artificial general intelligence) and our current understanding of it.

On AI's potential dangers, Sam is more worried about the ones that are hard to notice.

Sam shares lessons learned from his leadership role at OpenAI.

A discussion of OpenAI's structure, including the symbiosis between the nonprofit and the for-profit company.

Sam's view on AI's role in globalization and geopolitics.

How AI systems can recognize and communicate their own uncertainty and flaws.

Sam discusses OpenAI's culture and how it drives the team's success.

On misuse of AI, Sam believes society as a whole must work together to minimize the negative impact.

Sam is both excited and cautious about the prospect of creating AI smarter than humans.

Transcripts

play00:01

[Music]

play00:13

welcome to the entrepreneurial thought

play00:15

leader seminar at Stanford

play00:21

University this is the Stanford seminar

play00:23

for aspiring entrepreneurs ETL is

play00:25

brought to you by stvp the Stanford

play00:27

entrepreneurship engineering center and

play00:29

basis The Business Association of

play00:31

Stanford entrepreneurial students I'm

play00:33

rvie balani a lecturer in the management

play00:35

science and engineering department and

play00:36

the director of Alchemist and

play00:38

accelerator for Enterprise startups and

play00:40

today I have the pleasure of welcoming

play00:42

Sam Altman to ETL

play00:50

um Sam is the co-founder and CEO of open

play00:53

AI open is not a word I would use to

play00:55

describe the seats in this class and so

play00:57

I think by virtue of that that everybody

play00:58

already play knows open AI but for those

play01:00

who don't openai is the research and

play01:02

deployment company behind chat gbt Dolly

play01:05

and Sora um Sam's life is a pattern of

play01:08

breaking boundaries and transcending

play01:10

what's possible both for himself and for

play01:13

the world he grew up in the midwest in

play01:15

St Louis came to Stanford took ETL as an

play01:19

undergrad um for any and we we held on

play01:22

to Stanford or Sam for two years he

play01:24

studied computer science and then after

play01:26

his sophomore year he joined the

play01:27

inaugural class of Y combinator with a

play01:29

Social Mobile app company called looped

play01:32

um that then went on to go raise money

play01:33

from Sequoia and others he then dropped

play01:36

out of Stanford spent seven years on

play01:38

looped which got Acquired and then he

play01:40

rejoined Y combinator in an operational

play01:42

role he became the president of Y

play01:44

combinator from 2014 to 2019 and then in

play01:48

2015 he co-founded open aai as a

play01:50

nonprofit research lab with the mission

play01:52

to build general purpose artificial

play01:54

intelligence that benefits all Humanity

play01:57

open aai has set the record for the

play01:58

fastest growing app in history with the

play02:01

launch of chat gbt which grew to 100

play02:03

million active users just two months

play02:05

after launch Sam was named one of

play02:08

times's 100 most influential people in

play02:10

the world he was also named times CEO of

play02:12

the year in 2023 and he was also most

play02:15

recently added to Forbes list of the

play02:17

world's billionaires um Sam lives with

play02:19

his husband in San Francisco and splits

play02:20

his time between San Francisco and Napa

play02:22

and he's also a vegetarian and so with

play02:24

that please join me in welcoming Sam

play02:27

Altman to the stage

play02:35

and in full disclosure that was a longer

play02:36

introduction than Sam probably would

play02:37

have liked um brevity is the soul of wit

play02:40

um and so we'll try to make the

play02:41

questions more concise but this is this

play02:44

is this is also Sam's birth week it's it

play02:47

was his birthday on Monday and I

play02:49

mentioned that just because I think this

play02:50

is an auspicious moment both in terms of

play02:52

time you're 39 now and also place you're

play02:55

at Stanford in ETL that I would be

play02:57

remiss if this wasn't sort of a moment

play02:59

of just some reflection and I'm curious

play03:01

if you reflect back on when you were

play03:03

half a lifee younger when you were 19 in

play03:05

ETL um if there were three words to

play03:08

describe what your felt sense was like

play03:09

as a Stanford undergrad what would those

play03:11

three words be it's always hard

play03:13

questions

play03:17

um I was like ex uh you want three words

play03:20

only okay uh you can you can go more Sam

play03:23

you're you're the king of brevity uh

play03:25

excited optimistic and curious okay and

play03:29

what would be your three words

play03:30

now I guess the same which is terrific

play03:33

so there's been a constant thread even

play03:35

though the world has changed and you

play03:37

know a lot has changed in the last 19

play03:39

years but that's going to pale in

play03:40

comparison what's going to happen in the

play03:41

next 19 yeah and so I need to ask you

play03:44

for your advice if you were a Stanford

play03:46

undergrad today so if you had a Freaky

play03:47

Friday moment tomorrow you wake up and

play03:49

suddenly you're 19 in inside of Stanford

play03:52

undergrad knowing everything you know

play03:54

what would you do would you drop be very

play03:55

happy um I would feel like I was like

play03:58

coming of age at the luckiest time

play04:00

um like in several centuries probably I

play04:03

think the degree to which the world is

play04:05

is going to change and the the

play04:07

opportunity to impact that um starting a

play04:10

company doing AI research any number of

play04:13

things is is like quite remarkable I

play04:15

think this is probably the best time to

play04:20

start I yeah I think I would say this I

play04:22

think this is probably the best time to

play04:23

start a companies since uh the internet

play04:25

at least and maybe kind of like in the

play04:27

history of technology I think with what

play04:29

you can do with AI is like going to just

play04:33

get more remarkable every year and the

play04:35

greatest companies get created at times

play04:38

like this the most impactful new

play04:40

products get built at times like this so

play04:43

um I would feel incredibly lucky uh and

play04:46

I would be determined to make the most

play04:47

of it and I would go figure out like

play04:50

where I wanted to contribute and do it

play04:52

and do you have a bias on where would

play04:53

you contribute would you want to stay as

play04:55

a student um would and if so would you

play04:56

major in a certain major giving the pace

play04:58

of of change probably I would not stay

play05:01

as a student but only cuz like I didn't

play05:04

and I think it's like reasonable to

play05:05

assume people kind of are going to make

play05:06

the same decisions they would make again

play05:09

um I think staying as a student is a

play05:11

perfectly good thing to do I just I it

play05:13

would probably not be what I would have

play05:15

picked no this is you this is you so you

play05:17

have the Freaky Friday moment it's you

play05:18

you're reborn and as a 19-year-old and

play05:20

would you

play05:22

yeah what I think I would again like I

play05:25

think this is not a surprise cuz people

play05:27

kind of are going to do what they're

play05:28

going to do I think I would go work on

play05:31

research and and and where might you do

play05:33

that Sam I think I mean obviously I have

play05:36

a bias towards open eye but I think

play05:37

anywhere I could like do meaningful AI

play05:39

research I would be like very thrilled

play05:40

about but you'd be agnostic if that's

play05:42

Academia or Private Industry

play05:46

um I say this with sadness I think I

play05:48

would pick

play05:50

industry realistically um I think it's I

play05:53

think to you kind of need to be the

play05:55

place with so much compute M MH okay and

play05:59

um if you did join um on the research

play06:02

side would you join so we had kazer here

play06:04

last week who was a big advocate of not

play06:06

being a Founder but actually joining an

play06:08

existing companies sort of learn learn

play06:09

the chops for the for the students that

play06:11

are wrestling with should I start a

play06:13

company now at 19 or 20 or should I go

play06:15

join another entrepreneurial either

play06:17

research lab or Venture what advice

play06:19

would you give them well since he gave

play06:22

the case to join a company I'll give the

play06:24

other one um which is I think you learn

play06:28

a lot just starting a company and if

play06:29

that's something you want to do at some

play06:30

point there's this thing Paul Graham

play06:32

says but I think it's like very deeply

play06:34

true there's no pre-startup like there

play06:36

is Premed you kind of just learn how to

play06:38

run a startup by running a startup and

play06:40

if if that's what you're pretty sure you

play06:42

want to do you may as well jump in and

play06:43

do it and so let's say so if somebody

play06:45

wants to start a company they want to be

play06:46

in AI um what do you think are the

play06:48

biggest near-term challenges that you're

play06:52

seeing in AI that are the ripest for a

play06:54

startup and just to scope that what I

play06:56

mean by that are what are the holes that

play06:58

you think are the top priority needs for

play07:00

open AI that open AI will not solve in

play07:03

the next three years um yeah

play07:08

so I think this is like a very

play07:10

reasonable question to ask in some sense

play07:13

but I think it's I'm not going to answer

play07:15

it because I think you should

play07:19

never take this kind of advice about

play07:21

what startup to start ever from anyone

play07:24

um I think by the time there's something

play07:26

that is like the kind of thing that's

play07:29

obvious enough that me or somebody else

play07:31

will sit up here and say it it's

play07:33

probably like not that great of a

play07:34

startup idea and I totally understand

play07:37

the impulse and I remember when I was

play07:38

just like asking people like what

play07:39

startup should I start

play07:42

um but I I think like one of the most

play07:46

important things I believe about having

play07:48

an impactful career is you have to chart

play07:50

your own course if if the thing that

play07:53

you're thinking about is something that

play07:54

someone else is going to do anyway or

play07:57

more likely something that a lot of

play07:58

people are going to do anyway

play08:00

um you should be like somewhat skeptical

play08:01

of that and I think a really good muscle

play08:04

to build is coming up with the ideas

play08:07

that are not the obvious ones to say so

play08:09

I don't know what the really important

play08:12

idea is that I'm not thinking of right

play08:13

now but I'm very sure someone in this

play08:15

room does it knows what that answer is

play08:18

um and I think learning to trust

play08:21

yourself and come up with your own ideas

play08:24

and do the very like non-consensus

play08:26

things like when we started open AI that

play08:27

was an extremely non-consensus thing to

play08:30

do and now it's like the very obvious

play08:31

thing to do um now I only have the

play08:34

obvious ideas CU I'm just like stuck in

play08:36

this one frame but I'm sure you all have

play08:38

the other

play08:38

ones but are there so can I ask it

play08:41

another way and I don't know if this is

play08:42

fair or not but are what questions then

play08:44

are you wrestling with that no one else

play08:47

is talking

play08:49

about how to build really big computers

play08:51

I mean I think other people are talking

play08:52

about that but we're probably like

play08:54

looking at it through a lens that no one

play08:56

else is quite imagining yet um

play09:02

I mean we're we're definitely wrestling

play09:05

with how we when we make not just like

play09:09

grade school or middle schooler level

play09:11

intelligence but like PhD level

play09:12

intelligence and Beyond the best way to

play09:14

put that into a product the best way to

play09:16

have a positive impact with that on

play09:19

society and people's lives we don't know

play09:20

the answer to that yet so I think that's

play09:22

like a pretty important thing to figure

play09:23

out okay and can we continue on that

play09:25

thread then of how to build really big

play09:27

computers if that's really what's on

play09:28

your mind can you share I know there's

play09:30

been a lot of speculation and probably a

play09:33

lot of here say too about um the

play09:35

semiconductor Foundry Endeavor that you

play09:38

are reportedly embarking on um can you

play09:41

share what would make what what's the

play09:43

vision what would make this different

play09:45

than it's not just foundies although

play09:47

that that's part of it it's like if if

play09:50

you believe which we increasingly do at

play09:52

this point that AI infrastructure is

play09:55

going to be one of the most important

play09:57

inputs to the Future this commodity that

play09:58

everybody's going to want and that is

play10:01

energy data centers chips chip design

play10:04

new kinds of networks it's it's how we

play10:06

look at that entire ecosystem um and how

play10:09

we make a lot more of that and I don't

play10:12

think it'll work to just look at one

play10:13

piece or another but we we got to do the

play10:15

whole thing okay so there's multiple big

play10:18

problems yeah um I think like just this

play10:21

is the Arc of human technological

play10:25

history as we build bigger and more

play10:26

complex systems and does it gross so you

play10:29

know in terms of just like the compute

play10:30

cost uh correct me if I'm wrong but chat

play10:33

gbt 3 was I've heard it was $100 million

play10:36

to do the model um and it was 100 175

play10:41

billion parameters gbt 4 was cost $400

play10:44

million with 10x the parameters it was

play10:47

almost 4X the cost but 10x the

play10:49

parameters correct me adjust me you know

play10:52

it I I do know it but I won oh you can

play10:54

you're invited to this is Stanford Sam

play10:57

okay um uh but the the even if you don't

play11:00

want to correct the actual numbers if

play11:01

that's directionally correct um does the

play11:05

cost do you think keep growing with each

play11:07

subsequent yes and does it keep growing

play11:12

multiplicatively uh probably I mean and

play11:15

so the question then becomes how do we

play11:18

how do you capitalize

play11:20

that well look I I kind of think

play11:26

that giving people really capable tools

play11:30

and letting them figure out how they're

play11:32

going to use this to build the future is

play11:34

a super good thing to do and is super

play11:36

valuable and I am super willing to bet

play11:39

on the Ingenuity of you all and

play11:42

everybody else in the world to figure

play11:44

out what to do about this so there is

play11:46

probably some more business-minded

play11:48

person than me at open AI somewhere that

play11:50

is worried about how much we're spending

play11:52

um but I kind of

play11:53

don't okay so that doesn't cross it so

play11:55

you

play11:56

know open ey is phenomenal chat gbt is

play11:59

phenomenal um everything else all the

play12:01

other models are

play12:02

phenomenal it burned you've earned $520

play12:05

million of cash last year that doesn't

play12:07

concern you in terms of thinking about

play12:09

the economic model of how do you

play12:11

actually where's going to be the

play12:12

monetization source well first of all

play12:14

that's nice of you to say but Chachi PT

play12:16

is not phenomenal like Chachi PT is like

play12:20

mildly embarrassing at best um gp4 is

play12:24

the dumbest model any of you will ever

play12:26

ever have to use again by a lot um but

play12:29

you know it's like important to ship

play12:31

early and often and we believe in

play12:33

iterative deployment like if we go build

play12:35

AGI in a basement and then you know the

play12:38

world is like kind

play12:40

of blissfully walking blindfolded along

play12:44

um I don't think that's like I don't

play12:46

think that makes us like very good

play12:47

neighbors um so I think it's important

play12:49

given what we believe is going to happen

play12:51

to express our view about what we

play12:52

believe is going to happen um but more

play12:54

than that the way to do it is to put the

play12:56

product in people's hands um

play13:00

and let Society co-evolve with the

play13:03

technology let Society tell us what it

play13:06

collectively and people individually

play13:08

want from the technology how to

play13:09

productize this in a way that's going to

play13:11

be useful um where the model works

play13:13

really well where it doesn't work really

play13:14

well um give our leaders and

play13:17

institutions time to react um give

play13:20

people time to figure out how to

play13:21

integrate this into their lives to learn

play13:23

how to use the tool um sure some of you

play13:25

all like cheat on your homework with it

play13:27

but some of you all probably do like

play13:28

very amazing amazing wonderful things

play13:29

with it too um and as each generation

play13:32

goes on uh I think that will expand

play13:38

and and that means that we ship

play13:40

imperfect products um but we we have a

play13:43

very tight feedback loop and we learn

play13:45

and we get better um and it does kind of

play13:49

suck to ship a product that you're

play13:50

embarrassed about but it's much better

play13:52

than the alternative um and in this case

play13:54

in particular where I think we really

play13:56

owe it to society to deploy tively

play14:00

um one thing we've learned is that Ai

play14:02

and surprise don't go well together

play14:03

people don't want to be surprised people

play14:05

want a gradual roll out and the ability

play14:07

to influence these systems um that's how

play14:10

we're going to do it and there may

play14:13

be there could totally be things in the

play14:15

future that would change where we' think

play14:17

iterative deployment isn't such a good

play14:19

strategy um but it does feel like the

play14:24

current best approach that we have and I

play14:26

think we've gained a lot um from from

play14:29

doing this and you know hopefully s the

play14:31

larger world has gained something too

play14:34

whether we burn 500 million a year or 5

play14:38

billion or 50 billion a year I don't

play14:40

care I genuinely don't as long as we can

play14:43

I think stay on a trajectory where

play14:45

eventually we create way more value for

play14:47

society than that and as long as we can

play14:49

figure out a way to pay the bills like

play14:51

we're making AGI it's going to be

play14:52

expensive it's totally worth it and so

play14:54

and so do you have a I hear you do you

play14:56

have a vision in 2030 of what if I say

play14:58

you crushed it Sam it's 2030 you crushed

play15:01

it what does the world look like to

play15:03

you

play15:06

um you know maybe in some very important

play15:08

ways not that different uh

play15:12

like we will be back here there will be

play15:15

like a new set of students we'll be

play15:17

talking about how startups are really

play15:19

important and technology is really cool

play15:21

we'll have this new great tool in the

play15:23

world it'll

play15:25

feel it would feel amazing if we got to

play15:27

teleport forward six years today and

play15:30

have this thing that was

play15:31

like smarter than humans in many

play15:34

subjects and could do these complicated

play15:36

tasks for us and um you know like we

play15:40

could have these like complicated

play15:41

program written or This research done or

play15:43

this business

play15:44

started uh and yet like the Sun keeps

play15:48

Rising the like people keep having their

play15:50

human dramas life goes on so sort of

play15:53

like super different in some sense that

play15:55

we now have like abundant intelligence

play15:58

at our fingertips

play16:00

and then in some other sense like not

play16:01

different at all okay and you mentioned

play16:04

artificial general intellig AGI

play16:05

artificial general intelligence and in

play16:07

in a previous interview you you define

play16:09

that as software that could mimic the

play16:10

median competence of a or the competence

play16:12

of a median human for tasks yeah um can

play16:16

you give me is there time if you had to

play16:18

do a best guess of when you think or

play16:20

arrange you feel like that's going to

play16:21

happen I think we need a more precise

play16:23

definition of AGI for the timing

play16:26

question um because at at this point

play16:29

even with like the definition you just

play16:30

gave which is a reasonable one there's

play16:32

that's your I'm I'm I'm paring back what

play16:34

you um said in an interview well that's

play16:36

good cuz I'm going to criticize myself

play16:37

okay um it's it's it's it's too loose of

play16:41

a definition there's too much room for

play16:42

misinterpretation in there um to I think

play16:45

be really useful or get at what people

play16:47

really want like I kind of think what

play16:50

people want to know when they say like

play16:52

what's the timeline to AGI is like when

play16:55

is the world going to be super different

play16:57

when is the rate of change going to get

play16:58

super high when is the way the economy

play17:00

Works going to be really different like

play17:01

when does my life change

play17:05

and that for a bunch of reasons may be

play17:08

very different than we think like I can

play17:10

totally imagine a world where we build

play17:13

PhD level intelligence in any area and

play17:17

you know we can make researchers way

play17:18

more productive maybe we can even do

play17:20

some autonomous research and in some

play17:22

sense

play17:24

like that sounds like it should change

play17:26

the world a lot and I can imagine that

play17:28

we do that and then we can detect no

play17:32

change in global GDP growth for like

play17:34

years afterwards something like that um

play17:37

which is very strange to think about and

play17:38

it was not my original intuition of how

play17:40

this was all going to go so I don't know

play17:43

how to give a precise timeline of when

play17:45

we get to the Milestone people care

play17:46

about but when we get to systems that

play17:49

are way more capable than we have right

play17:52

now one year and every year after and

play17:56

that I think is the important point so

play17:57

I've given up on trying to give the AGI

play17:59

timeline but I think every year for the

play18:03

next many we have dramatically more

play18:05

capable systems every year um I want to

play18:07

ask about the dangers of of AGI um and

play18:10

gang I know there's tons of questions

play18:11

for Sam in a few moments I'll be turning

play18:13

it up so start start thinking about your

play18:15

questions um a big focus on Stanford

play18:17

right now is ethics and um can we talk

play18:20

about you know how you perceive the

play18:21

dangers of AGI and specifically do you

play18:24

think the biggest Danger from AGI is

play18:26

going to come from a cataclysmic event

play18:27

which you know makes all the papers or

play18:29

is it going to be more subtle and

play18:31

pernicious sort of like you know like

play18:33

how everybody has ADD right now from you

play18:35

know using Tik Tok um is it are you more

play18:37

concerned about the subtle dangers or

play18:39

the cataclysmic dangers um or neither

play18:42

I'm more concerned about the subtle

play18:43

dangers because I think we're more

play18:45

likely to overlook those the cataclysmic

play18:47

dangers uh a lot of people talk about

play18:50

and a lot of people think about and I

play18:52

don't want to minimize those I think

play18:53

they're really serious and a real thing

play18:57

um but I think we at least know to look

play19:01

out for that and spend a lot of effort

play19:03

um the example you gave of everybody

play19:05

getting add from Tik Tok or whatever I

play19:07

don't think we knew to look out for and

play19:10

that that's a really hard the the

play19:13

unknown unknowns are really hard and so

play19:15

I'd worry more about those although I

play19:16

worry about both and are they unknown

play19:18

unknowns are there any that you can name

play19:19

that you're particularly worried about

play19:21

well then I would kind of they'd be

play19:22

unknown unknown um you can

play19:27

I I am am worried just about so so even

play19:31

though I think in the short term things

play19:32

change less than we think as with other

play19:35

major Technologies in the long term I

play19:37

think they change more than we think and

play19:40

I am worried about what rate Society can

play19:43

adapt to something so new and how long

play19:47

it'll take us to figure out the new

play19:48

social contract versus how long we get

play19:50

to do it um I'm worried about that okay

Okay. I'm going to open up, but first I want to ask you a question about one of the key things that we're now trying to build into the curriculum, as things change so rapidly, which is resilience.

That's really good.

And the cornerstone of resilience is self-awareness. So I'm wondering if you feel that you're pretty self-aware of your driving motivations as you are embarking on this journey.

So first of all, I believe resilience can be taught. I believe it has long been one of the most important life skills, and over the next couple of decades, I think resilience and adaptability will be more important than they've been in a very long time. So I think that's really great. On the self-awareness question: I think I'm self-aware, but everybody thinks they're self-aware, and whether I am or not is sort of hard to say from the inside.

Can I ask you some of the questions that we ask in our intro classes on self-awareness?

Sure.

It's the Peter Drucker framework. So what do you think your greatest strengths are, Sam?

I think I'm not great at many things, but I'm good at a lot of things, and I think breadth has become an underrated thing in the world. Everyone gets hyper-specialized, so if you're good at a lot of things, you can seek connections across them. I think you can then come up with ideas that are different from what everybody else has, or what experts in one area have.

And what are your most dangerous weaknesses?

Most dangerous? That's an interesting framework for it. I think I have a general bias to be too pro-technology, just because I'm curious and I want to see where it goes, and I believe that technology is on the whole a net good thing. But that is a worldview that has overall served me and others well, and thus got a lot of positive reinforcement, and it is not always true. And when it's not been true, it has been pretty bad for a lot of people.

And then,

Harvard psychologist David McClelland has this framework that all leaders are driven by one of three primal needs: a need for affiliation, which is a need to be liked; a need for achievement; and a need for power. If you had to rank those, what would be yours?

I think at various times in my career, all of those. There are these levels that people go through. At this point I feel driven by wanting to do something useful and interesting.

Okay.

And I definitely had the money and the power and the status phases.

Okay. And where were you when you last felt most like yourself?

I... always.

And then one last question: what are you most excited about with GPT-5 that's coming out, that people don't... what are you most excited about with the next version of ChatGPT that we're all going to see?

play23:01

uh I don't know yet um I I mean I this

play23:05

this sounds like a cop out answer but I

play23:07

think the most important thing about gp5

play23:09

or whatever we call that is just that

play23:11

it's going to be smarter and this sounds

play23:13

like a Dodge but I think that's like

play23:17

among the most remarkable facts in human

play23:19

history that we can just do something

play23:21

and we can say right now with a high

play23:23

degree of scientific certainty GPT 5 is

play23:25

going to be smarter than a lot smarter

play23:26

than GPT 4 GPT 6 going to be a lot

play23:28

smarter than gbt 5 and we are not near

play23:30

the top of this curve and we kind of

play23:32

know what know what to do and this is

play23:34

not like it's going to get better in one

play23:35

area this is not like we're going to you

play23:37

know it's not that it's always going to

play23:39

get better at this eval or this subject

play23:41

or this modality it's just going to be

play23:43

smarter in the general

play23:45

sense and I think the gravity of that

play23:48

statement is still like underrated okay

play23:50

that's great Sam guys Sam is really here

play23:52

for you he wants to answer your question

so we're going to open it up.

Hello, thank you so much for joining us. I'm a junior here at Stanford, and I wanted to talk to you about responsible deployment of AGI. As you continually inch closer to that, how do you plan to deploy it responsibly at OpenAI, to prevent stifling human innovation and to continue to spur it?

So I'm actually not worried at all about stifling human innovation. I really deeply believe that people will just surprise us on the upside with better tools. I think all of history suggests that if you give people more leverage, they do more amazing things, and we all get to benefit from that. That's just kind of great. I am, though, increasingly worried about how we're going to do this all responsibly. I think as the models get more capable, we have a higher and higher bar. We do a lot of things like red teaming and external audits, and I think those are all really good. But as the models get more capable, we'll have to deploy even more iteratively, with an even tighter feedback loop on looking at how they're used, where they work, and where they don't work. This world we used to live in, where we could release a major model update every couple of years, we probably have to find ways to increase the granularity on that and deploy more iteratively than we have in the past. It's not super obvious to us yet how to do that, but I think it'll be key to responsible deployment. And also, the way we have all of the stakeholders negotiate what the rules of AI need to be is going to get more complex over time too.

Thank you. Next question, over

here. You mentioned before that there's a growing need for larger and faster computers. However, many parts of the world don't have the infrastructure to build those data centers or those large computers. How do you see global innovation being impacted by that?

So, two parts to that. One: no matter where the computers are built, I think global and equitable access to use the computers, for training as well as inference, is super important. One of the things that's very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can't or, for a good reason, don't want to operate. How we think about making training compute more available to the world is going to become increasingly important. I do think we get to a world where we sort of think about it as a human right to get access to a certain amount of compute, and we've got to figure out how to distribute that to people all around the world. There's a second thing, though, which is that I think countries are going to increasingly realize the importance of having their own AI infrastructure. We want to figure out a way, and we're now spending a lot of time traveling around the world, to build them in the many countries that will want to build these, and I hope we can play some small role there in helping that happen.

Terrific, thank

you.

My question was: what role do you envision for AI in the future of space exploration or colonization?

I think space is not that hospitable for biological life, obviously, so if we can send the robots, that seems

easier.

Hey Sam, my question is for a lot of the founders in the room, and I'm going to give you the question and then explain why I think it's complicated. My question is about how you know an idea is non-consensus, and the reason I think it's complicated is because it's easy to overthink. I think today even you say AI is the place to start a company; I think that's pretty consensus, maybe rightfully so, since it's an inflection point. I think it's hard to know if an idea is non-consensus depending on the group that you're talking about: the general public has a different view of tech from the tech community, and even tech elites have a different point of view from the tech community. So I was wondering how you verify that your idea is non-consensus enough to pursue.

I mean, first of all, what you really want is to be right. Being contrarian and wrong is still wrong. If you predicted, say, 17 out of the last two recessions, you probably were contrarian for the two you got right, probably not even necessarily, but you were wrong 15 other times. And so I think it's easy to get too excited about being contrarian. Again, the most important thing is to be right, and the group is usually right. But where the most value is, is when you are contrarian and right. And that doesn't always happen in a zero-or-one kind of way. Everybody in the room can agree that AI is the right place to start a company, and if one person in the room figures out the right company to start and then successfully executes on it, and everybody else thinks, "ah, that wasn't the best thing you could do," that's what matters. So it's okay to go with conventional wisdom when it's right, and then find the area where you have some unique insight. In terms of how to do that, I do think surrounding yourself with the right peer group is really important, and finding original thinkers is important. But there is part of this where you kind of have to do it solo, or at least part of it solo, or with a few other people who are going to be your co-founders or whatever. And I think by the time you're too far into the "how can I find the right peer group" question, you're somehow in the wrong framework already. So learn to trust yourself and your own intuition and your own thought process, which gets much easier over time. No one, no matter what they say, I think, is truly great at this when they're just starting out, because you kind of just haven't built the muscle, and all of your social pressure, and all of the evolutionary pressure that produced you, was against that. So it's something that you get better at over time, and don't hold yourself to too high of a standard too early on.

Hi Sam, I'm curious to know what your predictions are for how energy demand will change in the coming decades, and how we achieve a future where renewable energy sources are one cent per kilowatt hour.

I mean, it will go up for sure. Well, not for sure: you can come up with all these weird, depressing futures where it doesn't go up. I would like it to go up a lot, and I hope that we hold ourselves to a high enough standard that it does go up. I forget exactly what the world's electrical generating capacity is right now, but let's say it's something like 3,000 or 4,000 gigawatts. Even if we add another 100 gigawatts for AI, it doesn't materially change it that much, but it changes it some. And if we someday get to a thousand gigawatts for AI, that is a material change. But there are a lot of other things that we want to do, and energy does seem to correlate quite a lot with the quality of life we can deliver for people. My guess is that fusion eventually dominates electrical generation on Earth; I think it should be the cheapest, most abundant, most reliable, densest source. I could be wrong about that, and it could be solar plus storage. My guess is that most likely it's going to be 80/20 one way or the other, and there will be some cases where one of those is better than the other. But those seem like the two bets for really global-scale, one-cent-per-kilowatt-hour energy.

Hi Sam, I have a question; it's

about OpenAI and what happened last year. What's the lesson you learned, since you talk about resilience? What's the lesson you learned from leaving the company and now coming back? And what made you come back, because Microsoft also gave you an offer? Can you share more?

I mean, the best lesson I learned was that we had an incredible team that totally could have run the company without me, and did, for a couple of days. And also that the team was super resilient. We knew that some crazy things, and probably more crazy things, will happen to us between here and AGI, as different parts of the world have stronger and stronger emotional reactions and the stakes keep ratcheting up. I thought that the team would do well under a lot of pressure, but you never really know until you get to run the experiment, and we got to run the experiment, and I learned that the team was super resilient and ready to run the company. In terms of why I came back: originally, the next morning, the board called me and asked what I thought about coming back, and I said no. I was mad. And then I thought about it, and I realized just how much I loved OpenAI, how much I loved the people, the culture we had built, the mission, and I kind of wanted to finish it all together.

Emotionally, I just

play33:25

want to ask: this is obviously a really sensitive topic.

Oh, it's not.

But I imagine that it was. Okay, well then, can we talk about the structure of it? Because of this Russian-doll structure of OpenAI, where you have the nonprofit owning the for-profit, you know, when we're trying to teach principle-driven entrepreneurship here...

We got to the structure gradually. It's not what I would go back and pick if we could do it all over again, but we didn't think we were going to have a product when we started. We were just going to be an AI research lab. It wasn't even clear; we had no idea about a language model, or an API, or ChatGPT. If you're going to start a company, you've got to have some theory that you're going to sell a product someday, and we didn't think we were going to. We didn't realize we were going to need so much money for compute; we didn't realize we were going to have this nice business.

So what was your intention when you started it?

We just wanted to push AI research forward. And I know this gets back to motivations, but that's the pure motivation. There's no motivation around making money or power.

I cannot overstate how foreign of a concept... I mean for you personally, not for OpenAI. You weren't starting...

Well, I had already made a lot of money, so it was not like a big thing. I mean, I don't want to claim some moral purity here; it was just that that wasn't the driver of my life.

Okay.

The reason why I'm asking is, you know, when we're teaching about principle-driven entrepreneurship here, you can understand principles inferred from organizational structures. When the United States was set up, the architecture of governance is the Constitution. It's got three branches of government and all these checks and balances, and you can infer certain principles: that there's a skepticism of centralizing power, that things will move slowly and it's hard to get things to change, but it'll be very, very stable. If you, not to parrot Billie Eilish, but if you look at the OpenAI structure and you think "what was that made for?": you have a near hundred-billion-dollar valuation, and you've got a very, very limited board, a nonprofit board, which is supposed to look after its fiduciary duties to the...

Again, it's not what we would have done if we knew then what we know now, but you don't get to play life in reverse, and you have to just adapt. There's a mission we really cared about. We thought AI was going to be really important. We thought we had an algorithm that learned; we knew it got better with scale, though we didn't know how predictably it got better with scale, and we wanted to push on this. We thought this was going to be a very important thing in human history. We didn't get everything right, but we were right on the big stuff, and our mission hasn't changed. We've adapted the structure as we go and will adapt it more in the future. But, you know, life is not a problem set. You don't get to solve everything really nicely all at once; it doesn't work quite like it works in the classroom as you're doing it. My advice is just to trust yourself to adapt as you go. It'll be a little bit messy, but you can do it.

And I just ask this because of the significance of OpenAI. You have a board which is all supposed to be financially independent, so that they're making these decisions as a nonprofit, thinking about the stakeholder they're a fiduciary of, which isn't the shareholders; it's humanity. Everybody's independent; there's no financial incentive that anybody on the board has, including yourself, with OpenAI.

Well, Greg was... okay, first of all, I think making money is a good thing; I think capitalism is a good thing. My co-founders on the board have had financial interests, and I've never once seen them not take the gravity of the mission seriously. But, you know, we've put a structure in place that we think is a way to get incentives aligned, and I do believe incentives are superpowers. But I'm sure we'll evolve it more over time, and I think that's good, not bad.

And with the new OpenAI fund, you don't get any carry in that, and you're not following on investments onto those?

Okay, okay. Thank you. We can keep talking about this, but I know you want to go back to the students. I do too, so we'll keep going to the students.

How do you expect that AGI will

play37:23

change geopolitics and the balance of power in the world?

Maybe more than any other technology. I think about that so much, and I have such a hard time saying what it's actually going to do. Or, maybe more accurately, I have such a hard time saying what it won't do. We were talking earlier about how maybe it won't change day-to-day life that much, but the balance of power in the world, it feels like that does change a lot. But I don't have a deep answer of exactly how.

Thanks so much. I was wondering, in the deployment of general intelligence and also responsible AI, how much do you think it's necessary that AI systems are somehow capable of recognizing their own insecurities, or uncertainties, and actually communicating them to the outside world?

I always get nervous anthropomorphizing AI too much, because I think it can lead to a bunch of weird oversights. But if we ask how much AI can recognize its own flaws: I think that's very important to build right now, and the ability to recognize an error in reasoning, and to have some sort of introspection ability like that, seems to me really important to pursue.

Hey Sam, thank you for giving us

some of your time today and coming to speak. From the outside looking in, we all hear about the culture and togetherness of OpenAI, in addition to the intensity and speed at which you all work, clearly seen in ChatGPT and all your breakthroughs, and also when you were temporarily removed from the company by the board and all of your employees tweeted "OpenAI is nothing without its people." What would you say is the reason behind this? Is it the binding mission to achieve AGI, or something even deeper? What is pushing the culture every day?

I think it is the shared mission. I mean, I think people like each other, and we feel like we're in the trenches together doing this really hard thing. But I think it really is a deep sense of purpose and loyalty to the mission, and when you can create that, I think it is the strongest force for success at any startup, at least that I've seen among startups. We try to select for that in the people we hire, but even people who come in not really believing that AGI is going to be such a big deal, and that getting it right is so important, tend to believe it after the first three months or whatever. So that's a very powerful cultural force that we have.

Thanks. Currently there are a lot of concerns about the misuse of AI in the immediate term, with issues like global conflicts and the election coming up. What do you think can be done by the industry, governments, and honestly, people like us, in the immediate term, especially with very strong open-source models?

One thing that I think is important is not to pretend that this technology, or any other technology, is all good. I believe that AI will be very net good, tremendously net good, but like any other tool, it'll be misused. You can do great things with a hammer, and you can kill people with a hammer. I don't think that absolves us, or you all, or society, from trying to mitigate the bad as much as we can and maximize the good. But I do think it's important to realize that with any sufficiently powerful tool, you do put power in the hands of tool users, or you make some decisions that constrain what people in society can do. I think we have a voice in that; I think you all have a voice in that; I think the governments and our elected representatives in democratic processes have the loudest voice in that. But we're not going to get this perfectly right. We, society, are not going to get this perfectly right. A tight feedback loop, I think, is the best way to get it closest to right. And as for the way that balance gets negotiated, of safety versus freedom and autonomy, I think it's worth studying that with previous technologies, and we'll do the best we can here. We, society, will do the best we can here.

Gang, actually, I've got to cut it,

sorry. I know, I want to be very sensitive to time. I know the interest far exceeds the time, and the love for Sam. Sam, I know it is your birthday. I don't know if you can indulge us, because there's a lot of love for you, so I wonder if we can all just sing happy birthday.

No, no, no, please, no. We want to make you very uncomfortable? One more question. I'd much rather do one more question.

This is less interesting to you... thank you. You can do one more question.

[The audience sings] "...day, dear Sam, happy birthday to you."

Twenty seconds of awkwardness. Is there a burner question, somebody who's got a real burner? We only have 30 seconds, so make it

short.

Hi, I wanted to ask if the prospect of making something smarter than any human could possibly be scares you.

It of course does, and I think it would be really weird, and a bad sign, if it didn't scare me. Humans have gotten dramatically smarter and more capable over time. You are dramatically more capable than your great-great-grandparents, and there's almost no biological drift over that period. Sure, you eat a little bit better and you got better healthcare, or maybe you eat worse, I don't know, but that's not the main reason you're more capable. You are more capable because the infrastructure of society is way smarter and way more capable than any human, and through that it made you. Society, the people that came before you, made you: the internet, the iPhone, a huge amount of knowledge available at your fingertips. And you can do things that your predecessors would find absolutely breathtaking. Society is far smarter than you now. Society is an AGI, as far as you can tell, and the way that happened was not any individual's brain, but the space between all of us: that scaffolding that we build up and contribute to, brick by brick, step by step, and then use to go to far greater heights for the people that come after us. Things that are smarter than us will contribute to that same scaffolding. You will have, your children will have, tools available that you didn't, and that scaffolding will have gotten built up to greater heights. That's always a little bit scary, but I think it's way more good than bad. People will do better things and solve more problems, and the people of the future will be able to use these new tools, and the new scaffolding that these new tools contribute to. If you think about a world that has AI making a bunch of scientific discoveries, what happens to that scientific progress is that it just gets added to the scaffolding, and then your kids can do new things with it, or you, in ten years, can do new things with it. But the way it's going to feel to people, I think, is not that there is this much smarter entity, because we're much smarter, in some sense, than our great-great-great-grandparents, or more capable at least, but that any individual person can just do more.

On that, we're going to end it, so let's give Sam a round of applause.

[Music]
