The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI)

Stanford eCorner
1 May 2024 · 45:48

Summary

TL;DR: In a Stanford Entrepreneurial Thought Leader seminar, Sam Altman, co-founder and CEO of OpenAI, discusses the rapid advancements in AI, emphasizing the transformative potential of artificial general intelligence (AGI). Altman, who has played a pivotal role in the tech industry as the former president of Y Combinator and now leads the development of AI models like ChatGPT, shares his insights on the current state and future trajectory of AI technology. He addresses the importance of responsible AI deployment, the economic model of AI innovation, and the subtle as well as the significant dangers that come with AGI. Altman also reflects on his personal journey, from his early days at Stanford to his current work, and offers advice to aspiring entrepreneurs and students on how to navigate the dynamic landscape of technology and entrepreneurship.

Takeaways

  • 🎓 Sam Altman, co-founder and CEO of OpenAI, emphasizes the importance of education and his journey starting with ETL at Stanford, highlighting the value of diverse experiences in fostering innovation.
  • 🚀 Altman's perspective on the current era as an optimal time for starting companies, especially in AI, due to the rapid advancements and potential for impact.
  • 🤖 OpenAI's mission is to develop artificial general intelligence (AGI) that benefits all of humanity, with a focus on iterative deployment and societal co-evolution with technology.
  • 💡 The significance of non-consensus ideas in driving innovation, as Sam encourages individuals to trust their intuition and pursue their unique insights.
  • 🌍 Altman's vision for global access to AI, stressing the importance of equitable access and the potential for AI infrastructure to be considered a human right.
  • 💸 OpenAI's economic model involves significant investment in AI development, with Altman expressing willingness to support the costly process for the greater societal good.
  • 📈 Acknowledgment of the rising costs associated with AI model development, such as the increased expenses for training larger models like GPT-3 and GPT-4.
  • 🌟 Altman's belief in the underestimated value of generalism in a world that often favors specialization, allowing for cross-disciplinary connections and unique ideas.
  • ⚖️ Concerns about the subtle dangers of AI, such as unintended societal impacts, which may be overlooked in favor of more obvious, cataclysmic risks.
  • 🔮 A future outlook with AGI where the world may not feel vastly different on a day-to-day basis, but where abundant intelligence could lead to significant advancements.
  • 🤔 Altman's personal growth and self-awareness, reflecting on his strengths, weaknesses, and motivations, and the importance of resilience and adaptability in the face of rapid technological change.

Q & A

  • What is Sam Altman's current position and what is OpenAI known for?

    -Sam Altman is the co-founder and CEO of OpenAI, a research and deployment company known for developing technologies like ChatGPT, DALL·E, and Sora.

  • What was Sam Altman's educational background before he joined Y Combinator?

    -Sam Altman studied computer science at Stanford University before joining the inaugural class of Y Combinator with his social mobile app company, Loopt.

  • What was the mission of OpenAI when it was founded?

    -OpenAI was founded as a nonprofit research lab with the mission to build general-purpose artificial intelligence that benefits all of humanity.

  • How did Sam Altman describe his feelings when he was 19 and a Stanford undergraduate?

    -Sam Altman described his feelings as 'excited,' 'optimistic,' and 'curious' when he was 19 and a Stanford undergraduate.

  • What does Sam Altman believe about the current era for starting companies?

    -Sam Altman believes that this is the best time to start companies since the internet, at least, and possibly in the history of technology, due to the potential of AI.

  • What is Sam Altman's view on the importance of creating non-obvious ideas in the field of AI?

    -Sam Altman emphasizes the importance of charting one's own course and coming up with non-obvious ideas, as he believes that the most impactful and successful endeavors often come from non-consensus perspectives.

  • How does Sam Altman perceive the future of AI in terms of societal impact?

    -Sam Altman sees AI as a transformative force that will have a profound impact on society, with the potential to create remarkable tools and products that can shape the future of technology.

  • What is Sam Altman's stance on the responsible deployment of AI?

    -Sam Altman is concerned about responsible deployment and believes in iterative deployment, where society can co-evolve with the technology, providing feedback and allowing for a gradual integration of AI into daily life.

  • What are some of the challenges that Sam Altman sees in the development of AGI (Artificial General Intelligence)?

    -Sam Altman acknowledges the challenges in building AGI, particularly in terms of how to integrate superhuman intelligence into products and ensure a positive societal impact.

  • How does Sam Altman view the potential economic model for OpenAI?

    -Sam Altman is not concerned about the economic model or the costs associated with developing AGI. He believes that as long as OpenAI continues to create value for society that exceeds its expenses, the investment is worthwhile.

  • What is Sam Altman's vision for the future role of AI in space exploration?

    -Sam Altman suggests that AI, particularly in the form of robots, could play a significant role in space exploration, as space is not hospitable for biological life.

  • How does Sam Altman approach the potential risks and dangers of AGI?

    -Sam Altman is more concerned about subtle dangers that people might overlook than cataclysmic events. He emphasizes the importance of being aware of unknown unknowns and adapting to new challenges as they arise.

Outlines

00:00

🎓 Introduction to Sam Altman and ETL Seminar

The first paragraph introduces the setting of the Entrepreneurial Thought Leader (ETL) seminar at Stanford University, highlighting the event's purpose and the entities behind it, such as the Stanford Technology Ventures Program (STVP) and The Business Association of Stanford Entrepreneurial Students. Sam Altman, co-founder and CEO of OpenAI, is welcomed as the guest speaker. His background, from his time at Stanford to his entrepreneurial journey with Loopt and presidency at Y Combinator, is outlined. The paragraph also mentions OpenAI's rapid growth with the launch of ChatGPT and Sam's recognition as an influential figure. The dialogue reflects on Sam's past and present feelings towards his work and the potential for future impact.

05:02

🤖 The Future of AI and Its Impact

In the second paragraph, the discussion shifts towards the future of AI and its potential impact on society. Sam Altman expresses his belief that the current era presents an unparalleled opportunity for starting companies and making an impact through AI research. He also discusses the challenges of creating and integrating advanced AI into society, emphasizing the need for iterative deployment and societal co-evolution with technology. The paragraph explores the economic model behind OpenAI's operations and the importance of providing capable tools to people to drive innovation.

10:04

🚀 AGI: The Path to Superintelligence

The third paragraph delves into the concept of Artificial General Intelligence (AGI) and its potential timeline. Sam Altman discusses the need for a more precise definition of AGI and expresses his belief that the world will experience increasingly capable systems each year. He also addresses the dangers of AGI, voicing more concern over subtle, overlooked risks than cataclysmic events. The paragraph concludes with a reflection on the potential societal changes brought about by AGI and the importance of self-awareness and resilience in the face of rapid technological advancement.

15:06

๐ŸŒ Global Access to AI and Infrastructure

The fourth paragraph addresses the global implications of AI, focusing on the need for equitable access to AI tools and infrastructure. It discusses the importance of making AI technology available worldwide and the challenges faced by regions lacking the necessary infrastructure. The paragraph also touches on the future of space exploration with AI and the criteria for identifying a truly non-consensus idea in the tech industry.

20:08

💡 The Role of AI in Energy and Geopolitics

In the fifth paragraph, the conversation explores the future demand for energy in the context of AI's growth and the potential for renewable energy sources. It also examines the structure of OpenAI, its valuation, and the nonprofit board's fiduciary duties. The paragraph highlights the importance of AGI in geopolitics and the balance of power, emphasizing the unpredictable nature of these changes.

25:09

๐Ÿค Building Trust in AI Systems

The sixth paragraph emphasizes the importance of AI systems being able to recognize and communicate their own uncertainties and insecurities. It discusses the need for anthropomorphizing AI with caution and the significance of building AI that can introspect and self-assess. The paragraph also touches on the cultural aspects of OpenAI, the shared mission driving its people, and the broader implications of AI on society, including the potential for misuse.

30:11

🎉 Closing Remarks and Birthday Celebration

The final paragraph wraps up the discussion with a focus on the potential fears and excitement surrounding the creation of superintelligent AI. It concludes with a round of applause for Sam Altman and a light-hearted refusal to sing 'Happy Birthday,' opting instead for one more question from the audience. The summary encapsulates the overarching themes of the seminar, including the transformative power of AI, the importance of responsible development and deployment, and the collective effort required to navigate the future of technology.

Keywords

💡 Entrepreneurial Thought Leader Seminar

The Entrepreneurial Thought Leader Seminar is a series of events hosted by Stanford University where leaders in various fields share their insights. In the transcript, it is the platform where Sam Altman, co-founder and CEO of OpenAI, is invited to speak, indicating the significance of his work and ideas in the entrepreneurial and technological community.

💡 OpenAI

OpenAI is a research and deployment company focused on developing artificial intelligence technologies. Sam Altman co-founded it with a mission to create general-purpose AI that benefits humanity. The transcript discusses the rapid growth of one of OpenAI's products, ChatGPT, which gained 100 million active users within two months of launch.

💡 Artificial General Intelligence (AGI)

Artificial General Intelligence refers to the hypothetical ability of an AI system to understand and perform any intellectual task that a human being can do. It is a central theme in the transcript, with Sam Altman discussing the future development of AGI, its societal impacts, and the importance of responsible deployment.

💡 Iterative Deployment

Iterative Deployment is the process of frequently releasing new versions of a product to incorporate improvements and gather user feedback. Sam Altman emphasizes this approach in the development of AI models, stating that it allows society to co-evolve with technology and provides a feedback loop for continuous improvement.

💡 Resilience

Resilience is the capacity to recover quickly from difficulties or to adapt to new conditions. In the context of the video, it is highlighted as a crucial life skill, especially in the face of rapid technological changes. Sam Altman reflects on the resilience of his team at OpenAI, particularly during challenging times.

💡 Contrarian Thinking

Contrarian Thinking involves taking a viewpoint that is opposite to the general consensus. Sam Altman discusses the value of contrarian thinking in innovation, suggesting that while it's important to be right, being contrarian for the sake of it is not inherently valuable. He encourages finding unique insights that others may not have considered.

💡 Self-Awareness

Self-Awareness is the ability to understand and reflect on one's own emotions, motivations, and actions. It is brought up in the context of leadership and personal development. Sam Altman is asked about his self-awareness and how it influences his decision-making and leadership style.

💡 Ethics in AI

Ethics in AI pertains to the moral principles that should guide the development and use of AI technologies. The transcript touches on the importance of ethical considerations, especially with the potential for AI to be misused or to cause unforeseen societal changes.

💡 Computational Infrastructure

Computational Infrastructure refers to the underlying technology and systems required to support the development and operation of AI models. Sam Altman discusses the need for significant computational resources and the potential for a semiconductor foundry endeavor to support AI's growing demands.

💡 Non-Consensus Ideas

Non-Consensus Ideas are those that go against or differ from the common consensus. Sam Altman emphasizes the importance of pursuing non-obvious ideas and trusting one's own thought process, as these are often the sources of the most significant innovations.

💡 Scaffolding of Society

The term 'scaffolding of society' is used metaphorically to describe the collective knowledge, tools, and infrastructure that enable individuals to achieve more than would be possible in isolation. Sam Altman uses this concept to illustrate how AI and other advancements contribute to the ongoing progress and capabilities of society.

Highlights

Sam Altman, co-founder and CEO of OpenAI, discusses the rapid growth and impact of AI technologies.

Altman shares his journey from being a Stanford undergrad to leading one of the most influential AI research labs.

He emphasizes the importance of starting companies and conducting AI research during a time of rapid technological change.

Altman believes that the current era may be the best time to start a company since the internet's inception.

He speaks on the challenges and opportunities in building general-purpose artificial intelligence that benefits humanity.

OpenAI's record-breaking growth with ChatGPT, reaching 100 million active users in just two months.

Altman's perspective on the economic model of AI, focusing on creating value for society over immediate monetization.

The necessity for iterative deployment of AI technologies to allow society to adapt and co-evolve with the technology.

Concerns about the dangers of AGI, with a focus on subtle, overlooked risks rather than cataclysmic events.

Altman's views on the role of AI in space exploration and the potential for robots to aid in inhospitable environments.

The importance of self-awareness and resilience when embarking on a journey in the rapidly changing field of AI.

Altman's insights on how to identify and pursue non-consensus ideas in the tech industry.

The future of energy demand and the role of renewable energy sources in powering AI advancements.

Reflections on the structure and governance of OpenAI, including its unique Russian doll model with a non-profit owning a for-profit.

The cultural forces that drive the success of OpenAI, with a focus on shared mission and loyalty.

Altman's thoughts on the potential changes AGI could bring to geopolitics and the global balance of power.

The importance of building AI systems that can recognize their own uncertainties and communicate them effectively.

Altman's reflections on the prospect of creating an AI smarter than any human and the societal implications of such an achievement.

Transcripts

[00:01] [Music]

[00:13] Ravi Belani: Welcome to the Entrepreneurial Thought Leader seminar at Stanford University. This is the Stanford seminar for aspiring entrepreneurs. ETL is brought to you by STVP, the Stanford engineering entrepreneurship center, and BASES, The Business Association of Stanford Entrepreneurial Students. I'm Ravi Belani, a lecturer in the Management Science and Engineering department and the director of Alchemist, an accelerator for enterprise startups, and today I have the pleasure of welcoming Sam Altman to ETL.

Sam is the co-founder and CEO of OpenAI. "Open" is not a word I would use to describe the seats in this class, so I think by virtue of that everybody already knows OpenAI, but for those who don't, OpenAI is the research and deployment company behind ChatGPT, DALL·E, and Sora. Sam's life is a pattern of breaking boundaries and transcending what's possible, both for himself and for the world. He grew up in the Midwest, in St. Louis, came to Stanford, and took ETL as an undergrad. We held on to Stanford — or Sam — for two years: he studied computer science, and then after his sophomore year he joined the inaugural class of Y Combinator with a social mobile app company called Loopt, which went on to raise money from Sequoia and others. He then dropped out of Stanford, spent seven years on Loopt, which got acquired, and then rejoined Y Combinator in an operational role. He became the president of Y Combinator from 2014 to 2019, and in 2015 he co-founded OpenAI as a nonprofit research lab with the mission to build general-purpose artificial intelligence that benefits all humanity. OpenAI has set the record for the fastest-growing app in history with the launch of ChatGPT, which grew to 100 million active users just two months after launch. Sam was named one of Time's 100 most influential people in the world; he was also named Time's CEO of the Year in 2023, and he was most recently added to Forbes' list of the world's billionaires. Sam lives with his husband in San Francisco, splits his time between San Francisco and Napa, and is also a vegetarian. And so, with that, please join me in welcoming Sam Altman to the stage.

[02:35] Ravi Belani: In full disclosure, that was a longer introduction than Sam probably would have liked — brevity is the soul of wit — so we'll try to make the questions more concise. But this is also Sam's birth week: it was his birthday on Monday, and I mention that because I think this is an auspicious moment, both in terms of time — you're 39 now — and place — you're at Stanford, in ETL — and I would be remiss if this wasn't a moment of some reflection. I'm curious, if you reflect back on when you were half a life younger, when you were 19 in ETL: if there were three words to describe what your felt sense was like as a Stanford undergrad, what would those three words be?

Sam Altman: It's always the hard questions. I was like — uh, you want three words only?

Ravi Belani: Okay, you can go more, Sam. You're the king of brevity.

Sam Altman: Excited, optimistic, and curious.

Ravi Belani: Okay, and what would be your three words now?

Sam Altman: I guess the same.

Ravi Belani: Which is terrific — so there's been a constant thread, even though the world has changed. A lot has changed in the last 19 years, but that's going to pale in comparison to what's going to happen in the next 19. And so I need to ask you for your advice. If you were a Stanford undergrad today — if you had a Freaky Friday moment, tomorrow you wake up and suddenly you're 19, a Stanford undergrad, knowing everything you know — what would you do? Would you drop—

Sam Altman: I'd be very happy. I would feel like I was coming of age at the luckiest time in, like, several centuries, probably. I think the degree to which the world is going to change, and the opportunity to impact that — starting a company, doing AI research, any number of things — is quite remarkable. Yeah, I think I would say this: I think this is probably the best time to start a company since the internet, at least, and maybe kind of in the history of technology. I think what you can do with AI is going to get more remarkable every year, and the greatest companies get created at times like this; the most impactful new products get built at times like this. So I would feel incredibly lucky, I would be determined to make the most of it, and I would go figure out where I wanted to contribute and do it.

Ravi Belani: And do you have a bias on where you would contribute? Would you want to stay as a student, and if so, would you major in a certain major, given the pace of change?

Sam Altman: Probably I would not stay as a student, but only because I didn't, and I think it's reasonable to assume people are kind of going to make the same decisions they would make again. I think staying as a student is a perfectly good thing to do; it would just probably not be what I would have picked.

Ravi Belani: No, this is you — you have the Freaky Friday moment, you're reborn as a 19-year-old — what would you—

Sam Altman: Yeah, what I think I would — again, I think this is not a surprise, because people are kind of going to do what they're going to do — I think I would go work on research.

Ravi Belani: And where might you do that, Sam?

Sam Altman: I mean, obviously I have a bias towards OpenAI, but I think anywhere I could do meaningful AI research I would be very thrilled about.

Ravi Belani: But you'd be agnostic as to whether that's academia or private industry?

Sam Altman: I say this with sadness: I think I would pick industry, realistically. I think you kind of need to be at the place with so much compute.

[06:02] Ravi Belani: Okay. And if you did join on the research side — so we had Kazer here last week, who was a big advocate of not being a founder but actually joining an existing company to learn the chops. For the students who are wrestling with "should I start a company now, at 19 or 20, or should I go join another entrepreneurial research lab or venture," what advice would you give them?

Sam Altman: Well, since he gave the case to join a company, I'll give the other one, which is: I think you learn a lot just starting a company, and if that's something you want to do at some point, there's this thing Paul Graham says, and I think it's very deeply true — there's no pre-startup like there is pre-med. You kind of just learn how to run a startup by running a startup, and if that's what you're pretty sure you want to do, you may as well jump in and do it.

Ravi Belani: And so let's say somebody wants to start a company, and they want to be in AI. What do you think are the biggest near-term challenges you're seeing in AI that are the ripest for a startup? And just to scope that, what I mean is: what are the holes that you think are the top-priority needs for OpenAI that OpenAI will not solve in the next three years?

Sam Altman: Yeah, so I think this is a very reasonable question to ask in some sense, but I'm not going to answer it, because I think you should never take this kind of advice — about what startup to start — ever, from anyone. I think by the time there's something obvious enough that I or somebody else will sit up here and say it, it's probably not that great of a startup idea. And I totally understand the impulse — I remember when I was just asking people what startup I should start. But one of the most important things I believe about having an impactful career is that you have to chart your own course. If the thing you're thinking about is something that someone else is going to do anyway — or, more likely, something that a lot of people are going to do anyway — you should be somewhat skeptical of that. And I think a really good muscle to build is coming up with the ideas that are not the obvious ones to say. So I don't know what the really important idea is that I'm not thinking of right now, but I'm very sure someone in this room knows what that answer is. And I think learning to trust yourself, come up with your own ideas, and do the very non-consensus things — like, when we started OpenAI, that was an extremely non-consensus thing to do, and now it's the very obvious thing to do. Now I only have the obvious ideas, because I'm just stuck in this one frame, but I'm sure you all have the other ones.

[08:41] Ravi Belani: Can I ask it another way — and I don't know if this is fair or not — what questions, then, are you wrestling with that no one else is talking about?

Sam Altman: How to build really big computers. I mean, I think other people are talking about that, but we're probably looking at it through a lens that no one else is quite imagining yet. We're definitely wrestling with how, when we make not just grade-school or middle-schooler-level intelligence but PhD-level intelligence and beyond, the best way to put that into a product — the best way to have a positive impact with that on society and people's lives. We don't know the answer to that yet, so I think that's a pretty important thing to figure out.

Ravi Belani: Okay, and can we continue on that thread of how to build really big computers, if that's really what's on your mind? Can you share — I know there's been a lot of speculation, and probably a lot of hearsay too, about the semiconductor foundry endeavor that you are reportedly embarking on — can you share the vision? What would make this different?

Sam Altman: It's not just foundries, although that's part of it. If you believe, which we increasingly do at this point, that AI infrastructure is going to be one of the most important inputs to the future — this commodity that everybody's going to want — and that that is energy, data centers, chips, chip design, new kinds of networks — it's how we look at that entire ecosystem, and how we make a lot more of it. I don't think it'll work to just look at one piece or another; we've got to do the whole thing.

Ravi Belani: Okay, so there are multiple big problems.

Sam Altman: Yeah. I think this is just the arc of human technological history, as we build bigger and more complex systems.

[10:29] Ravi Belani: And does it grow — so, in terms of just the compute cost: correct me if I'm wrong, but GPT-3 was, I've heard, $100 million to do the model, and it was 175 billion parameters. GPT-4 cost $400 million, with 10x the parameters — it was almost 4x the cost but 10x the parameters. Correct me, adjust me.

Sam Altman: You know, I do know it, but I won't—

Ravi Belani: Oh, you're invited to — this is Stanford, Sam.

Sam Altman: Okay.

Ravi Belani: Even if you don't want to correct the actual numbers, if that's directionally correct: does the cost, do you think, keep growing with each subsequent—

Sam Altman: Yes.

Ravi Belani: And does it keep growing multiplicatively?

Sam Altman: Probably, I mean—

Ravi Belani: And so the question then becomes: how do you capitalize that?

Sam Altman: Well, look, I kind of think that giving people really capable tools, and letting them figure out how they're going to use them to build the future, is a super good thing to do and is super valuable, and I am super willing to bet on the ingenuity of you all, and everybody else in the world, to figure out what to do about this. So there is probably some more business-minded person than me at OpenAI somewhere who is worried about how much we're spending — but I kind of don't.

[11:55] Ravi Belani: Okay, so that doesn't cross — so, you know, OpenAI is phenomenal, ChatGPT is phenomenal, everything else, all the other models, are phenomenal. You've burned $520 million of cash last year. That doesn't concern you, in terms of thinking about the economic model of where the monetization source is actually going to be?

Sam Altman: Well, first of all, that's nice of you to say, but ChatGPT is not phenomenal. ChatGPT is mildly embarrassing at best. GPT-4 is the dumbest model any of you will ever have to use again, by a lot. But, you know, it's important to ship early and often, and we believe in iterative deployment. If we go build AGI in a basement while the world is blissfully walking blindfolded along, I don't think that makes us very good neighbors. So I think it's important, given what we believe is going to happen, to express our view about what we believe is going to happen — but more than that, the way to do it is to put the product in people's hands and let society co-evolve with the technology. Let society tell us what it collectively, and people individually, want from the technology; how to productize it in a way that's going to be useful; where the model works really well and where it doesn't; give our leaders and institutions time to react; give people time to figure out how to integrate it into their lives, to learn how to use the tool. Sure, some of you all cheat on your homework with it, but some of you all probably do very amazing, wonderful things with it too, and as each generation goes on, I think that will expand.

And that means that we ship imperfect products, but we have a very tight feedback loop, and we learn and we get better. It does kind of suck to ship a product that you're embarrassed about, but it's much better than the alternative, and in this case in particular, I think we really owe it to society to deploy iteratively. One thing we've learned is that AI and surprise don't go well together. People don't want to be surprised. People

play14:05

want a gradual roll out and the ability

play14:07

to influence these systems um that's how

play14:10

we're going to do it and there may

play14:13

be there could totally be things in the

play14:15

future that would change where we' think

play14:17

iterative deployment isn't such a good

play14:19

strategy um but it does feel like the

play14:24

current best approach that we have and I

play14:26

think we've gained a lot um from from

play14:29

doing this and you know hopefully s the

play14:31

larger world has gained something too

play14:34

whether we burn 500 million a year or 5

play14:38

billion or 50 billion a year I don't

play14:40

care I genuinely don't as long as we can

play14:43

I think stay on a trajectory where

play14:45

eventually we create way more value for

play14:47

society than that and as long as we can

play14:49

figure out a way to pay the bills like

play14:51

we're making AGI it's going to be

play14:52

expensive it's totally worth it and so

play14:54

So — I hear you — do you have a vision for 2030? If I say you crushed it, Sam — it's 2030, you crushed it — what does the world look like to you?

You know, maybe in some very important ways, not that different. We will be back here. There will be a new set of students. We'll be talking about how startups are really important and technology is really cool. We'll have this new great tool in the world. It would feel amazing if we got to teleport forward six years today and have this thing that was smarter than humans in many subjects and could do these complicated tasks for us — we could have these complicated programs written, or this research done, or this business started. And yet the sun keeps rising, people keep having their human dramas, life goes on. So: sort of super different in one sense, that we now have abundant intelligence at our fingertips, and then in some other sense not different at all.

Okay. And you mentioned AGI — artificial general intelligence. In a previous interview you defined that as software that could mimic the competence of a median human for tasks. Can you give me — if you had to do a best guess, or a range, of when you feel like that's going to happen?

I think we need a more precise definition of AGI for the timing question, because at this point, even with the definition you just gave, which is a reasonable one—

That's yours — I'm paring back what you said in an interview.

Well, that's good, because I'm going to criticize myself. It's too loose of a definition; there's too much room for misinterpretation in there to, I think, be really useful or get at what people really want. I kind of think what people want to know when they say "what's the timeline to AGI" is: when is the world going to be super different? When is the rate of change going to get super high? When is the way the economy works going to be really different? When does my life change?

And that, for a bunch of reasons, may be very different than we think. I can totally imagine a world where we build PhD-level intelligence in any area, and we can make researchers way more productive — maybe we can even do some autonomous research — and in some sense that sounds like it should change the world a lot. And I can imagine that we do that, and then we can detect no change in global GDP growth for years afterwards, something like that. Which is very strange to think about, and it was not my original intuition of how this was all going to go. So I don't know how to give a precise timeline of when we get to the milestone people care about. But when we get to systems that are way more capable than what we have right now: one year, and every year after. And that, I think, is the important point. So I've given up on trying to give the AGI timeline, but I think every year for the next many, we have dramatically more capable systems, every year.

I want to ask about the dangers of AGI — and gang, I know there are tons of questions for Sam; in a few moments I'll be opening it up, so start thinking about your questions. A big focus at Stanford right now is ethics. Can we talk about how you perceive the dangers of AGI? Specifically, do you think the biggest danger from AGI is going to come from a cataclysmic event, which makes all the papers, or is it going to be more subtle and pernicious — sort of like how everybody has ADD right now from using TikTok? Are you more concerned about the subtle dangers or the cataclysmic dangers — or neither?

I'm more concerned about the subtle dangers, because I think we're more likely to overlook those. The cataclysmic dangers a lot of people talk about and a lot of people think about, and I don't want to minimize those — I think they're really serious and a real thing — but I think we at least know to look out for that, and spend a lot of effort. The example you gave, of everybody getting ADD from TikTok or whatever, I don't think we knew to look out for. The unknown unknowns are really hard, and so I'd worry more about those, although I worry about both.

And are they unknown unknowns, or are there any that you can name that you're particularly worried about?

Well, then they'd kind of be known unknowns. I am worried just about — so even though I think in the short term things change less than we think, as with other major technologies, in the long term I think they change more than we think. And I am worried about what rate society can adapt to something so new, and how long it'll take us to figure out the new social contract versus how long we get to do it. I'm worried about that.

Okay, I'm going to open it up — but first I want to ask you a question about one of the key things that we're now trying to bring into the curriculum as things change so rapidly, which is resilience.

That's really good.

And the cornerstone of resilience is self-awareness. So I'm wondering if you feel that you're pretty self-aware of your driving motivations as you are embarking on this journey.

So first of all, I believe resilience can be taught. I believe it has long been one of the most important life skills, and in the future — over the next couple of decades — I think resilience and adaptability will be more important than they've been in a very long time. So I think that's really great. On the self-awareness question: I think I'm self-aware, but everybody thinks they're self-aware, and whether I am or not is sort of hard to say from the inside.

Can I ask you sort of the questions that we ask in our intro classes on self-awareness?

Sure.

It's like the Peter Drucker framework. What do you think your greatest strengths are, Sam?

I think I'm not great at many things, but I'm good at a lot of things, and I think breadth has become an underrated thing in the world — everyone gets hyper-specialized. So if you're good at a lot of things, you can seek connections across them. I think you can then come up with ideas that are different from what everybody else has, or what the experts in one area have.

And what are your most dangerous weaknesses?

Most dangerous — that's an interesting framework for it. I think I have a general bias to be too pro-technology, just because I'm curious and I want to see where it goes, and I believe that technology is on the whole a net good thing. But I think that is a worldview that has overall served me and others well, and thus got a lot of positive reinforcement — and it is not always true, and when it's not been true, it has been pretty bad for a lot of people.

And then: Harvard psychologist David McClelland has this framework that all leaders are driven by one of three primal needs — a need for affiliation, which is a need to be liked; a need for achievement; and a need for power. If you had to rank those, what would yours be?

I think at various times in my career, all of those. There are these levels that people go through. At this point I feel driven by wanting to do something useful and interesting. And I definitely had the money and the power and the status phases.

Okay. And when did you last feel most like yourself?

I always—

And then one last question: what are you most excited about with GPT-5 that's coming out — what are you most excited about with the next version of ChatGPT that we're all going to see?

I don't know yet. I mean, this sounds like a cop-out answer, but I think the most important thing about GPT-5, or whatever we call that, is just that it's going to be smarter. And this sounds like a dodge, but I think that's among the most remarkable facts in human history: that we can just do something, and we can say right now, with a high degree of scientific certainty, that GPT-5 is going to be a lot smarter than GPT-4, and GPT-6 is going to be a lot smarter than GPT-5, and we are not near the top of this curve, and we kind of know what to do. And this is not like it's going to get better in one area; it's not that it's going to get better at this eval, or this subject, or this modality. It's just going to be smarter in the general sense. And I think the gravity of that statement is still underrated.

Okay, that's great. Guys, Sam is really here for you — he wants to answer your questions — so we're going to open it up.

Thank you so much for joining us. I'm a junior here at Stanford, and I wanted to talk to you about responsible deployment of AGI. As you continually inch closer to that, how do you plan to deploy it responsibly at OpenAI — to prevent stifling human innovation, and to continue to spur it?

So, I'm actually not worried at all about stifling human innovation. I really deeply believe that people will just surprise us on the upside with better tools. I think all of history suggests that if you give people more leverage, they do more amazing things, and we all get to benefit from that — that's just kind of great. I am, though, increasingly worried about how we're going to do this all responsibly. I think as the models get more capable, we have a higher and higher bar. We do a lot of things like red teaming and external audits, and I think those are all really good. But as the models get more capable, we'll have to deploy even more iteratively, and have an even tighter feedback loop on looking at how they're used, and where they work, and where they don't work. This world that we used to operate in, where we could release a major model update every couple of years — we probably have to find ways to increase the granularity on that and deploy more iteratively than we have in the past. It's not super obvious to us yet how to do that, but I think that'll be key to responsible deployment. And also, the way we have all of the stakeholders negotiate what the rules of AI need to be — that's going to get more complex over time, too.

Thank you. Next question.

You mentioned before that there's a growing need for larger and faster computers. However, many parts of the world don't have the infrastructure to build those data centers or those large computers. How do you see global innovation being impacted by that?

So, two parts to that. One: no matter where the computers are built, I think global and equitable access to use the computers — for training as well as inference — is super important. One of the things that's very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can't, or don't for a good reason want to, operate. How we think about making training compute more available to the world is going to become increasingly important. I do think we get to a world where we sort of think about it as a human right to get access to a certain amount of compute, and we've got to figure out how to distribute that to people all around the world. There's a second thing, though, which is that I think countries are going to increasingly realize the importance of having their own AI infrastructure, and we want to figure out a way — and we're now spending a lot of time traveling around the world — to build them in the many countries that'll want to build these, and I hope we can play some small role there in helping that happen.

Terrific, thank you.

My question was: what role do you envision for AI in the future of space exploration or colonization?

I think space is not that hospitable for biological life, obviously, so if we can send the robots, that seems easier.

Hey Sam. My question is for a lot of the founders in the room, and I'm going to give you the question and then explain why I think it's complicated. My question is about how you know an idea is non-consensus, and the reason I think it's complicated is because it's easy to overthink. I think today even you yourself say AI is the place to start a company — I think that's pretty consensus, maybe rightfully so; it's an inflection point. I think it's hard to know if an idea is non-consensus, depending on the group that you're talking about: the general public has a different view of tech from the tech community, and even tech elites have a different point of view from the tech community. So I was wondering how you verify that your idea is non-consensus enough to pursue.

I mean, first of all, what you really want is to be right. Being contrarian and wrong is still wrong. If you predicted, like, 17 out of the last two recessions, you probably were contrarian for the two you got right — probably not even, necessarily — but you were wrong 15 other times. And so I think it's easy to get too excited about being contrarian. Again, the most important thing is to be right, and the group is usually right. But where the most value is, is when you are contrarian and right.

And that doesn't always happen in sort of a zero-or-one kind of way. Everybody in the room can agree that AI is the right place to start a company, and if one person in the room figures out the right company to start, and then successfully executes on that, and everybody else thinks "ah, that wasn't the best thing you could do" — that's what matters. So it's okay to go with conventional wisdom when it's right, and then find the area where you have some unique insight. In terms of how to do that: I do think surrounding yourself with the right peer group is really important, and finding original thinkers is important, but there is part of this where you kind of have to do it solo — or at least part of it solo, or with a few other people who are going to be your co-founders or whatever.

And I think by the time you're too far into the "how can I find the right peer group," you're somehow in the wrong framework already. So, learning to trust yourself and your own intuition and your own thought process — which gets much easier over time. No one, no matter what they say, is, I think, truly great at this when they're just starting out, because you kind of just haven't built the muscle, and all of your social pressure and all of the evolutionary pressure that produced you was against that. It's something that you get better at over time — so don't hold yourself to too high of a standard too early on.

Hi Sam. I'm curious to know what your predictions are for how energy demand will change in the coming decades, and how we achieve a future where renewable energy sources are one cent per kilowatt-hour.

I mean, it will go up for sure — well, not for sure; you can come up with all these weird, depressing futures where it doesn't go up. I would like it to go up a lot, and I hope that we hold ourselves to a high enough standard where it does go up. I forget exactly what the world's electrical generating capacity is right now, but let's say it's something like 3,000 or 4,000 gigawatts. Even if we add another 100 gigawatts for AI, it doesn't materially change it that much — but it changes it some. And if we someday add a thousand gigawatts for AI, it does; that's a material change. But there are a lot of other things that we want to do, and energy does seem to correlate quite a lot with the quality of life we can deliver for people.

My guess is that fusion eventually dominates electrical generation on Earth. I think it should be the cheapest, most abundant, most reliable, densest source. I could be wrong about that, and it could be solar plus storage. My guess is that most likely it's going to be 80/20 one way or the other, and there'll be some cases where one of those is better than the other. But those seem like the two bets for really global-scale, one-cent-per-kilowatt-hour energy.

Hi Sam, I have a question about OpenAI — what happened last year. What's the lesson you learned — because you talk about resilience — what's the lesson you learned from leaving the company and now coming back? And what made you come back, because Microsoft also gave you an offer? Can you share more?

I mean, the best lesson I learned was that we had an incredible team that totally could have run the company without me — and did, for a couple of days. And also that the team was super resilient. We knew that some crazy things, and probably more crazy things, will happen to us between here and AGI, as different parts of the world have stronger and stronger emotional reactions and the stakes keep ratcheting up. I thought that the team would do well under a lot of pressure, but you never really know until you get to run the experiment, and we got to run the experiment, and I learned that the team was super resilient and ready to run the company.

In terms of why I came back: originally, when — so it was like the next morning, the board called me, like, "what do you think about coming back?" — and I was like, no. I'm mad. And then I thought about it, and I realized just how much I loved OpenAI, how much I loved the people, the culture we had built, the mission, and I kind of wanted to finish it altogether.

This is obviously a really sensitive—

Oh, it's not.

—but I imagine that was — okay. Well then, can we talk about the structure of it? Because this Russian-doll structure of OpenAI, where you have the nonprofit owning the for-profit — you know, when we're trying to teach principled entrepreneurship here—

We got to the structure gradually. It's not what I would go back and pick if we could do it all over again, but we didn't think we were going to have a product when we started. We were just going to be an AI research lab. It wasn't even clear — we had no idea about a language model, or an API, or ChatGPT. So, if you're going to start a company, you've got to have some theory that you're going to sell a product someday, and we didn't think we were going to. We didn't realize we were going to need so much money for compute. We didn't realize we were going to have this nice business.

So what was your intention when you started it?

We just wanted to push AI research forward. We thought that — and I know this gets back to motivations, but that's the pure motivation.

There's no motivation around making money, or power? I cannot overstate how foreign of a concept — I mean for you personally, not for OpenAI — you weren't starting—

Well, I had already made a lot of money, so it was not like a big — I mean, I don't want to claim some moral purity here; it was just that that was not the driver of my life.

Okay. And the reason why I'm asking is, you know, when we're teaching about principle-driven entrepreneurship here: you can understand principles inferred from organizational structures. When the United States was set up, the architecture of governance is the Constitution — it's got three branches of government and all these checks and balances — and you can infer certain principles: that there's a skepticism of centralizing power, that things will move slowly and it's hard to get things to change, but it'll be very, very stable. If you — not to parrot Billie Eilish — but if you look at the OpenAI structure and you think, what was it made for? You have a near hundred-billion-dollar valuation, and you've got a very, very limited board — a nonprofit board — which is supposed to look after its fiduciary duties to the—

Again, it's not what we would have done if we knew then what we know now, but you don't get to play life in reverse, and you have to just adapt. There's a mission we really cared about. We thought AI was going to be really important. We thought we had an algorithm that learned. We knew it got better with scale; we didn't know how predictably it got better with scale. And we wanted to push on this. We thought this was going to be a very important thing in human history. We didn't get everything right, but we were right on the big stuff, and our mission hasn't changed, and we've adapted the structure as we go and will adapt it more in the future.

But, you know, life is not a problem set. You don't get to solve everything really nicely all at once. It doesn't work quite like it works in the classroom as you're doing it. And my advice is just: trust yourself to adapt as you go. It'll be a little bit messy, but you can do it.

And I just ask this because of the significance of OpenAI. You have a board which is all supposed to be financially independent, so that they're making these decisions as a nonprofit. The stakeholder that they are a fiduciary of isn't the shareholders — it's humanity. Everybody's independent; there's no financial incentive that anybody on the board has, including yourself, with OpenAI.

Well — Greg was — I — okay, first of all, I think making money is a good thing. I think capitalism is a good thing. My co-founders on the board have had financial interests, and I've never once seen them not take the gravity of the mission seriously. But, you know, we've put a structure in place that we think is a way to get incentives aligned, and I do believe incentives are superpowers. But I'm sure we'll evolve it more over time, and I think that's good, not bad.

And with the OpenAI fund — the new fund — you don't get any carry in that, and you're not following on investments into those? Okay — okay, okay. Thank you. We can keep talking about this — I know you want to go back to students, and I do too, so we'll keep going to the students.

students how do you expect that AGI will

play37:23

change geopolitics and the balance of

play37:24

power in the world um like maybe more

play37:29

than any

play37:30

other technology um I don't I I think

play37:34

about that so much and I have such a

play37:37

hard time saying what it's actually

play37:38

going to do um I or or maybe more

play37:42

accurately I have such a hard time

play37:44

saying what it won't do and we were

play37:46

talking earlier about how it's like not

play37:47

going to CH maybe it won't change

play37:48

day-to-day life that much but the

play37:50

balance of power in the world it feels

play37:53

like it does change a lot but I don't

play37:55

have a deep answer of exactly how

play37:58

Thanks so much. I was wondering, in the deployment of general intelligence, and also for responsible AI, how much do you think it is necessary that AI systems are somehow capable of recognizing their own insecurities or uncertainties, and actually communicating them to the outside world?

I always get nervous anthropomorphizing AI too much, because I think it can lead to a bunch of weird oversights. But if we ask how much AI can recognize its own flaws, I think that's very important to build right now. The ability to recognize an error in reasoning, and to have some sort of introspection ability like that, seems to me really important to pursue.

Hey Sam, thank you for giving us some of your time today and coming to speak. From the outside looking in, we all hear about the culture and togetherness of OpenAI, in addition to the intensity and speed at which you all work, clearly seen from ChatGPT and all your breakthroughs, and also when you were temporarily removed from the company by the board and all of your employees tweeted "OpenAI is nothing without its people." What would you say is the reason behind this? Is it the binding mission to achieve AGI, or something even deeper? What is pushing the culture every day?

I think it is the shared mission. I mean, I think people like each other, and we feel like we're in the trenches together doing this really hard thing. But I think it really is a deep sense of purpose and loyalty to the mission, and when you can create that, I think it is the strongest force for success at any startup, at least that I've seen among startups. We try to select for that in the people we hire, but even people who come in not really believing that AGI is going to be such a big deal, and that getting it right is so important, tend to believe it after the first three months or so. That's a very powerful cultural force that we have.

play40:03

Thanks. Currently there are a lot of concerns about the misuse of AI in the immediate term, with issues like global conflicts and the election coming up. What do you think can be done by the industry, governments, and honestly people like us in the immediate term, especially with very strong open-source models?

One thing that I think is important is not to pretend that this technology, or any other technology, is all good. I believe that AI will be very net good, tremendously net good, but like any other tool it will be misused. You can do great things with a hammer, and you can kill people with a hammer. I don't think that absolves us, or you all, or society from trying to mitigate the bad as much as we can and maximize the good. But I do think it's important to realize that with any sufficiently powerful tool, you do put power in the hands of tool users, or you make some decisions that constrain what people in society can do. I think we have a voice in that; I think you all have a voice in that; I think governments and our elected representatives in democratic processes have the loudest voice in that. But we're not going to get this perfectly right. We, society, are not going to get this perfectly right, and a tight feedback loop, I think, is the best way to get it closest to right. As for how that balance of safety versus freedom and autonomy gets negotiated, I think it's worth studying how that went with previous technologies, and we'll do the best we can here. We, society, will do the best we can here.

Gang, actually, I've got to cut it, sorry. I want to be very sensitive to time; I know the interest far exceeds the time, and the love for Sam. Sam, I know it is your birthday. I don't know if you can indulge us, because there's a lot of love for you, so I wonder if we can all just sing happy birthday.

No, no, no, please, no.

We want to make you very uncomfortable.

I'd much rather do one more question; this is less interesting to you.

Okay, you can do one more question quickly. [The audience sings] "...happy birthday, dear Sam, happy birthday to you."

Twenty seconds of awkwardness. Is there a burner question, somebody who's got a real burner? We only have 30 seconds, so make it short.

Hi, I wanted to ask if the prospect of making something smarter than any human could possibly be scares you.

It of course does, and I think it would be really weird, and a bad sign, if it didn't scare me. Humans have gotten dramatically smarter and more capable over time. You are dramatically more capable than your great-great-grandparents, and there's almost no biological drift over that period. Sure, you eat a little bit better and you got better healthcare (maybe you eat worse, I don't know), but that's not the main reason you're more capable. You are more capable because the infrastructure of society is way smarter and way more capable than any human, and through that it made you. Society, the people that came before you, made you: the internet, the iPhone, a huge amount of knowledge available at your fingertips. You can do things that your predecessors would find absolutely breathtaking.

Society is far smarter than you now. Society is an AGI, as far as you can tell. And the way that happened was not any individual's brain, but the space between all of us: that scaffolding that we build up and contribute to, brick by brick, step by step, and then use to go to far greater heights for the people that come after us. Things that are smarter than us will contribute to that same scaffolding. You will have, and your children will have, tools available that you didn't, and that scaffolding will have gotten built up to greater heights. That's always a little bit scary, but I think it's way more good than bad. People will do better things and solve more problems, and the people of the future will be able to use these new tools, and the new scaffolding that these new tools contribute to. If you think about a world where AI is making a bunch of scientific discoveries, what happens to that scientific progress? It just gets added to the scaffolding, and then your kids can do new things with it, or you in ten years can do new things with it. But the way it's going to feel to people, I think, is not that there is this much smarter entity, because in some sense we're much smarter, or at least more capable, than our great-great-great-grandparents, but that any individual person can just do more.

On that, we're going to end it, so let's give Sam a round of applause.
