Sam Altman, CEO of OpenAI | Podcast | In Good Company | Norges Bank Investment Management

Podcast: In Good Company
5 Sept 2023 · 49:41

Summary

TL;DR: In this insightful interview, Sam Altman, the CEO of OpenAI, shares his ambitious vision for the future of artificial general intelligence (AGI) and its potential to benefit humanity. He discusses the rapid progress of language models like ChatGPT, the challenges of developing advanced AI systems, and the importance of democratizing this technology globally. Altman emphasizes the need for responsible governance and self-regulation as AI capabilities increase. He also touches on the economic implications of AI, the role of computation and funding, and the competitive landscape with other tech giants. Throughout the conversation, Altman's long-term thinking and unwavering commitment to pushing the boundaries of AI shine through.

Takeaways

  • 😎 Sam Altman, CEO of OpenAI, is leading the charge in developing powerful AI systems like ChatGPT, with the goal of creating artificial general intelligence (AGI) that benefits all of humanity.
  • 🔭 Altman believes that empirically testing and deploying AI technologies is crucial to understanding their risks, benefits, and potential evolution, rather than relying solely on philosophical speculation.
  • 🌍 He envisions a future where extremely capable AI systems will fundamentally change how we think about and interact with the world, potentially by the end of this decade.
  • 🤖 A key challenge is determining whose values the AGI should be aligned with and how to distribute access and benefits fairly across societies and countries.
  • 🕰️ Altman emphasizes the importance of long-term thinking and not being constrained by conventional wisdom, which he sees as a competitive advantage for OpenAI.
  • 💼 He stresses the need for leaders to prioritize talent recruitment, team development, vision communication, and strategic thinking, rather than getting bogged down in day-to-day details.
  • 🌐 Altman believes that AI has the potential to significantly improve productivity and lift people out of poverty globally by democratizing access to intelligence and expertise.
  • 🔬 He is also excited about the prospects of fusion energy, which he sees as complementary to AI in achieving abundance and addressing climate change.
  • 🤝 OpenAI has a productive partnership with Microsoft, with aligned goals at the highest level, despite some occasional misalignments at lower levels.
  • 🌉 Altman recognizes the need for reasonable global regulation, particularly for the most powerful AI systems that could cause grievous harm, while advocating for making models like GPT-4 widely available.

Q & A

  • What is ChatGPT, and how has it impacted the world?

    -ChatGPT is an advanced language model created by OpenAI that shocked the world with its capabilities when it was released in November 2022. It has been described as the fastest-growing product in history and has led to widespread adoption and integration across various industries and applications.

  • What is OpenAI's vision for the future where humans and AI coexist?

    -OpenAI believes that answering this question empirically is crucial, as many past predictions have been proven wrong. They aim to deploy AI into the world, observe how people use it, understand the risks and benefits, and then co-evolve the technology with society based on these observations.

  • How does Sam Altman define general intelligence (AGI)?

    -Sam Altman defines AGI as a system that can figure out new scientific knowledge that humans on their own could not. He believes that by the end of this decade, we may have extremely powerful systems that change the way we currently think about the world.

  • What are some of the challenges in ensuring that AI benefits all of humanity?

    -Some of the key challenges include deciding whose values the AI systems should align with, determining the level of flexibility and control given to individual users and countries, and finding ways to share the benefits of AI equitably, such as through increased agency and opportunities for people rather than just handouts.

  • How does OpenAI approach the development of AI models like GPT-4?

    -OpenAI aims to make their most capable models, like GPT-4, widely available globally, even if people use them for purposes that OpenAI might not always agree with. They believe in democratizing this technology as much as possible.

  • What role does Microsoft play in OpenAI's efforts?

    -Microsoft is a key partner of OpenAI. They build the computers that OpenAI uses to train their models, and both companies use these models. Their goals are generally aligned, although there may be some misalignments at lower levels that need to be addressed through communication and compromise.

  • How does Sam Altman approach talent assessment and leadership development at OpenAI?

    -Sam Altman believes he has developed a strong ability to assess talent through extensive practice. For leadership development, he tries to promote from within and warns new leaders upfront about the common pitfalls they are likely to face, encouraging them to learn from their mistakes over time.

  • What are Sam Altman's thoughts on the potential impact of AI on productivity?

    -Sam Altman believes that AI has the potential to significantly increase productivity, and he has set an ambitious goal for OpenAI employees to improve their productivity by around 20% over a 12-month period, leveraging the tools and models they are developing.

  • What is Sam Altman's perspective on the role of government regulation in the AI space?

    -While Sam Altman acknowledges the need for government regulation, especially for the most powerful AI systems capable of causing global harm, he believes that individual countries and regions should maintain the right to self-determine rules and guidelines for less powerful AI applications. He sees the potential for reasonable regulation, such as requiring disclosure when interacting with an AI system.

  • Apart from AI, what other technology is Sam Altman most excited about?

    -Sam Altman is highly excited about the potential of fusion energy technology, as he believes that bringing down the cost and increasing the abundance of clean energy, along with reducing the cost of intelligence through AI, are the two most important factors in achieving global abundance.

Outlines

00:00

🤖 The Rise of ChatGPT and OpenAI's Vision

In this introductory paragraph, Sam Altman, the CEO of OpenAI, discusses the groundbreaking release of ChatGPT and OpenAI's mission to create artificial general intelligence (AGI) that benefits humanity. He explains his excitement about working at the forefront of this technological revolution and shares his vision of a future where humans and AI coexist. Altman acknowledges the difficulty in predicting the exact trajectory of this technology but stresses the importance of empirically understanding its impacts, risks, and benefits as it evolves.

05:01

🔮 The Future of AGI and Its Global Impact

Altman reflects on the potential of achieving true AGI by the end of the decade, which he defines as a system capable of discovering new scientific knowledge beyond human capabilities. He discusses the challenges of determining whose values should guide the alignment of AGI and how to equitably share its benefits across the world. Altman emphasizes the need for global governance over powerful AI systems while allowing flexibility for individual users and countries. He also touches on the geopolitical implications of AI and the importance of democratizing access to these technologies.

10:02

🌍 AI's Role in Lifting Up the Developing World

In this paragraph, Altman expresses his belief that AI technologies, particularly the democratization of intelligence, will have a disproportionately positive impact on the developing world by providing access to expert knowledge and resources that were previously unaffordable. He acknowledges potential roadblocks, such as the trajectory of technology development or geopolitical factors, but remains optimistic about AI's potential to alleviate global poverty and inequality.

15:03

💻 The Rapid Integration of AI into Various Industries

Altman discusses the astonishing pace at which companies are integrating ChatGPT and other AI models into various products and services, such as cars, customer service, and legal document review. He acknowledges that while the current models have significant limitations, people are finding ingenious ways to leverage them, leading to substantial productivity gains. Altman also touches on the challenges of continuously training and depreciating these large models while generating valuable intellectual property for future iterations.

20:04

🤝 Partnerships, Regulation, and Democratizing AI Access

In this paragraph, Altman reflects on OpenAI's partnership with Microsoft, emphasizing their aligned goals and the importance of compromise in resolving any disagreements. He discusses the need for reasonable regulation, such as mandating disclosure when interacting with AI systems, and the potential for global governance over exceptionally powerful AI. Altman also reiterates OpenAI's commitment to democratizing access to their models, even if users employ them for unintended purposes.

25:08

🚀 Productivity Gains and the Limits of AI Progress

Altman shares his ambitious goal of achieving a roughly 20% productivity increase across OpenAI within the next 12 months, attributing this target to the potential of AI tools. He expresses his belief in the exponential progress of AI and sees no inherent limitations to its continued advancement. Altman also touches on the topic of AI's impact on global power dynamics and the potential for unexpected breakthroughs from other countries or entities.

30:08

👩‍💻 Managing Researchers and Fostering Innovation

In this paragraph, Altman discusses his approach to managing researchers at OpenAI, emphasizing the importance of providing a high-level vision and ample freedom for exploration. He acknowledges the challenges OpenAI faced in rediscovering an effective research culture within a company setting and the need to strike a balance between allowing diverse ideas and aligning efforts towards promising directions. Altman also reflects on the lack of groundbreaking scientific breakthroughs from Silicon Valley companies before OpenAI's emergence.

35:09

🧠 Assessing Talent and Long-Term Thinking

Altman shares his thoughts on assessing talent, citing his experience at Y Combinator and his ability to recognize intelligence, track records, and novel ideas in candidates through numerous conversations. He discusses the importance of long-term thinking as a competitive advantage, which was a key factor in OpenAI's decision to pursue AGI despite skepticism from others. Altman also touches on the cultural differences between Silicon Valley and Europe regarding innovation and tolerance for failure.

40:10

⚛️ Excitement for Fusion Energy and Reading Habits

In this paragraph, Altman expresses his excitement about the potential of fusion energy to provide abundant, clean power and solve climate change challenges. He considers fusion and AI as the two most important technologies for achieving true abundance in the world. Altman also reflects on his diminished reading habits due to the demands of his work but recommends the book 'The Beginning of Infinity' as an inspiring read for young people.

45:11

🏆 Legacy and Continuing the Pursuit of AGI

In the final paragraph, Altman acknowledges that he is too focused on the present challenges and tactical problems at OpenAI to contemplate his legacy. He expresses a determination to continue building towards the goal of AGI, navigating the daily obstacles and issues that arise. Altman concludes by expressing gratitude for the opportunity to share his thoughts and receive well-wishes for OpenAI's ongoing pursuit of this ambitious endeavor.

Keywords

💡Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) refers to the development of AI systems that can match or surpass human intelligence in general cognitive abilities, such as reasoning, problem-solving, and learning across multiple domains. In the video, Sam Altman discusses OpenAI's goal of achieving AGI and its potential to revolutionize various aspects of society. He mentions that by the end of this decade, they expect to have extremely powerful systems that could change how we think about the world, hinting at the possibilities of AGI.

💡Language Models

Language models are AI systems trained on vast amounts of text data to understand and generate human-like language. In the video, Sam Altman mentions that OpenAI initially pursued various ideas but eventually realized the potential of language models, leading them to focus their efforts on this direction. He cites GPT (Generative Pre-trained Transformer) models as examples of powerful language models that have garnered significant attention and usage.

💡Democratization of AI

Democratization of AI refers to making AI technology widely accessible and available to people and organizations around the world, regardless of their resources or geographic location. Sam Altman emphasizes OpenAI's goal of globally democratizing AI, making their models like GPT-4 available to anyone willing to pay for the API rates. He believes that this democratization will particularly benefit poorer regions by providing access to expertise and intelligence that was previously unaffordable.

💡Alignment

Alignment refers to the challenge of ensuring that advanced AI systems are aligned with human values and goals, preventing unintended or harmful consequences. Sam Altman discusses the technical and social aspects of alignment, mentioning the need to decide whose values the AI should be aligned with and how to share the benefits of AI equitably. He expresses cautious optimism about solving the technical alignment problem but acknowledges the difficulty in reaching societal consensus on the broader implications.

💡Regulation

Regulation refers to the establishment of rules, laws, and policies to govern the development and use of AI systems. Sam Altman discusses the need for regulation, particularly for highly capable AI systems that could potentially cause significant harm. He suggests that while individual countries may regulate AI applications within their jurisdictions, global regulation will be necessary for the most powerful AI systems, similar to the regulation of nuclear weapons.

💡Productivity

Productivity refers to the efficiency and output achieved in a given amount of time or with a given set of resources. Sam Altman believes that AI will significantly enhance productivity, stating that he expects a 20% productivity increase within his company in the next 12 months due to the introduction of new AI tools and capabilities. He considers AI's potential to boost productivity as one of the key benefits it can bring to the world.

💡Competition

Competition refers to the rivalry among companies and organizations in developing and commercializing AI technologies. Sam Altman acknowledges that while OpenAI is currently a leader in the field, there will be many players contributing to the advancement of AI models and capabilities. He believes that competition and user preferences will ultimately drive the direction and adoption of AI systems, with different organizations experimenting with various features and approaches.

💡Fusion Energy

Fusion energy refers to the process of harnessing the immense energy released by fusing atomic nuclei together, as opposed to the fission process used in nuclear power plants. Sam Altman expresses excitement about the potential of fusion energy, stating that he believes fusion will soon become a reality. He sees fusion, along with AI, as one of the most important technologies for achieving global abundance by providing clean, virtually unlimited energy at a very low cost.

💡Long-term Thinking

Long-term thinking refers to the ability and willingness to consider and plan for the long-term future, beyond immediate or short-term goals. Sam Altman credits his long-term thinking as one of his superpowers, allowing OpenAI to pursue the ambitious goal of AGI despite initial skepticism and criticism. He emphasizes the importance of not being constrained by common wisdom and evaluating problems and solutions from a long-term perspective.

💡Research Culture

Research culture refers to an environment that fosters and encourages scientific exploration, experimentation, and the pursuit of new knowledge. Sam Altman discusses the need for OpenAI to rediscover and cultivate a strong research culture, which he believes had been lacking in Silicon Valley for some time. He describes the process of finding the right balance between giving researchers freedom to explore novel ideas and aligning their efforts towards the company's overarching goals.

Highlights

OpenAI shocked the world last November with ChatGPT, and OpenAI is not only creating models, it's creating the future.

The best way to predict the future is to invent it, and we're trying to see where the technology takes us, deploy it into the world to actually understand how people are using it, where the risks are, where the benefits are, what people want, how they'd like it to evolve, and then sort of co-evolve the technology with society.

When we have a system that can figure out new scientific knowledge that humans on their own could not, I would call that an AGI.

By the end of this decade, we expect to have extremely powerful systems that change the way we currently think about the world.

The "holy" moments have not been about new technology or new models, but about the breadth of use cases the world is finding for it.

An uncommon use case is a guy who runs a laundromat business and uses ChatGPT for marketing copy, customer service, legal documents, and more - a virtual employee in every category.

Altman believes AI will have the most positive impact on poor people, by democratizing intelligence and making expert advice available to everyone.

Altman is committed to making GPT models as widely available as possible, even if people use them for things OpenAI might not always feel are the best.

Governments will have to regulate AI, but OpenAI doesn't think individual countries will give up self-determination for what models can say. Global regulation will likely only happen for technology capable of grievous worldwide harm.

Altman aims for a 20% productivity increase at OpenAI over the next 12 months, driven by their AI tools.

Altman believes the key to developing leaders is having them spend enough time hiring talent, developing teams, communicating vision, and thinking strategically - things leaders often fail at initially.

For researchers, OpenAI provides a high-level vision and resources, but gives huge freedom to pursue their own directions.

Altman is most excited about fusion energy outside of AI, believing it will lead to abundance and solving climate change.

Altman recommends the book 'The Beginning of Infinity' for inspiring people to believe that any problem can be solved, and to go off and solve it.

Altman is focused on the present tactical challenges of building AGI rather than thinking about his future legacy.

Transcript

play00:00

[Music]

play00:01

open AI shocked the world uh last

play00:03

November with uh chat GPT and um

play00:06

open AI is not only creating models it's

play00:09

uh creating the future so Sam is an

play00:11

honor to have you on the podcast thanks

play00:13

a lot for having me it's great to be

play00:14

here

play00:15

now how does it feel to spearhead this

play00:17

revolution

play00:19

ah

play00:21

it's definitely a little surreal it is

play00:23

uh

play00:25

it's like a very exciting moment in you

play00:29

know the history of technology and to

play00:30

get to work with the people who are who

play00:32

are creating this

play00:33

um it's like a is a great honor and uh I

play00:36

can't imagine anything more exciting to

play00:38

be doing

play00:39

no I can't imagine

play00:41

it's definitely a lot I can see that now

play00:45

big picture what's the vision of of the

play00:48

world where um humans and AI coexist

play00:52

well you know I one one thing that we

play00:55

believe is that you have to answer that

play00:57

question empirically there's been a lot

play00:59

of philosophizing about it for a long

play01:00

time very smart people have had very

play01:02

strong opinions I think they've all been

play01:05

wrong and it's just a question of how

play01:06

wrong and

play01:08

the course that a technology takes is is

play01:11

difficult to predict in advance I'm a I

play01:14

love that Allen K quote that the best

play01:15

way to invent the but the best way to

play01:17

predict the future is to invent it

play01:19

and so what we're trying to do is see

play01:22

where the technology takes us deploy it

play01:25

into the world to actually understand

play01:26

how people are using it where the risks

play01:29

are where the benefits are what people

play01:31

want how how they'd like it to evolve

play01:33

and then sort of co-evolve the

play01:35

technology with society and you know I

play01:39

think if you asked people five or ten

play01:41

years ago what the deployment of

play01:43

powerful AI into the world is going to

play01:44

look like they wouldn't have guessed

play01:46

that it looks like this

play01:48

um people had very different ideas at

play01:49

the time but this was what turned out to

play01:52

be where the technology leads and and

play01:54

where the science leads

play01:56

and so we try to follow that and how far

play01:59

into the future can you see now

play02:03

uh the next few years seem pretty clear

play02:06

to us

play02:07

you know we kind of know where these

play02:09

models are going to go we have a roadmap

play02:10

we're very excited about uh we can

play02:13

imagine both the Science and Technology

play02:16

but also the product a few years out

play02:19

and beyond that you know we're gonna

play02:20

learn a lot we'll be a lot smarter in

play02:22

two years than we are today yeah and and

play02:24

what kind of uh you know holy

play02:26

moments have you had lately

play02:30

um well remember that we've been

play02:33

you know we we've been thinking about

play02:35

this and playing around with this

play02:37

technology for a long time so the world

play02:38

has had to catch up very quickly but we

play02:40

we have less holy moments because

play02:43

you know we've been expecting this and

play02:45

we've been building it for a while and

play02:47

it you know we don't it doesn't feel as

play02:49

discontinuous to us

play02:52

but and what kind of big things have you

play02:54

seen since chat to petite sport

play02:57

well we

play03:02

the biggest ones have not been about new

play03:04

technology or new models but about the

play03:08

breadth of use cases the world is

play03:10

finding to do this so the holy

play03:12

moments have not been like oh now the

play03:13

model can do this now we now we figured

play03:15

out that because again you know some

play03:17

would expected that but seeing how much

play03:20

people are

play03:22

coming to rely on these models to do

play03:24

their work in their current form which

play03:26

is very imperfect and broken you know

play03:27

we're the first to say these models are

play03:30

still not very good they hallucinate a

play03:32

lot they're not very smart they have all

play03:33

these problems and yet people are using

play03:37

their human Ingenuity to figure out how

play03:39

to work around that and still leverage

play03:41

these tools and so watching people that

play03:43

are remaking their workflows for a world

play03:46

with llms has been

play03:49

big and some examples of new things

play03:53

you've seen new user cases applications

play03:58

um you know a common one is around how

play04:01

developers are changing their workflow

play04:03

to uh you know spend like half their

play04:06

time in chat GPT you hear people say um

play04:07

they feel like two or three or sometimes

play04:09

more product times productive than

play04:11

before

play04:12

um an uncommon one is I met a guy who

play04:15

runs a laundromat business as like a

play04:18

one-person thing and uses chat GPT for

play04:22

um coming up with a marketing copy

play04:24

dealing with like customer service uh

play04:27

helping review legal documents we need a

play04:28

long list of things and he's like I got

play04:30

a virtual employee in every category

play04:31

that was pretty cool

play04:34

and what about things like uh brain

play04:36

implants and getting it to help with

play04:38

speech and so on which we just saw

play04:40

recently

play04:42

um

play04:43

I'm very excited about neural interfaces

play04:45

but I am not currently super excited

play04:48

about brain implants I don't feel ready

play04:49

to want one of those I would love a

play04:52

device uh that could like read my mind

play04:54

and

play04:56

but I would like it to do that without

play04:58

having to put a hole in my skull and I

play05:01

think that's possible

play05:02

how

play05:04

oh there's many Technologies depending

play05:06

on what you'd want but you know there's

play05:07

there's a whole bunch of companies

play05:08

working on trying to sort of like read

play05:09

out the words you're thinking without

play05:12

requiring a physical implant now a few

play05:15

years ago nobody had heard about open AI

play05:17

now uh everybody's heard about it you

play05:19

are

play05:20

you know one of the most most famous

play05:22

people on Earth

play05:23

um but the people so how many people are

play05:26

you at open AI now

play05:28

500 or so and what what do these 500

play05:31

people actually do

play05:34

um it's a mix so there's a large Crew

play05:36

That's just doing the research like

play05:38

trying to figure out how we get from the

play05:40

model we have today which is very far

play05:42

from an AGI to an AGI and all of the

play05:44

pieces that have to come together there

play05:45

so you know scaling the models up coming

play05:48

up with new methods uh

play05:51

that that whole process uh there's a

play05:53

team that makes the product and figures

play05:56

out also how to scale it there's a sort

play05:58

of traditional Silicon Valley tech

play06:00

company go to market team

play06:02

um there's a very complex uh legal and

play06:05

policy team that does all the work you'd

play06:08

imagine there

play06:09

um yeah

play06:11

and so your your priorities as a CEO now

play06:14

how do you spend your time

play06:18

um

play06:20

I kind of think about the buckets of of

play06:23

what we have to do in uh research

play06:26

product and compute on the technical

play06:28

side

play06:29

and then uh on the and I that's sort of

play06:33

the work that I think I I enjoy the most

play06:36

and where I can contribute the most

play06:39

um and then I spend some of my time on

play06:42

policy uh and

play06:45

sort of

play06:47

social impact issues for lack of a

play06:49

better word uh and then the other things

play06:51

I spent less time on but we have great

play06:53

people that run the other functions

play06:55

now your mission has been to ensure that

play06:59

artificial well the general intelligence

play07:00

benefits all of humanity what's the

play07:03

biggest challenge to this you think

play07:08

I

play07:09

a couple of thoughts there uh one I'm

play07:12

reasonably optimistic about solving the

play07:15

technical alignment problem we still

play07:17

have a lot of work to do but you know I

play07:19

feel like

play07:20

and feel better and better over time not

play07:22

worse and worse

play07:24

this the the social part of that problem

play07:26

you know how do we decide whose values

play07:28

we align to who gets to set the rules

play07:30

for this how much how much flexibility

play07:32

are we going to give to each individual

play07:34

user and each individual country

play07:37

we think the answer is quite a lot but

play07:39

that comes with some other challenges

play07:41

um in terms of how they're going to use

play07:42

these systems that's all going to be uh

play07:46

you know difficult to put it lightly for

play07:48

society to agree on

play07:49

and

play07:51

and then how we share the benefits of

play07:53

this

play07:54

what we use these systems for uh that's

play07:57

also going to be difficult to to agree

play08:00

on um

play08:01

kind of the buckets I think about here

play08:04

are

play08:05

we've got to decide what

play08:08

you know Global governance over these

play08:10

systems as they get super powerful is

play08:11

going to look like and everybody's got

play08:14

to play a role in that

play08:16

um we've got to decide how we're going

play08:17

to share the access to these systems and

play08:20

we've got to decide how we're going to

play08:21

share the benefits of them

play08:22

the

play08:24

you know there's a lot of people who are

play08:26

excited about things like Ubi and I'm

play08:28

one of them but I have no delusion that

play08:31

Ubi is a full solution or even the most

play08:33

important part of the solution like

play08:35

people don't just want handouts of money

play08:37

from an AGI they want increased agency

play08:40

they want to be able to be architects of

play08:42

the future they want to be able to do

play08:43

more than they could before and figuring

play08:46

out how to do that while addressing all

play08:48

of this sort of

play08:50

let's call them disruptive challenges uh

play08:53

I think that's going to be very

play08:54

important but very difficult

play08:58

how far out this true AGI

play09:03

I don't know how to put a number on it I

play09:05

also think we're getting close enough

play09:07

that the definition really matters and

play09:08

people mean very different things when

play09:10

they say it but I would say that I

play09:12

expect by the end of this decade for us

play09:15

to have

play09:16

extremely powerful systems

play09:19

that change the way we currently think

play09:21

about the world

play09:24

and and you say we've got different

play09:26

definitions what is what is your

play09:28

definition of

play09:29

general intelligence you know there's

play09:32

like kind of the open AI official

play09:33

definitions and then there's one that's

play09:35

very important to me personally when we

play09:37

have a system that can

play09:39

figure out new scientific knowledge

play09:42

that humans on their own could not

play09:45

I would call that an AGI

play09:50

and that you think we may have by the

play09:52

end of this decade well I kind of tried

play09:55

to like soften that a little bit just by

play09:56

saying we'll have systems that like

play09:58

really change the way the world Works um

play10:00

the the new science may take a little

play10:02

bit longer or maybe not

play10:04

Steve what's the end game here

play10:06

um

play10:07

are we just all of us going to work a

play10:09

lot less

play10:12

You know, I think we'll all work differently. Many of us will still work very hard, but differently. Every technological revolution, people say we're just going to do less work in the future, and we just find that we want a higher standard of living and new and different things, and also that we find new kinds of work we really enjoy. Neither you nor I have to work, and I bet we both work pretty hard. And I love it, I love my job, and I feel very blessed. So the definition of work, what we work on, why we work, the reasons for it, I expect that all to change. What we do I expect to change. But I love what I do, and I expect people in the future to love even more what they do, because there will be new, amazing things to work on that we can hardly imagine right now, and less boring stuff.

Yeah, I'm all for getting rid of the boring stuff.

I think everybody should love it. That's maybe one thing we could say about the future: everybody will do things that they love, and you won't have to do things you don't. And I think most people probably don't love their jobs right now.

I believe you just traveled the world and met with a lot of people and users. What was your main takeaway?

The level of excitement about the future, and about what this technology is going to do for people around the world, in super different cultures and super different contexts, was just very, very different than I expected. It was overwhelming, in the best way.

Any difference between geographies?

Yeah. In the developing world, people are just focused on what this can do economically right now. In the more developed world, there's much more of a conversation about what the downsides are going to be and how this is going to disrupt things. There's still excitement, but it's tempered more by fear. That was a striking difference.

Do you think it will lift up the poorer part of the world?

Yeah, I really do. I think it's going to make everybody richer, but I think it positively impacts poor people the most. And I think this is true for most kinds of technology, but it should be particularly true for the democratization of intelligence. You know, you or I can afford to pay a super highly compensated expert if we need help, but a lot of people can't. And to the degree that we can make, say, great medical advice available to everyone, you and I benefit from that too, but less than people who just can't afford it at all right now.

And what would potentially prevent this from happening?

Well, we could be wrong about the trajectory that technology is on. I think we are on a very smooth exponential curve that has much, much further to go. But, you know, we could be missing something, we could be drinking our own Kool-Aid, we could hit a brick wall soon. I don't think we're going to; I think we have some remarkable progress ahead of us in the next few years. But yeah, we could somehow be wrong for a reason we don't understand yet.

What is it doing to the global balance of power?

I don't know how that's going to shift. I'm not sure anyone does, but I certainly don't think that's something that I'm particularly well qualified to weigh in on.

But it just seems like it's so key now to the weapons race, the medical race, the self-driving-vehicle race, all these races.

But it's also available pretty broadly. You know, one of the things that we think is important is that we make GPT-4 extremely widely available, even if that means people are going to use it for things that we might not always feel are the best things to do with it. We have a goal of globally democratizing this technology, and as far as we know, GPT-4 is the most capable model in the world right now, and it is available to anyone who wants to pay what I think are very cheap API rates. Now, "anyone" is not quite there: we block a handful of countries that the US has embargoes with, or whatever. But it's pretty available to the world.

But in order to develop it further you need, well, you need the right chips, right? And they are not available.

But what matters is how you're going to get to, like, GPT-6 and -7, and even more than that, how you're going to get the next set of very different ideas that take you on a different trajectory. Everyone knows how to climb this one hill, and we're going to go figure out the next hill to climb, and there are not a lot of people in the world that can do that. But we're committed to making that as widely available as we can.

Do we know where China is here?

We don't. Maybe someone does, but I don't.

Do you think there's a chance that, like they did with weapons, just suddenly, bang, they have the supersonic rockets and we didn't even know they existed? Could that happen?

Yeah, totally, it could. I mean, we're going to work as hard as we can to make sure that we stay in the lead, but we're a little in the dark.

So Marc Andreessen, for instance, thinks we should stuff it into everything, as part of the geopolitical fight. What do you think?

Stuff it into everything?

I mean, just put it everywhere.

That's happening, and I think that's great. Without revealing something I shouldn't: the amount of GPT-4 usage, and the number of companies that are integrating it in different ways, is staggering. It's awesome.

Some examples, if you had to reveal something?

I mean, car makers are putting it into cars, and I was like, all right, that sounds like a gimmick. And then I got to try a demo of it, and I was like, wow. Being able to just talk to my car and control it in a sophisticated way, entirely by voice, actually totally changes my experience of how I use a car, in a way that I would not have believed was so powerful.

So, for instance, using it in a car, what do you say?

This is probably where I don't want to reveal a partner's plans, but you can imagine a lot of things that you might say. The basic stuff is easy: you know, I need to go here, I'd like to listen to this music, and also, can you make it colder?

Sounds good.

Do you depend on newer and even more powerful chips than what we have now? I mean, how much more complex do the chips need to be than the H100, or the latest things from Nvidia?

Yeah, of course. The ways that we can keep making these models better are: we can come up with better algorithms, or just more efficient implementations, or both; we can have better chips; and we can have more of them. And we plan to do all three things, and they multiply together.

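Altman's point that the three levers compound multiplicatively rather than additively can be sketched with some purely invented numbers (these are illustrative assumptions, not OpenAI figures):

```python
# Hypothetical illustration: if algorithmic efficiency, per-chip speed,
# and chip count each improve independently, the effective gain is the
# product of the individual gains, not their sum.

algo_gain = 2.0   # invented: 2x from better algorithms / implementations
chip_gain = 1.5   # invented: 1.5x from a faster chip generation
count_gain = 3.0  # invented: 3x from deploying more chips

additive_view = algo_gain + chip_gain + count_gain        # 6.5 "units"
multiplicative_view = algo_gain * chip_gain * count_gain  # effective compute

print(multiplicative_view)  # 9.0
```

Even with these modest made-up factors, the multiplicative view (9x) outpaces the additive one, which is why pursuing all three levers at once is attractive.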

And do you think the chip makers will end up with the profits there?

They will end up with profits; I wouldn't say the profits. I think there are many people who are going to share this massive economic boon.

How much does it cost to train these models? I mean, how much have you spent on pre-training models?

We don't really talk about exact numbers, but, like, quite a lot.

Yeah.

And what's the challenge of spending so much money pre-training when it lasts for a relatively short period of time? In a way, you have to depreciate the whole investment, because you need to invest more in the next generation. How do you think about this?

That's true. I don't think there are going to be as many massive pre-trained models in the world as people think. I think there will be a handful, and then a lot of people are going to fine-tune on top of that, or whatever.

So how do you read the competitive...

The part of it that I think is important is, you know, when we did GPT-4, we produced this artifact, people use it, and it generates all this economic value. You're right that it does depreciate fast, but in the process we learned so much; we pushed the frontier of research so far forward, and we learned so much that will be critical to us being able to go do GPT-5 someday, or whatever, that you're not just depreciating the capex one time for the model. You have generated a huge amount of new IP to help you keep making better models.


So, the way you read the competitive landscape now, what does it look like?

I mean, there are going to be many people making great models. We'll be one of them; we'll contribute our AGI to the world, to society, among others, and I think that's fine. We'll all run different experiments, we'll have different features, different capabilities, we'll have different opinions about what the rules of a model should be. And through the magic of competition, and users deciding what they want, we'll get to a very good place.

How far ahead of the competition do you think you are?

I don't know. I don't think about that much, to be honest. Our customers are very happy; they are desperate for more features and more capacity, and for us to be able to deliver our service in all of these little better ways, and we're very focused on that. I'm sure Google will have something good here at some point, but I think they're racing to catch up with where we are, and we're thinking very far ahead of that.

So, normally in the software business you have something which is very cheap, where you ship a lot of it, or something which is very expensive, and you don't ship so much. Here you could potentially ship something... and I can see you smiling here.

You can, potentially, exactly.

So tell us, how is this going to work?

You know, I'll tell you one of the most fun things about this job: we are past the point as a company, and I am past the point as a CEO running this company, where there's a roadmap to follow. We're just doing a bunch of things that are outside of the standard Silicon Valley received wisdom, and so we get to just say, well, we're going to figure it out, and we're going to try things, and if we got it wrong, who cares? It's not like we screwed up something that was already figured out.

I mean, back to our very founding: most big tech companies start as a product company, and eventually they bolt on a research lab, and that doesn't work very well. We started as a research lab and then bolted on a product company, which didn't work very well either, and now we're making that better and better.

But bolted on a product company, you mean Microsoft?

No, no. I mean, like, having to figure out how to ship the API and ChatGPT.

We really did just start as a research lab, and then one day we were like, we're going to make a product, and then we're going to make another product. And now that product is, like, the fastest-growing product in history, or whatever, and we weren't set up for that.

Is the usage of ChatGPT decelerating?

No. I think it maybe took a little bit of a flat line during the summer, which happens for lots of products, but it is going up.

Tell us about the relationship with Microsoft. How does that work?

I mean, at a high level: they build us computers, we train models, and then we both use them. It's a pretty clear and great partnership.

Are your goals aligned?

Yeah, they really are. There are, of course, areas where we are not perfectly aligned, like any partnership in life or business or whatever; I won't pretend it's perfect. But it is very good, and we are aligned at the highest levels, which is really important. And the misalignments that come up at the lower levels once in a while... you know, no contract in the world is what makes a partnership good. What makes a partnership good is that when those things happen, Satya and Kevin and I talk, and we figure it out, and there's a good spirit of compromise over a long time.

Now, they've been one of the initiators, together with you, in terms of self-regulating this space. Can this type of thing be self-regulated?

Not entirely. I think it needs to start that way, and I think that's also kind of how you figure out a better answer. But governments are going to have to do their own thing here, and we can provide input to that, but we're not the elected decision-makers of society, and we're very aware of that.

And what can governments do?

Anything they want. And I think people forget this: governments have quite a lot of power, they just have to decide to use it.

Yeah, but let's say now Europe decides that they're going to regulate you really harshly. Are you just going to say goodbye, Europe?

No... possibly. But I don't think that's what's going to happen. I think we'll have a very productive conversation. I think Europe will regulate AI, but reasonably, not very harshly.

And what is, sorry, what is a reasonable regulation? What is that level?

I think there are many ways that it could go that would all be reasonable. But to give one specific example, and I'm surprised this is controversial at all: a regulatory thing that's coming up a lot in Europe and elsewhere is that if you're using an AI, you've got to disclose it. So if you're talking to a bot and not a person, you need to know that. That seems like a super reasonable and important thing to do, to me, for a bunch of reasons, given what's starting to happen. To my surprise, there are some people who really hate that idea, but I'd say that's a very, very reasonable regulation.

I agree, I agree.

Do you think we'll get global regulation? Is there any shape to it?

I think that can happen. I think we're going to get it only for the most powerful systems. You know, individual countries or blocs of countries are not going to give up their right to self-determine on things like what a model can say and not say, and how we think about the free-speech rules and whatever. But for technology that is capable of causing grievous harm to the entire world, like we have done before with nuclear weapons and a small number of other examples, yeah, I think we are going to come together and get good global regulation.

But given how embedded it now is in everything, as we spoke about: weapons, your car. You're sitting in your car and it's super cool, and it's cold and hot, and music, and this and that. And you're a Chinese car company, and you want to compete with the Americans. Why would you want to have regulation on this?

Well, GPT-4 I don't think needs global regulation, nor should it have it. I'm talking about what happens when we get to GPT-10, and it is, you know, say, smarter than all of humans put together.

And that's why you think we get it?

That's when I think we'll get it.

When you have the cost of intelligence coming down so dramatically, like it is now, what is it going to do to productivity in the world?

I mean, it's supposed to go up a lot, right? That's what theory tells us, and that's what I think.

So, I've told everybody in our company that we should improve our productivity by about 10 percent over the next 12 months, all of us. And do you know how I got the number?

Did you ask GPT?

No, I just took it straight out of the air.

What do you think about that number? Is it low, high, under-ambitious? What should productivity increase by?

How do you measure the stuff we do?

That's not a very good measurement, but just the kind of stuff that I produce.

How much of your company writes code?

Fifteen... well, people in technology, probably 15 to 20 percent of us. More, actually, but...

Okay, let's say that's 20 percent writing code. I think an overall goal of, you know, a 20 percent productivity increase in a 12-month period is appropriately ambitious, given the tools that we will launch over the next 12 months.

Okay, sounds like I should up the game here a bit.

I think so, yeah.

I'll just tell everybody you told me to, so that's fine.

It's better to set a goal that is slightly too ambitious than significantly under-ambitious, in my opinion.

Yeah.

Now, is there an inherent limitation to what AI can achieve? I mean, is there a point of no further progress?

I couldn't come up with any reasonable explanation of why that should be the case.

You say that most people overestimate risk and underestimate reward. What do you mean by that?

You know, there are a lot of people that don't go start the company, or take the job they want to take, or try a product idea, because they think it's too risky. And then if you really ask them, all right, can we unpack that, can you explain what the risk is and what's going to go wrong? It's like, well, the company might fail. Okay, and then what? Well, then I have to go back to my old job. All right, that seems reasonable. And they're like, well, but I'll be a little embarrassed. And I'm like, oh, is that it? I think people view that as a super risky thing, and they view staying in a job where they're not really progressing or learning more or doing new things for 20 years as not risky at all. And to me, that seems catastrophically risky: to miss out on 20 years of your very limited life and energy to try to do the thing you actually want to do. That seems really risky. But it's not thought of that way.

Talking about staying in your job: the leaders and the CEOs, how is AI going to change the way leaders need to act and behave?

Well, hopefully it's going to do my job. You know, hopefully the first thing we do with AGI is let it run OpenAI, and I can go sit on the beach. That'd be great. I wouldn't want to do that for long, but right now it sounds really nice.

How do you develop the people in your company? How do you develop your leaders?

I think leaders tend to fail at the same set of things most of the time. They don't spend enough of their time hiring talent and developing their own teams; they don't spend enough of their time articulating and communicating the vision of their team; they don't spend enough of their time thinking strategically, because they get bogged down in the details. And so when I put a new person in a very senior role, which I always try to do with promotions... I mean, I'm willing to hire externally, but I'd always rather promote internally.

um

play32:51

I have them over for dinner or go for a

play32:53

walk or sit down or something and say

play32:54

like here are the ways you're going to

play32:56

screw up

play32:57

I'm gonna tell you all of them right now

play32:59

you're gonna totally ignore me on this

play33:01

and not believe me or at least not do

play33:02

them because you're going to think you

play33:03

know better or you know not make these

play33:05

mistakes but

play33:06

I'm going to put this in writing and

play33:08

hand it to you and we're going to talk

play33:10

about it in three months and in six

play33:11

months and you know

play33:14

eventually I think you'll come around

play33:17

and they always ignore me and always

play33:18

come around

play33:19

And I think just letting people recognize that for themselves, but telling them up front so that it's at least in their mind, is very important. It's the most common way leaders screw up.

Failing to recruit and promote, then failing to build a good delegation process, and then, as a consequence of those, not having enough time to set strategy, because they're too bogged down in the day-to-day and can't get out of that downward spiral.

What does your delegation process look like?

Two things. Number one, high-quality people. Number two, setting the training wheels at the right height, and raising them over time as people learn more and I build up more trust.

Is that the way to manage geniuses?

Researchers, that's a different thing. I was talking about the executives that run the thing.

Okay, what about researchers? What about the geniuses, the prima donnas?

Explain.

Well: pick really great people; explain the general direction of travel, the resources that we have available, and, at a high level, where we need to get to to reach the next level, so, you know, we have to achieve this to go get the next ten-times-bigger computer, or whatever; provide only the mildest input, like, it would be really great if we could pursue this research direction, and this would be really helpful; and then step back.

So we set a very high-level vision for the company and what we want to achieve, and beyond that, researchers get just a huge amount of freedom.

Do you think companies generally are too detailed in the remit they give their teams?

Yes. I mean, at least for our kind of thing, I think. We talked earlier about having to rediscover a bunch of things.

I'd say this, realizing it's going to come across as arrogant, and I don't mean it that way, but I think it's an important point: there used to be great research that happened in companies in Silicon Valley, Xerox PARC being the obvious example. There has not been for a long time, and we really had to rediscover that, and we made many screw-ups along the way, to learn how to run a research effort well: how you balance letting people go off and do whatever, versus trying to get the company to point in the same direction; and then, over time, how to get to a culture where people will try lots of things, but recognize where the promising directions are, and on their own want to come together and say, let's put all of our firepower behind this one idea, because it seems like it's really working.

You know, I'd love to tell you we always knew language models were going to work. That was absolutely not the case; we had a lot of other ideas about what might work. But when we realized the language models were going to work, we were able to get the entire, or almost the entire, research brain trust behind it.

I'm slightly surprised you say that there was no innovation culture in Silicon Valley, because that's a bit contrary to what I thought. So there is...

Yeah, there's a product-innovation culture, for sure, a good one. But, and again, I hate to say this because it sounds so arrogant, before OpenAI, what was the last really great scientific breakthrough that came out of a Silicon Valley company?

And why did that happen? What happened there?

Well, we got a little lucky...

No, I don't mean... sorry: why did this culture disappear in Silicon Valley, do you think?

I have spent so much time reflecting on that question. I don't fully understand it. I think it got so easy to make a super valuable company, and people got so impatient on timelines and return horizons, that a lot of the capital went to things that could fairly reliably multiply money in a short period of time, by just saying: we're going to take the magic of the technology we have now, the internet, mobile phones, whatever, and apply it to every industry. That sucked up a lot of talent, very understandably.

Now, your co-founders, what should we say, are pretty into big, hairy goals, right?

Yeah. I mean, we're trying to make AGI. I think that's the biggest, hairiest goal in the world.
Not so many companies have that kind of co-founders, people with that kind of track record, that type of talent magnet, funding capabilities and so on. How important was that?
You mean Elon by this, right?

Yeah, and some of the other people you worked with in the beginning.
Well, there were six co-founders: Elon and me, Greg and Ilya and John and Wojciech. And Elon was definitely a talent magnet, and an attention magnet for sure, and also has some real superpowers that were super helpful to us in those early days, aside from all of those things, and contributed in ways that we're very grateful for. But the rest of us were pretty unknown. I mean, maybe I was somewhat known in technology circles because I was running Y Combinator, but not in a major way. And so we just had to grind it out. But that was a good and valuable process.
What is your superpower?
I think I'm good at thinking very long term, and at not being constrained by common wisdom. Evaluating talent, too; that was a really helpful thing to learn from Y Combinator.
You said in 2016 that long-term thinking is a competitive advantage because almost no one does it.
Yeah. I mean, when we started OpenAI and said we're going to build AGI, everybody was like: that's insane. A, it's 50 years away, and B, it's the wrong thing to even be thinking about; you should be thinking about how to improve this one thing this year. You know, also, it's unethical to even say you're working on it, because it's such science fiction, and you're going to lead to another AI winter because it's too much hype. And we just said: it's going to take us a while, but we're going to go figure out how to do it.
You said you were good at assessing talent. How do you do it?
I don't know. I can't... I have a lot of practice, so I've got... but I don't have words for it. I can't tell you, here are the five questions I ask, or here's the one thing I always look for. But: assessing if someone is smart, if they have a track record of getting things done, and if they have novel ideas that they're passionate about. I think you can learn how to do that through thousands of conversations, even if it's hard to explain.
Why is Europe so behind, generally, when it comes to innovation and innovative culture?
I'd ask you that. I don't know. Why is it? What is it, first of all?

Well, I guess it is. Look at where the big tech companies are, where the big innovations come from.

It's certainly behind. It's certainly very behind in hyperscale software companies, there's no question there. But... big fear of failure; it's a cultural thing. There are a lot of things going into that cocktail, I think. The fear of failure thing, and the kind of cultural environment or backdrop there, is huge, no doubt. You know, we funded a lot of European people at YC, and a thing they would always say is they cannot get used to the fact that in Silicon Valley failure is tolerated.
You've failed at stuff, big time.

And I'm sure I'll fail at stuff in the future.

What was the biggest failure so far?
Well, I mean, monetarily, I've made a lot of big investments that have gone to total zero, like a crater in the ground. But in terms of time and psychological impact on me: I did a startup from when I was 19 to 26, worked unbelievably hard, it consumed my life, and failed at that. And that was quite painful and quite demoralizing. You learn to get back up after stuff like that, but it's hard.
How do you get back up?
I mean, one of the key insights for me was realizing that although I thought this was terribly embarrassing and shameful, no one but me spent much time thinking about it.
Who do you ask for advice, personally?
My strategy is not to have just one person that I go to with everything. A lot of people do that, you know; they have one mentor that they go to for every big decision. But my strategy is to talk to a ton of different people when I'm facing a big decision, and try to synthesize the input from all of that. So if I'm facing a real major strategic challenge for OpenAI, you know, one of these bet-the-company things, I would bet that, counting people internal and external to the company, I talk to 50 people about it. And probably out of 30 of those conversations I would hear something interesting or learn something that updates my thinking. That's my strategy.
So now, outside AI, what are you the most excited about?
Fusion. I think we're going to get fusion to work very soon. And I think, in my model, if you boil everything down, to get to abundance in the world the two biggest, most important things are bringing the cost of intelligence way down, and bringing the cost of energy way down and the amount way up. And I think AI is the best way to do the former, and fusion is the best way to do the latter. You know, in a world where we look at energy that's less than a penny per kilowatt hour and, more importantly, we can have as much as we want and it's totally clean...
That's a big deal.

Do you think it's going to solve the climate problem?
Yes. We'll have to use it to do other things too; like, we'll have to use some of it to capture carbon, because we've already done so much damage. But yes, I do.
What about crypto?
I am excited for the vision of crypto, and it has so far failed to deliver on that promise.
But you have plans?
It's not something I'm spending that much time on. Like, OpenAI is taking over my whole life, so I can have a lot of plans about OpenAI. And there are other projects that I've invested in or helped start that I feel bad about, because I don't have much time to offer them anymore, but they're all run by super capable people and I assume they'll figure it out.
What do you read?
The thing that has unfortunately gone the most by the wayside for me recently has been free time, and thus reading, so I don't get to read much these days. I used to be a voracious reader, and there was one year where I read, you know, not fully, but more than skimmed, 50 textbooks. And that was an unbelievable experience. But this last year I have not read many books.
What's the one book young people should read?
That's a great question. Picking one is really hard.

Man, that's such a good question. I don't think it's the same for every young person, and coming up with a generic singular recommendation here is super hard. I don't think I can give a faithful answer on this one.
It's good.

Now we are fast-forwarding, you know...

It's not... can I, actually? I do have... this is not the one for every young person, but I wish a lot more people would read The Beginning of Infinity early on in their career, or their lives.

The Beginning of Infinity? The Beginning of Infinity, by...

I think that doesn't matter, we'll find it.

I think it's the most inspiring: you can do anything, you can solve any problem, and it's important to go off and do that. I felt it was a very expansive book in the way I thought about the world.
Well, Sam, I think that's a very beautiful place to go in for a landing. Now, the last one: fast forward a couple of decades, and people sit down and reflect on Sam Altman's impact on the tech world and society. What do you hope they'll say? What do you hope your legacy will be?
You know, I'll think about that when I'm at the end of my career. Right now, my days are spent trying to figure out why this executive is mad at that one, and why this product is delayed, and why our network on our big new training computer is not working, and who screwed that up and how to fix it. It's very caught up in annoying tactical problems. There is no room to think about legacy; we're just trying to go off and build this thing.
Fantastic. Well, good luck with that. It's been an absolutely fantastic conversation. All the best of luck, and go get them.