Sam Altman CEO of OpenAI | Podcast | In Good Company | Norges Bank Investment Management
Summary
TL;DR: In this insightful interview, Sam Altman, the CEO of OpenAI, shares his ambitious vision for the future of artificial general intelligence (AGI) and its potential to benefit humanity. He discusses the rapid progress of language models like ChatGPT, the challenges of developing advanced AI systems, and the importance of democratizing this technology globally. Altman emphasizes the need for responsible governance and self-regulation as AI capabilities increase. He also touches on the economic implications of AI, the role of computation and funding, and the competitive landscape with other tech giants. Throughout the conversation, Altman's long-term thinking and unwavering commitment to pushing the boundaries of AI shine through.
Takeaways
- 😎 Sam Altman, CEO of OpenAI, is leading the charge in developing powerful AI systems like ChatGPT, with the goal of creating artificial general intelligence (AGI) that benefits all of humanity.
- 🔭 Altman believes that empirically testing and deploying AI technologies is crucial to understanding their risks, benefits, and potential evolution, rather than relying solely on philosophical speculation.
- 🌍 He envisions a future where extremely capable AI systems will fundamentally change how we think about and interact with the world, potentially by the end of this decade.
- 🤖 A key challenge is determining whose values the AGI should be aligned with and how to distribute access and benefits fairly across societies and countries.
- 🕰️ Altman emphasizes the importance of long-term thinking and not being constrained by conventional wisdom, which he sees as a competitive advantage for OpenAI.
- 💼 He stresses the need for leaders to prioritize talent recruitment, team development, vision communication, and strategic thinking, rather than getting bogged down in day-to-day details.
- 🌐 Altman believes that AI has the potential to significantly improve productivity and lift people out of poverty globally by democratizing access to intelligence and expertise.
- 🔬 He is also excited about the prospects of fusion energy, which he sees as complementary to AI in achieving abundance and addressing climate change.
- 🤝 OpenAI has a productive partnership with Microsoft, with aligned goals at the highest level, despite some occasional misalignments at lower levels.
- 🌉 Altman recognizes the need for reasonable global regulation, particularly for the most powerful AI systems that could cause grievous harm, while advocating for making models like GPT-4 widely available.
Q & A
What is ChatGPT, and how has it impacted the world?
-ChatGPT is an advanced language model created by OpenAI that shocked the world with its capabilities when it was released in November 2022. It has been described as the fastest-growing product in history and has led to widespread adoption and integration across various industries and applications.
What is OpenAI's vision for the future where humans and AI coexist?
-OpenAI believes that answering this question empirically is crucial, as many past predictions have been proven wrong. They aim to deploy AI into the world, observe how people use it, understand the risks and benefits, and then co-evolve the technology with society based on these observations.
How does Sam Altman define general intelligence (AGI)?
-Sam Altman defines AGI as a system that can figure out new scientific knowledge that humans on their own could not. He believes that by the end of this decade, we may have extremely powerful systems that change the way we currently think about the world.
What are some of the challenges in ensuring that AI benefits all of humanity?
-Some of the key challenges include deciding whose values the AI systems should align with, determining the level of flexibility and control given to individual users and countries, and finding ways to share the benefits of AI equitably, such as through increased agency and opportunities for people rather than just handouts.
How does OpenAI approach the development of AI models like GPT-4?
-OpenAI aims to make their most capable models, like GPT-4, widely available globally, even if people use them for purposes that OpenAI might not always agree with. They believe in democratizing this technology as much as possible.
What role does Microsoft play in OpenAI's efforts?
-Microsoft is a key partner of OpenAI. They build the computers that OpenAI uses to train their models, and both companies use these models. Their goals are generally aligned, although there may be some misalignments at lower levels that need to be addressed through communication and compromise.
How does Sam Altman approach talent assessment and leadership development at OpenAI?
-Sam Altman believes he has developed a strong ability to assess talent through extensive practice. For leadership development, he tries to promote from within and warns new leaders upfront about the common pitfalls they are likely to face, encouraging them to learn from their mistakes over time.
What are Sam Altman's thoughts on the potential impact of AI on productivity?
-Sam Altman believes that AI has the potential to significantly increase productivity, and he has set an ambitious goal for OpenAI employees to improve their productivity by around 20% over a 12-month period, leveraging the tools and models they are developing.
What is Sam Altman's perspective on the role of government regulation in the AI space?
-While Sam Altman acknowledges the need for government regulation, especially for the most powerful AI systems capable of causing global harm, he believes that individual countries and regions should maintain the right to self-determine rules and guidelines for less powerful AI applications. He sees the potential for reasonable regulation, such as requiring disclosure when interacting with an AI system.
Apart from AI, what other technology is Sam Altman most excited about?
-Sam Altman is highly excited about the potential of fusion energy technology, as he believes that bringing down the cost and increasing the abundance of clean energy, along with reducing the cost of intelligence through AI, are the two most important factors in achieving global abundance.
Outlines
🤖 The Rise of ChatGPT and OpenAI's Vision
In this introductory paragraph, Sam Altman, the CEO of OpenAI, discusses the groundbreaking release of ChatGPT and OpenAI's mission to create artificial general intelligence (AGI) that benefits humanity. He explains his excitement about working at the forefront of this technological revolution and shares his vision of a future where humans and AI coexist. Altman acknowledges the difficulty in predicting the exact trajectory of this technology but stresses the importance of empirically understanding its impacts, risks, and benefits as it evolves.
🔮 The Future of AGI and Its Global Impact
Altman reflects on the potential of achieving true AGI by the end of the decade, which he defines as a system capable of discovering new scientific knowledge beyond human capabilities. He discusses the challenges of determining whose values should guide the alignment of AGI and how to equitably share its benefits across the world. Altman emphasizes the need for global governance over powerful AI systems while allowing flexibility for individual users and countries. He also touches on the geopolitical implications of AI and the importance of democratizing access to these technologies.
🌍 AI's Role in Lifting Up the Developing World
In this paragraph, Altman expresses his belief that AI technologies, particularly the democratization of intelligence, will have a disproportionately positive impact on the developing world by providing access to expert knowledge and resources that were previously unaffordable. He acknowledges potential roadblocks, such as the trajectory of technology development or geopolitical factors, but remains optimistic about AI's potential to alleviate global poverty and inequality.
💻 The Rapid Integration of AI into Various Industries
Altman discusses the astonishing pace at which companies are integrating ChatGPT and other AI models into various products and services, such as cars, customer service, and legal document review. He acknowledges that while the current models have significant limitations, people are finding ingenious ways to leverage them, leading to substantial productivity gains. Altman also touches on the challenges of continuously training and depreciating these large models while generating valuable intellectual property for future iterations.
🤝 Partnerships, Regulation, and Democratizing AI Access
In this paragraph, Altman reflects on OpenAI's partnership with Microsoft, emphasizing their aligned goals and the importance of compromise in resolving any disagreements. He discusses the need for reasonable regulation, such as mandating disclosure when interacting with AI systems, and the potential for global governance over exceptionally powerful AI. Altman also reiterates OpenAI's commitment to democratizing access to their models, even if users employ them for unintended purposes.
🚀 Productivity Gains and the Limits of AI Progress
Altman shares his ambitious goal of achieving a roughly 20% productivity increase across OpenAI within the next 12 months, attributing this target to the potential of AI tools. He expresses his belief in the exponential progress of AI and sees no inherent limitations to its continued advancement. Altman also touches on the topic of AI's impact on global power dynamics and the potential for unexpected breakthroughs from other countries or entities.
👩‍💻 Managing Researchers and Fostering Innovation
In this paragraph, Altman discusses his approach to managing researchers at OpenAI, emphasizing the importance of providing a high-level vision and ample freedom for exploration. He acknowledges the challenges OpenAI faced in rediscovering an effective research culture within a company setting and the need to strike a balance between allowing diverse ideas and aligning efforts towards promising directions. Altman also reflects on the lack of groundbreaking scientific breakthroughs from Silicon Valley companies before OpenAI's emergence.
🧠 Assessing Talent and Long-Term Thinking
Altman shares his thoughts on assessing talent, citing his experience at Y Combinator and his ability to recognize intelligence, track records, and novel ideas in candidates through numerous conversations. He discusses the importance of long-term thinking as a competitive advantage, which was a key factor in OpenAI's decision to pursue AGI despite skepticism from others. Altman also touches on the cultural differences between Silicon Valley and Europe regarding innovation and tolerance for failure.
⚛️ Excitement for Fusion Energy and Reading Habits
In this paragraph, Altman expresses his excitement about the potential of fusion energy to provide abundant, clean power and solve climate change challenges. He considers fusion and AI as the two most important technologies for achieving true abundance in the world. Altman also reflects on his diminished reading habits due to the demands of his work but recommends the book 'The Beginning of Infinity' as an inspiring read for young people.
🏆 Legacy and Continuing the Pursuit of AGI
In the final paragraph, Altman acknowledges that he is too focused on the present challenges and tactical problems at OpenAI to contemplate his legacy. He expresses a determination to continue building towards the goal of AGI, navigating the daily obstacles and issues that arise. Altman concludes by expressing gratitude for the opportunity to share his thoughts and receive well-wishes for OpenAI's ongoing pursuit of this ambitious endeavor.
Keywords
💡Artificial General Intelligence (AGI)
💡Language Models
💡Democratization of AI
💡Alignment
💡Regulation
💡Productivity
💡Competition
💡Fusion Energy
💡Long-term Thinking
💡Research Culture
Highlights
OpenAI shocked the world last November with ChatGPT, and OpenAI is not only creating models, it's creating the future.
The best way to predict the future is to invent it, and we're trying to see where the technology takes us, deploy it into the world to actually understand how people are using it, where the risks are, where the benefits are, what people want, how they'd like it to evolve, and then sort of co-evolve the technology with society.
When we have a system that can figure out new scientific knowledge that humans on their own could not, I would call that an AGI.
By the end of this decade, we expect to have extremely powerful systems that change the way we currently think about the world.
The most striking moments have not come from new technology or new models, but from the breadth of use cases the world is finding for them.
An uncommon use case is a guy who runs a laundromat business and uses ChatGPT for marketing copy, customer service, legal documents, and more - a virtual employee in every category.
Altman believes AI will particularly positively impact poor people the most, by democratizing intelligence and making expert advice available to everyone.
Altman is committed to making GPT models as widely available as possible, even if people use them for things OpenAI might not always feel are the best.
Governments will have to regulate AI, but OpenAI doesn't think individual countries will give up self-determination for what models can say. Global regulation will likely only happen for technology capable of grievous worldwide harm.
Altman aims for a 20% productivity increase at OpenAI over the next 12 months, driven by their AI tools.
Altman believes the key to developing leaders is having them spend enough time hiring talent, developing teams, communicating vision, and thinking strategically - things leaders often fail at initially.
For researchers, OpenAI provides a high-level vision and resources, but gives huge freedom to pursue their own directions.
Altman is most excited about fusion energy outside of AI, believing it will lead to abundance and solving climate change.
Altman recommends the book 'The Beginning of Infinity' as inspiring people to solve any problem and go off and do that.
Altman is focused on the present tactical challenges of building AGI rather than thinking about his future legacy.
Transcripts
[Music]
OpenAI shocked the world last November with ChatGPT, and OpenAI is not only creating models, it's creating the future. So Sam, it's an
honor to have you on the podcast thanks
a lot for having me it's great to be
here
now how does it feel to spearhead this
revolution
ah
it's definitely a little surreal it is
uh
it's like a very exciting moment in you
know the history of technology and to
get to work with the people who are who
are creating this
um it's like a is a great honor and uh I
can't imagine anything more exciting to
be doing
no I can't imagine
it's definitely a lot I can see that now
big picture what's the vision of of the
world where um humans and AI coexist
well you know I one one thing that we
believe is that you have to answer that
question empirically there's been a lot
of philosophizing about it for a long
time very smart people have had very
strong opinions I think they've all been
wrong and it's just a question of how
wrong and
the course that a technology takes is is
difficult to predict in advance. I love that Alan Kay quote that the best way to predict the future is to invent it
and so what we're trying to do is see
where the technology takes us deploy it
into the world to actually understand
how people are using it where the risks
are where the benefits are what people
want how how they'd like it to evolve
and then sort of co-evolve the
technology with society and you know I
think if you asked people five or ten
years ago what the deployment of
powerful AI into the world is going to
look like they wouldn't have guessed
that it looks like this
um people had very different ideas at
the time but this was what turned out to
be where the technology leads and and
where the science leads
and so we try to follow that and how far
into the future can you see now
uh the next few years seem pretty clear
to us
you know we kind of know where these
models are going to go we have a roadmap
we're very excited about uh we can
imagine both the Science and Technology
but also the product a few years out
and beyond that you know we're gonna
learn a lot we'll be a lot smarter in
two years than we are today yeah and and
what kind of uh you know holy
moments have you had lately
um well remember that we've been
you know we we've been thinking about
this and playing around with this
technology for a long time so the world
has had to catch up very quickly but we
we have less holy moments because
you know we've been expecting this and
we've been building it for a while and
it you know we don't it doesn't feel as
discontinuous to us
And what kind of big things have you seen since ChatGPT launched?
well we
the biggest ones have not been about new
technology or new models but about the
breadth of use cases the world is
finding to do this so the holy
moments have not been like oh now the
model can do this now we now we figured
out that because again you know some
would expected that but seeing how much
people are
coming to rely on these models to do
their work in their current form which
is very imperfect and broken you know
we're the first to say these models are
still not very good they hallucinate a
lot they're not very smart they have all
these problems and yet people are using
their human Ingenuity to figure out how
to work around that and still leverage
these tools and so watching people that
are remaking their workflows for a world
with llms has been
big and some examples of new things
you've seen new user cases applications
um you know a common one is around how
developers are changing their workflow
to uh you know spend like half their
time in ChatGPT. You hear people say they feel like two or three times, or sometimes more, productive than before.
um an uncommon one is I met a guy who
runs a laundromat business as like a
one-person thing, and uses ChatGPT for coming up with marketing copy, dealing with customer service, helping review legal documents, a whole long list of things, and he's like, I've got a virtual employee in every category. That was pretty cool.
and what about things like uh brain
implants and getting it to help with
speech and so on which we just saw
recently
um
I'm very excited about neural interfaces
but I am not currently super excited
about brain implants I don't feel ready
to want one of those I would love a
device uh that could like read my mind
and
but I would like it to do that without
having to put a hole in my skull and I
think that's possible
how
oh there's many Technologies depending
on what you'd want but you know there's
there's a whole bunch of companies
working on trying to sort of like read
out the words you're thinking without
requiring a physical implant now a few
years ago nobody had heard about OpenAI,
now uh everybody's heard about it you
are
you know one of the most most famous
people on Earth
um but the people so how many people are
you at OpenAI now?
500 or so and what what do these 500
people actually do
um it's a mix so there's a large Crew
That's just doing the research like
trying to figure out how we get from the
model we have today which is very far
from an AGI to an AGI and all of the
pieces that have to come together there
so you know scaling the models up coming
up with new methods uh
that that whole process uh there's a
team that makes the product and figures
out also how to scale it there's a sort
of traditional Silicon Valley tech
company go to market team
um there's a very complex uh legal and
policy team that does all the work you'd
imagine there
um yeah
and so your your priorities as a CEO now
how do you spend your time
um
I kind of think about the buckets of of
what we have to do in uh research
product and compute on the technical
side
and then uh on the and I that's sort of
the work that I think I I enjoy the most
and where I can contribute the most
um and then I spend some of my time on
policy uh and
sort of
social impact issues for lack of a
better word uh and then the other things
I spent less time on but we have great
people that run the other functions
now your mission has been to ensure that
artificial well the general intelligence
benefits all of humanity what's the
biggest challenge to this you think
I
a couple of thoughts there uh one I'm
reasonably optimistic about solving the
technical alignment problem we still
have a lot of work to do but you know I
feel like
and feel better and better over time not
worse and worse
this the the social part of that problem
you know how do we decide whose values
we align to who gets to set the rules
for this how much how much flexibility
are we going to give to each individual
user and each individual country
we think the answer is quite a lot but
that comes with some other challenges
um in terms of how they're going to use
these systems that's all going to be uh
you know difficult to put it lightly for
society to agree on
and
and then how we share the benefits of
this
what we use these systems for uh that's
also going to be difficult to to agree
on um
kind of the buckets I think about here
are
we've got to decide what
you know Global governance over these
systems as they get super powerful is
going to look like and everybody's got
to play a role in that
um we've got to decide how we're going
to share the access to these systems and
we've got to decide how we're going to
share the benefits of them
the
you know there's a lot of people who are
excited about things like Ubi and I'm
one of them but I have no delusion that
Ubi is a full solution or even the most
important part of the solution like
people don't just want handouts of money
from an AGI they want increased agency
they want to be able to be architects of
the future they want to be able to do
more than they could before and figuring
out how to do that while addressing all
of this sort of
let's call them disruptive challenges uh
I think that's going to be very
important but very difficult
How far out is true AGI?
I don't know how to put a number on it I
also think we're getting close enough
that the definition really matters and
people mean very different things when
they say it but I would say that I
expect by the end of this decade for us
to have
extremely powerful systems
that change the way we currently think
about the world
and and you say we've got different
definitions what is what is your
definition of
general intelligence you know there's
like kind of the OpenAI official
definitions and then there's one that's
very important to me personally when we
have a system that can
figure out new scientific knowledge
that humans on their own could not
I would call that an AGI
and that you think we may have by the
end of this decade well I kind of tried
to like soften that a little bit just by
saying we'll have systems that like
really change the way the world Works um
the the new science may take a little
bit longer or maybe not
So what's the end game here?
um
are we just all of us going to work a
lot less
um
you know I want to be people
I think we'll all work differently I
think we'll still many of us will still
work very hard but differently every
technological Revolution
um people say they're we're just gonna
do less work in the future and
we just find
that we want a higher standard of living
and new and different things and also
that we find new kinds of work we really
enjoy you know
neither you nor I have to work and I bet
we both work pretty hard
and I love it I love my job I love my
job
and I feel very blessed so
the definition of work what we work on
why we work the reasons for it I expect
that all to change what we do I expect
to change
but
I love what I do and I expect people in
the future to love even more what they
do because there will be new amazing
things to work on that we can hardly
imagine right now and less boring stuff
yeah
I'm all for getting rid of the boring
stuff like I think like everybody should
love it that's maybe one thing we could
say in the future is everybody will do
things that they love you won't have to
do things you don't and I think most
people probably don't love their jobs
right now
um I believe you just traveled the world
and met with a lot of people and users
uh what's what was your what was your
main takeaway
uh
uh the level of excitement about the
future and what this technology is going
to do for people
around the world in Super different
cultures and super different contexts
was just very very different than I
expected
like it was
it was like overwhelming in the in the
best way
any any difference between geographies
yeah like you know in
in the developing World
um people are just focused on what this
can do economically right now uh and in
the more developed world there's much
more of a conversation about what the
downsides are going to be and you know
how this is going to disrupt things and
there's still excitement but it's
tempered more by fear that was that was
a striking change a difference
Do you think it will lift up the poorer part of the world? Yeah, I really do. I
think it's going to make everybody
richer but I think it impacts positively
impacts poor people the most
and I think this is true for most kinds
of Technology
um but it should be particularly true
for the democratization of intelligence
you know you or I can afford to pay a
super highly compensated expert if we
need help but a lot of people can't
and to the degree that we can make say
great medical advice available to
everyone
um
you and I benefit from that too but less
less than people who just can't afford
it at all right now
and what would potentially prevent this
from happening
well
we could be wrong about the trajectory
that technology is on I think we are on
a very smooth exponential curve that has
much much further to go
but you know we could be like missing
something we could be drinking our own
Kool-Aid, we could hit a brick wall soon. I don't think we're going to; I think
we have some
remarkable progress ahead of us in the
next few years but yeah we could we
could somehow be wrong for a reason we
don't understand yet
um what is it doing to the global
balance of power
I don't know how that's going to shift
um I'm not sure anyone does but I
certainly don't think that's something
that I'm
particularly well qualified to weigh in
on
But it just seems like it's so key now to the weapons race, the medical race, the self-driving vehicle race, just all these races.
but it's also available
pretty broadly
you know like one of the things that we
think is important is that we make GPT-4
extremely widely available
um even if that means
people are going to use it for things
that we might not always
feel
are the best things to do with it
uh but you know we have a goal of
globally democratizing this technology
and
as far as we know GPT-4 is the most
capable model in the world right now
and it is available to anyone who wants
to pay what I think are the very cheap
API rates. Now, "anyone" is not quite accurate: we block a handful of countries that the US has embargoes with or whatever, but it's pretty available to the world.
but in order to develop it further you
need
um well you need the right chips right
and they are not available
but what matters is how you're going to
get to like GPT six and seven and also
even more than that how you're going to
get the next set of very different ideas
that take you
on a different trajectory. Like, everyone knows how to climb this one hill, and we're gonna go figure out the next hill to climb, and there's not a lot of people
in the world that can do that but
we're committed to making that as widely
available
as we can
Do we know where China is here? We don't; maybe someone does know, but we don't.
do you think there's a chance that well
like they did with weapons that just
suddenly bang they had the supersonic
Rockets we didn't even know they existed
right could that happen yeah
totally it could
I mean we're gonna work as hard as we
can to make sure that
we stay in the lead but
we're a little in the dark
so Mark Andreessen for instance he
thinks we should stuff it into
everything
and you know as part of
the
geopolitical
fight
what do you think
stuff it into everything I mean just
like put it everywhere
that's happening and I think that's
great
like without revealing something I
shouldn't, the amount of GPT-4 usage and the number of people and companies that are integrating it in different ways is staggering, is awesome.
some examples if you had to reveal
something
uh
I mean like you know car makers are
putting it into cars and I was like all
right that sounds like a gimmick and
then I got to try a demo of it and I was
like wow this being able to just talk to
my car
and control it in a sophisticated way
entirely by voice
actually totally changes my experience
of
how I like use a car in a way that I
would not have believed was so powerful
so for instance use it in a car what do
you say
uh this is this is probably where I
don't want to like reveal a partner's
plans but you can imagine a lot of
things that you might say like the basic
stuff is easy like you know I need to go
here and
um I'd like to listen to this music and
also can you make it colder
um
sounds good
Do you depend on newer and even more powerful chips than what we have now? I mean, how much more complex do the chips need to be than the H100 or the latest things from Nvidia?
um
yeah of course like there's
the ways the ways that we can keep
making these models better are we can
come up with better algorithms
or just more efficient implementations
or both we can have uh better chips and
we can have more of them and we plan to
do all three things and they multiply
together
And do you think the chip makers will end up with the profits there? They will end up with profits; I wouldn't say the profits. I think there's many people who are gonna share this massive economic boon.
how much does it cost to train these
models I mean how much have you spent on
free training models
we don't really talk about exact numbers
but like quite a lot
yeah
and what's the challenge of spending so
much money pre-training and then
it lasts for a relatively short period
of time in a way you have to depreciate
the whole investment in order because
you need to invest more in the Next
Generation
uh I mean what are the what's yeah how
do you think how do you think about this
I that's true I don't think they're
going to be as many massive pre-trained
models in the world as people think
I think there will be a handful and then
a lot of people are going to fine-tune
on top of that or whatever
So how do you read the competitive... The part of it that I think is important is, like, you know, when we did GPT-4,
um we did we produced this artifact and
people use it and it generates all this
economic value and um you're right that
does depreciate fast but in the process
of that we learned so much about how to
go
we pushed the frontier of research so
far forward and we learned so much that
it'll be critical to us being able to go
do GPT-5 someday or whatever, that it's
like you're not just depreciating the
capex one time for the model you have
generated a huge amount of new IP to
help you keep keep making better models
um
so the way you read the competitive
landscape now how what does it look like
uh
I mean there are going to be many people
making great models; we'll be one of them. We'll contribute our AGI to the world, to society, among others,
I think that's fine and
you know we'll all
run different experiments we'll try
setting you know we'll have different
features different capabilities we'll
have different opinions about what the
rules of a model should be
and through the magic of competition uh
and users deciding what they want
we'll get to a very good place
How far ahead of the competition do you think you are?

I don't know. I don't think about that much, to be honest. Our customers are very happy; they are desperate for more features and more capacity, and for us to be able to deliver our service in all of these little better ways, and we're very focused on that. I'm sure Google will have something good here at some point, but I think they're racing to catch up with where we are, and we're thinking very far ahead of that.
So normally in the software business you have something which is very cheap, where you ship a lot of it, or something which is very expensive, where you don't ship so much. Here you could potentially ship something... and I can see you smiling here.

Exactly.

So tell us, how is this going to work?
You know, I'll tell you, one of the most fun things about this job is that we are past the point as a company, and I am past the point as the CEO running this company, where there's a road map to follow. We're just doing a bunch of things that are outside of the standard Silicon Valley received wisdom, and so we get to just say, well, we're going to figure it out, and we're going to try things, and if we got it wrong, who cares? It's not like we screwed up something that was already figured out. I mean, back to our very founding: most big tech companies start as a product company, and eventually they bolt on a research lab, and that doesn't work very well. We started as a research lab and then bolted on a product company, which didn't work very well either, and now we're making that better and better.

But you bolted on a product company? I mean, Microsoft?

No, no, I mean having to figure out how to ship the API and ChatGPT. We really did just start as a research lab, and then one day we were like, we're going to make a product, and then we're going to make another product, and now that product is the fastest-growing product in history, or whatever, and we weren't set up for that.
Is the usage of ChatGPT decelerating?

No. I think it maybe flatlined a little during the summer, which happens for lots of products, but it is going up.
Tell us about the relationship with Microsoft. How does that work?

I mean, at a high level, they build us computers, we train models, and then we both use them. It's a pretty clear and great partnership.
Are your goals aligned?

Yeah, they really are. There are of course areas where we are not perfectly aligned, like in any partnership in life or business or whatever; I won't pretend it's perfect. But it is very good, and we are aligned at the highest levels, which is really important. As for the misalignments that come up at the lower levels once in a while: no contract in the world is what makes a partnership good. What makes a partnership good is that when those things happen, Satya and Kevin and I talk, we figure it out, and there's a good spirit of compromise over a long time.
Now, they've been one of the initiators, together with you, in terms of self-regulating this space. Can this type of thing be self-regulated?

Not entirely. I think it needs to start that way, and I think that's also kind of how you figure out a better answer, but governments are going to have to do their own thing here. We can provide input to that, but we're not the elected decision-makers of society, and we're very aware of that.
And what can governments do?

Anything they want. I think people forget this: governments have quite a lot of power; they just have to decide to use it.
Yeah, but let's say Europe now decides that they're going to regulate you really harshly. Are you just going to say, goodbye, Europe? No?

Possibly, but I don't think that's what's going to happen. I think we'll have a very productive conversation. I think Europe will regulate AI, but reasonably, not very harshly.

I'm sorry, and what is a reasonable regulation? What is that level?
I think there are many ways it could go that would all be reasonable. But to give one specific example, and I'm surprised this is controversial at all: a regulatory idea that's coming up a lot in Europe and elsewhere is that if you're using an AI, you've got to disclose it. So if you're talking to a bot and not a person, you need to know that. That seems like a super reasonable and important thing to do to me, for a bunch of reasons, given what's starting to happen. To my surprise, there are some people who really hate that idea, but I'd say that's a very, very reasonable regulation.

I agree.
Do you think we'll get global regulation? Is there any shape in which that can happen?

I think we're going to get it for only the most powerful systems. Individual countries, or blocs of countries, are not going to give up their right to self-determine, for example, what a model can say and not say, or how to think about free-speech rules and whatever. But for technology that is capable of causing grievous harm to the entire world, as we have seen before with nuclear weapons and a small number of other examples, yes, I think we are going to come together and get good global regulation.

But given how embedded it now is in everything, as we spoke about: weapons, your car; you're sitting in your car and it's super cool, it does cold and hot and music and this and that. Say you're a Chinese car company and you want to compete with the Americans. Why would you want to have regulation on this?

Well, GPT-4 I don't think needs global regulation, nor should it have it. I'm talking about what happens when we get to GPT-10 and it is, say, smarter than all humans put together.

And that's when you think we get it?

That's when I think we'll get it.
When you have the cost of intelligence coming down so dramatically, like it is now, what is it going to do to productivity in the world?

I mean, it's supposed to go up a lot, right? That's what theory tells us, and that's what I think.

So I've told everybody in our company that we should improve our productivity by about 10 percent over the next 12 months, all of us. And do you know how I got the number?

Did you ask GPT?

No, I just took it straight out of the air.

What do you think about that number? Is it low, high, under-ambitious? What should productivity increase by?
How do you measure the stuff you do?

It's not a very good measurement, but just the kind of stuff that I produce.

How much of your company writes code?

People in technology, probably 15 to 20 percent of us. More, actually.

Okay, let's say that's 20 percent writing code. I think an overall goal of a 20 percent productivity increase in a 12-month period is appropriately ambitious, given the tools that we will launch over the next 12 months.

Okay, sounds like I should up the game here a bit.

I think so, yeah.

I'll just tell everybody you told me to, so that's fine.

It's better to set a goal that is slightly too ambitious than significantly under-ambitious, in my opinion.
yeah
Now, is there an inherent limitation to what AI can achieve? I mean, is there a point of no further progress?

I couldn't come up with any reasonable explanation of why that should be the case.
You say that most people overestimate risk and underestimate reward. What do you mean by that?

You know, there are a lot of people who don't go start the company, or take the job they want to take, or try a product idea, because they think it's too risky. And if you really ask them, all right, can we unpack that, can you explain what the risk is and what's going to go wrong, it's like, well, the company might fail. Okay, and then what? Well, then I have to go back to my old job. All right, that seems reasonable. And they're like, well, you know, but I'll be a little embarrassed. And I'm like, oh, is that the cost? I think people view that as a super risky thing, and they view staying in a job where they're not really progressing or learning more or doing new things for 20 years as not risky at all. And to me that seems catastrophically risky: to miss out on 20 years of your very limited life and energy to try to do the thing you actually want to do. That seems really risky, but it's not thought of that way.
Talking about staying in your job: leaders and CEOs. How is AI going to change the way leaders need to act and behave?

Well, hopefully it's going to do my job. You know, hopefully the first thing we do with AGI is let it run OpenAI, and I can go sit on the beach.

That'd be great.

I wouldn't want to do that for long, but right now it sounds really nice.
How do you develop the people in your company? How do you develop your leaders?

I think developing leaders tend to fail at the same set of things most of the time. They don't spend enough of their time hiring talent and developing their own teams, they don't spend enough of their time articulating and communicating the vision of their team, and they don't spend enough of their time thinking strategically, because they get bogged down in the details. So when I put a new person in a very senior role, which I always try to do with promotions (I'm willing to hire externally, but I'd always rather promote internally), I have them over for dinner, or go for a walk, or sit down or something, and say: here are the ways you're going to screw up. I'm going to tell you all of them right now. You're going to totally ignore me on this and not believe me, or at least not act on them, because you're going to think you know better, or that you won't make these mistakes. But I'm going to put this in writing and hand it to you, and we're going to talk about it in three months and in six months, and eventually I think you'll come around. And they always ignore me, and they always come around.
And I think letting people recognize that for themselves, while telling them up front so that it's at least in their mind, is very important. Those are the most common ways leaders screw up: failing to recruit and promote; then failing to build a good delegation process; and then, as a consequence of those, not having enough time to set strategy because they're too bogged down in the day-to-day, and they can't get out of that downward spiral.

What does your delegation process look like?

Two things. Number one, high-quality people. Number two, setting the training wheels at the right height, and raising them over time as people learn more and I build up more trust.
Is that the way to manage geniuses?

Researchers are a different thing; I was talking about executives who run the place.

Okay, what about researchers? What about the geniuses, the prima donnas?

Well: pick really great people, explain the general direction of travel and the resources we have available, and, at a high level, where we need to get to in order to reach the next level; you know, we have to achieve this to go get the next ten-times-bigger computer, or whatever. Then provide only the mildest input, like, it would be really great if we could pursue this research direction, and this would be really helpful. And then step back. So we set a very high-level vision for the company and what we want to achieve, and beyond that, researchers get a huge amount of freedom.
Do you think companies generally are too detailed in the remit they give their teams?

Yes, at least for our kind of thing. We talked earlier about having to rediscover a bunch of things, and I'd say this realizing it's going to come across as arrogant, which I don't mean, but I think it's an important point: there used to be great research happening in Silicon Valley companies, Xerox PARC being the obvious example, and there has not been for a long time. We really had to rediscover that, and we made many screw-ups along the way in learning how to run a research effort well: how you balance letting people go off and do whatever against trying to get the company to point in the same direction, and then, over time, how to get to a culture where people will try lots of things but recognize where the promising directions are and, on their own, want to come together and say, let's put all of our firepower behind this one idea, because it seems like it's really working. You know, I'd love to tell you we always knew language models were going to work. That was absolutely not the case; we had a lot of other ideas about what might work. But when we realized that language models were going to work, we were able to get the entire research brain trust, or almost the entire brain trust, behind them.
I'm slightly surprised you say there was no innovation culture in Silicon Valley, because that's a bit contrary to what I thought. So there is one?

Yeah, there's a product innovation culture, for sure, and a good one. But, and again I hate to say this because it sounds so arrogant, before OpenAI, what was the last really great scientific breakthrough that came out of a Silicon Valley company?

And why did that happen? What happened there?

Well, we got a little lucky...

No, sorry, I don't mean you. Why did this culture disappear in Silicon Valley, do you think?
I have spent so much time reflecting on that question, and I don't fully understand it. I think it got so easy to make a super valuable company, and people got so impatient on timelines and return horizons, that a lot of the capital went to things that could fairly reliably multiply money in a short period of time by just saying, we're going to take the magic of the technology we have now, the internet, mobile phones, whatever, and apply it to every industry. That sucked up a lot of talent, very understandably.
Now, your co-founders, what should we say, are pretty into big hairy goals, right?

Yeah. I mean, we're trying to make AGI. I think that's the biggest, hairiest goal in the world.

Not so many companies have that kind of co-founder: people with that kind of track record, that type of talent magnetism, funding capabilities, and so on. How important was that?

You mean Elon by this, right?

Yeah, and some of the other people you worked with in the beginning.
Well, there were six co-founders: Elon and me, Greg and Ilya and John and Wojciech. Elon was definitely a talent magnet, and an attention magnet for sure, and he also just has some real superpowers that were super helpful to us in those early days, aside from all of those things, and he contributed in ways that we're very grateful for. But the rest of us were pretty unknown. Maybe I was somewhat known in technology circles because I was running Y Combinator, but not in a major way. And so we just had to grind it out, but that was a good and valuable process.
What is your superpower?

I think I'm good at thinking very long term and not being constrained by common wisdom. Evaluating talent, too; that was a really helpful thing to learn from Y Combinator.

You said in 2016 that long-term thinking is a competitive advantage because almost no one does it.
yeah
I mean, when we started OpenAI and said we're going to build AGI, everybody was like, that's insane: A, it's 50 years away, and B, it's the wrong thing to even be thinking about; you should be thinking about how to improve this one thing this year. You know, also, it's unethical to even say you're working on it, because it's such science fiction and you're going to cause another AI winter, because it's too much hype. And we just said, it's going to take us a while, but we're going to go figure out how to do it.
You said you were good at assessing talent. How do you do it?

I don't know. I have a lot of practice, but I don't have words for it. I can't tell you, here are the five questions I ask, or here's the one thing I always look for. But, you know, assessing whether someone is smart, whether they have a track record of getting things done, and whether they have novel ideas that they're passionate about: I think you can learn how to do that through thousands of conversations, even if it's hard to explain.
Why is Europe so behind, generally, when it comes to innovation and an innovative culture?

I'd ask you that. I don't know. Why is it? What is it, first of all? Well, I guess it is behind.

Look at where the big tech companies are, where the big innovations come from.

It's certainly behind, certainly very behind in hyperscale software companies; there's no question there.

A big fear of failure? Is it a cultural thing?

There are a lot of things going into that cocktail, I think. The fear of failure, and the kind of cultural environment or backdrop there, is huge, no doubt. You know, we funded a lot of European people at YC, and a thing they would always say is that they could not get used to the fact that in Silicon Valley, failure is tolerated.

Have you failed at stuff?

Big time. And I'm sure I'll fail at stuff in the future.

What was the biggest failure so far?
Well, monetarily, I've made a lot of big investments that have gone to zero, just a crater in the ground. But in terms of time and psychological impact on me: I did a startup from when I was about 19 to 26, worked unbelievably hard, let it consume my life, and failed at it. That was quite painful and quite demoralizing. You learn to get back up after stuff like that, but it's hard.

How do you get back up?

I mean, one of the key insights for me was realizing that although I thought this was terribly embarrassing and shameful, no one but me spent much time thinking about it.
Who do you ask for advice, personally?

My strategy is not to have just one person I go to with everything. A lot of people do that; they have one mentor they go to for every big decision. My strategy is to talk to a ton of different people when I'm facing a big decision and try to synthesize the input from all of that. So if I'm facing a real major strategic challenge for OpenAI, one of these bet-the-company things, I would bet that, counting people internal and external to the company, I talk to 50 people about it, and probably out of 30 of those conversations I would hear something interesting or learn something that updates my thinking. That's my strategy.
So now, outside AI, what are you most excited about?

Fusion. I think we're going to get fusion to work very soon. And my model, if you boil everything down, is that to get to abundance in the world, the two biggest, most important things are bringing the cost of intelligence way down and bringing the cost of energy way down while increasing the amount available. I think AI is the best way to do the former, and fusion is the best way to do the latter. In a world where we look at energy that's less than a penny per kilowatt-hour, and, more importantly, we can have as much as we want and it's totally clean, that's a big deal.

Do you think it's going to solve the climate problem?

Yes. We'll have to use it to do other things too; we'll have to use some of it to capture carbon, because we've already done so much damage. But yes, I do.
What about crypto?

I am excited by the vision of crypto, and it has so far failed to deliver on that promise.

But you have plans?

It's not something I'm spending much time on. OpenAI is taking over my whole life, so I can have a lot of plans about OpenAI. There are other projects that I've invested in or helped start, and I feel bad because I don't have much time to offer them anymore, but they're all run by super capable people, and I assume they'll figure it out.
What do you read?

The thing that has unfortunately gone most by the wayside for me recently has been free time, and thus reading, so I don't get to read much these days. I used to be a voracious reader, and there was one year where I read, not fully in every case, but more than skimmed, 50 textbooks, and that was an unbelievable experience. But this last year I have not read many books.

What's the one book young people should read?

That's a great question; picking one is really hard. I don't think it's the same for every young person, and coming up with a generic, singular recommendation here is super hard. I don't think I can give a faithful answer on this one.
That's fine. Now we are fast-forwarding...

You know what, can I actually? I do have one. This is not the one for every young person, but I wish a lot more people would read The Beginning of Infinity early on in their career or their lives.

The Beginning of Infinity. By?

I think that doesn't matter; we'll find it. I think it's the most inspiring: you can do anything, you can solve any problem, and it's important to go off and do that. I felt it was a very expansive book in the way I thought about the world.
Well, Sam, I think that's a very beautiful place to go in for landing. Now, the last one: fast forward a couple of decades, and people sit down and reflect on Sam Altman's impact on the tech world and society. What do you hope they'll say? What do you hope your legacy will be?

You know, I'll think about that when I'm at the end of my career. Right now my days are spent trying to figure out why this executive is mad at that one, why this product is delayed, why the network on our big new training computer is not working and who screwed that up and how to fix it. It's very caught up in annoying tactical problems; there is no room to think about legacy. We're just trying to go off and build this thing.

Fantastic. Well, good luck with that. It's been an absolutely fantastic conversation, and all the best of luck. Go get them.

Great talking to you. Thank you for having me.