OpenAI's Altman and Makanju on Global Implications of AI
Summary
TL;DR: Sam Altman and Anna Makanju of OpenAI discuss AI safety, regulation, energy use, and preparing for artificial general intelligence in a Bloomberg roundtable interview during the 2024 Davos conference. They talk about OpenAI's content moderation efforts, its relationship with publishers and artists, the pace of AI progress, and the need for vastly more energy to power advanced AI while combating climate change.
Takeaways
- OpenAI aims to restrict harmful uses of AI like misinformation while enabling beneficial ones.
- Demand for AI compute power will drive breakthroughs in fusion, solar and energy storage.
- AI will augment and enhance human productivity more than replace jobs.
- OpenAI seeks partnerships with news publishers to properly attribute content.
- Advanced AI may discover new scientific knowledge and even do AI research.
- Regulating AI risks stifling entrepreneurial innovation.
- AI progress will likely continue at an exponential, continuous pace.
- AI safety standards and best practices still need development.
- Well-designed AI can make technology feel more human-compatible.
- Preparing for transformative AI requires humility about the future.
Q & A
What content moderation efforts is OpenAI making regarding AI?
- OpenAI is banning use of ChatGPT for political campaigns, adding cryptographic watermarks to AI images, and partnering with secretaries of state to surface authoritative voting information.
How does OpenAI view AI's impact on jobs and employment?
- OpenAI sees AI as more likely to augment human productivity than replace jobs, though it concedes AI may still significantly alter many occupations.
What is OpenAI's perspective on training AI systems?
- OpenAI believes future AI systems will need far less training data than currently assumed, using smaller high-quality datasets rather than sheer quantities of data.
How does OpenAI characterize the pace of AI progress?
- OpenAI expects AI capabilities to improve at an exponential, continuous pace, with each new system delivering impressive advances.
What is the greatest uncertainty regarding advanced AI?
- What societal impacts will emerge when everyone has access to highly capable AI assistants and collaborators.
Why does OpenAI iteratively deploy AI systems?
- To give people time to gradually adapt to the technology while mistakes occur at low stakes, allowing co-evolution of technology and society.
What role can AI have regarding scientific knowledge?
- Advanced AI systems may discover entirely new scientific knowledge that benefits humanity.
How can AI research best progress safely?
- Through collaboration on safety standards and best practices among organizations like the new Frontier Model Forum.
What makes for good AI system design?
- Systems that feel natural, intuitive, and human-compatible in how they are interacted with.
What mindset is most constructive regarding transformative AI?
- Cautious optimism along with humility about the difficulty of predicting the future societal impacts of advanced AI.
Outlines
Discussion on AI Guidelines and Elections
Paragraph 1 covers a discussion between the speakers on OpenAI's newly announced guidelines restricting the use of AI tools like ChatGPT in political campaigns. They talk about how OpenAI plans to enforce these guidelines at scale using safety systems and partnerships.
Bipartisan Nature of AI Regulation
Paragraph 2 continues the discussion on AI regulation, with the speakers highlighting the bipartisan support and agreement so far on the need to regulate AI technology.
AI Applications for National Security
Paragraph 3 has the speakers talk about OpenAI's evolving policies around use of their AI for military and national security applications. They are open to cybersecurity and veteran wellbeing focused collaborations but not developing weapons.
Engaging Responsibly with Artists
Paragraph 4 covers the issues around use of AI generative models like DALL-E in art. The speakers talk about respecting artist preferences on use of their style and data while finding ways for them to benefit from and collaborate on the technology.
Learning from Past Controversies
In Paragraph 5, Anna talks about leveraging learnings from past industry controversies in government partnerships and getting policymakers to better understand AI technology from early on.
Preparing for Societal Impacts
Paragraph 6 has Sam highlight the potential for rapid advancement in AI capabilities and the resulting responsibility leaders have for thoughtful governance and policies to shape positive societal impacts.
Developing Safely with Humility
Sam continues on responsible development of AI in paragraph 7, emphasizing the need for humility and safety-focused iterative deployment to give society time to gradually adapt and government policies to evolve.
Possibilities of AI-Powered Devices
In the final paragraph, when asked about reports of collaborations with Elon Musk's Neuralink, Sam leaves open the possibility of building new kinds of AI empowered devices, though not as replacements for smartphones.
Keywords
💡Artificial General Intelligence (AGI)
💡Compute
💡Regulation
💡Governance
💡Safety
💡Alignment
💡Climate change
💡Exponential growth
💡Uncertainty
💡Co-evolution
Highlights
OpenAI introduced new guidelines banning ChatGPT use in political campaigns and adding cryptographic watermarks to AI-generated images.
Sam Altman doesn't believe AI will displace jobs as much as previously thought. He sees it more as a productivity-enhancing tool that lets people do more.
Anna Makanju says governments are becoming more interested in understanding and potentially regulating AI, though most are not yet ready to incorporate it operationally.
Sam believes developing AGI will require a massive increase in energy production and breakthroughs like fusion or much cheaper solar and storage.
Sam thinks the world will change slowly at first and then quickly as AI capabilities advance exponentially, but says no one knows exactly what happens next.
Anna sees government AI regulation accelerating in 2024 as more comprehensive policies like the EU AI Act and US executive order are implemented.
Sam wants leaders at Davos to understand 2023 woke people up to AI's potential, but what's coming with models like GPT-5 will be far more impressive and transformative.
Sam says OpenAI considers its systems AI assistants rather than standalone products or entities.
Sam expects OpenAI's next model after GPT-4 to be very impressive, doing new things not possible before and improving on GPT-4's capabilities.
Sam thinks new specialized AI devices could be created to augment humanity, but they won't replace general-purpose tools like smartphones.
Anna wants leaders at Davos to balance realistic concerns about AI with messaging that lets people engage with and benefit from its potential.
Sam believes the way AI products are designed matters hugely for making the technology feel accessible rather than scary or mystical.
Sam thinks no one has strong intuitions for what happens when AI becomes thousands of times cheaper and more capable.
Sam argues that gradual, iterative AI deployment builds public familiarity and helps society and policy co-evolve alongside rapid technical advances.
Anna thinks clearer industry standards are needed for concepts like AI safety that lack common definitions and approaches.
Transcripts
you guys made some news today announcing some new guidelines around the use of AI in elections I'm sure it's all uh stuff that the Davos set loved to hear uh you banned the use of ChatGPT in political campaigns you introduced cryptographic watermarks for images created by DALL·E to create kind of provenance and transparency around the use of AI generated images I read it and I thought you know this is great some of these principles are shared by much larger platforms like Facebook and TikTok and YouTube and they have struggled to enforce it how do you make it real
I mean a lot of these are things that we've been doing for a long
things that we've been doing for a long
time and we have a really strong safety
systems team that um not only sort of
has monitoring but we're actually able
to leverage our own tools in order to
scale our enforcement which gives us I
think a significant advantage um but uh there are also some really important partnerships like with the National Association of Secretaries of State so we can surface authoritative voting information so we have quite a few ways that we are able to enforce this
I mean Sam does this put your mind at ease that OpenAI doesn't move the needle in some 77 upcoming critical democratic elections in 2024
no we're quite focused on it uh and I think it's good that our mind is not at ease I think
it's good that we have a lot of anxiety
and are going to do everything we can to
get it as right as we can um I think our
role is very different than the role of
a distribution platform but still
important we'll have to work with them
too uh it'll you know it's like you
generate here and distribute here uh and
there needs to be a good conversation
between them but we also have the
benefit of having watched what's
happened in previous Cycles with
previous uh you know Technologies and I
don't think this will be the same as
before I I think it's always a mistake
to try to fight the last war but we do
get to take away some learnings from
that
and so I I wouldn't you know I I think
it'd be terrible if I said oh yeah I'm
not worried I feel great like we're
going to have to watch this incredibly
closely this year super tight monitoring
super tight feedback loop
Anna you were at Facebook before OpenAI so I almost apologize for asking it in this way uh probably a trigger phrase but do you worry about another Cambridge Analytica moment
I think as Sam alluded to there are a lot of learnings that we can leverage but also OpenAI from its inception has been a company that thinks about these issues it was one of the reasons that it was founded so I think I'm a lot less concerned because these are issues that our teams have been thinking about from the beginning of uh our building of these tools
Sam
Donald Trump just won the Iowa caucus
yesterday uh we are now sort of
confronted with the reality of this
upcoming election what do you think is
at
stake in the in the US election for for
Tech and for the safe stewardship of AI
do you feel like that's a a critical
issue that voters should and will have
to consider in this election
I think the now confronted is part of the problem uh I actually think most people who come to Davos
say that again I didn't quite get that
I think part of the problem is we're saying we're now confronted you know it never occurred to us that what Trump is saying might be resonating with a lot of people and now all of a sudden after this performance in Iowa oh man um it's a very like Davos centric you know um I've been here for two days I guess
just uh so I I would love if we had a
lot more reflection and if we started it
a lot sooner um about and we didn't feel
now confronted but uh I think there's a
lot at stake at this election I think
elections are you know huge deals I
believe that America is going to be fine
no matter what happens in this election
I believe that AI is going to be fine no
matter what happens in this election and
we will have to work very hard to make
it so um but this is not you know no one
wants to sit up here and like hear me
rant about politics I'm going to stop
after this um but I think there has been
a real failure to sort of learn lessons
about what what's kind of like working
for the citizens of America and what's
not Anna I want to ask you the same
question uh um you know taking your
political background into account what
do you feel like for Silicon Valley for
AI is at stake in the US election I
think what has struck me and has been
really remarkable is that the
conversation around AI has remained very
bipartisan and so you know I think that
the one concern I have is that somehow
both parties hate
it no but you know this is like an area
where um
you Republicans tend to of course have a
an approach where they are not as in
favor of Regulation but on this I think
there's agreement on both parties that
they are consider they believe that
something is needed on this technology
you know Senator Schumer has this
bipartisan effort that he is running
with his Republican counterparts again
uh when we speak to people in DC on both
sides of the aisle for now it seems like
they're on the same page and do you feel
like all the existing campaigns are equally articulate about the issues relating to AI
I don't know that AI has really been a campaign issue to date so it will be interesting to see how that goes
if we're right about what's going to happen here this is like bigger than just a technological revolution in some sense I mean sort of like all technological revolutions are societal revolutions but this one feels like it can be much more of that than usual and
so it it is going to become uh a social
issue a political issue um it already
has in some ways but I think it is
strange to both of us that it's not more
of that already but with what we expect
to happen this year not with the
election but just with the the increase
in the capabilities of the products uh
and as people really
catch up with what's going to happen
what is happening what's already
happened uh there's like a lot of unease always in society
well I mean there are
political figures in the US and around
the world like Donald Trump who have
successfully tapped into a feeling of
yeah
dislocation uh anger of the working
class the feeling of you know
exacerbating inequality or technology
leaving people behind is there the
danger that uh you know AI furthers
those Trends yes for sure I think that's
something to think about but one of the
things that surprised us very pleasantly
on the upside uh cuz you know when you
start building a technology you start
doing research you you kind of say well
we'll follow where the science leads us
and when you put a product you'll say
this is going to co-evolve with society
and we'll follow where users lead us but
it's not you get you get to steer it but
only somewhat there's some which is just
like this is what the technology can do
this is how people want to use it and
this is what it's capable of and this
has been much more of a tool than I
think we expected it is not yet and
again in the future it'll it'll get
better but it's not yet like replacing
jobs in the way to the degree that
people thought it was going to it is
this incredible tool for productivity
and you can see people magnifying what
they can do um by a factor of two or
five or in some way where it doesn't even make sense to talk about a number because they just couldn't do the
things at all before and that is I think
quite exciting this this new vision of
the future that we didn't really see
when we started we kind of didn't know
how it was going to go and very thankful
the technology did go in this direction
but where this is a tool that magnifies
what humans do lets people do their jobs
better lets the AI do parts of jobs and
of course jobs will change and of course
some jobs will totally go away but the
human drives are so strong and the sort
of way that Society works is so strong
that I think and I can't believe I'm
saying this because it would have
sounded like an ungrammatical sentence
to me at some point but I think AGI will
get developed in the reasonably
close-ish future and it'll change the
world much less than we all think it'll
change jobs much less than we all think
and again that sounds I may be wrong
again now but that wouldn't have even
compiled for me as a sentence at some
point given my conception then of how
AGI was going to go as you've watched
the technology develop have you both
changed your views on how significant
the job dislocation and disruption will
be as AGI comes into Focus so this is
actually an area that we know we have a
policy research team that studies this
and they've seen pretty significant
impact in terms of changing the way
people do jobs rather than job
dislocation and I think that's actually
going to accelerate and that it's going
to change more people's jobs um but as
Sam said so far it hasn't been a significant replacement of jobs you
know you hear a coder say okay I'm like
two times more productive three times
more productive whatever than they used
to be and I like can never code again
without this tool you mostly hear that
from the younger ones but
um it turns out and I think this will be
true for a lot of Industries the world
just needs a lot more code than we have
people to write right now and so it's
not like we run out of demand it's that
people can just do more expectations go up but ability goes up too
I want to ask you about
another news report today that suggested
that open AI was relaxing its
restrictions around the use of AI in
military projects and developing weapons
can you say more about that and you what
work are you doing with the US
Department of Defense and other military
agencies so a lot of these policies were
written um before we even knew what
these people would use our tools for so
what this was not actually just the
adjustment of the military use case
policies but across the board to make it
more clear so that people understand
what is possible what is not possible
but specifically on this um area we
actually still prohibit the development
of weapons um the destruction of
property harm to individuals but for
example we've been doing work with the
Department of Defense on um cyber
security tools for uh open source
software that secures critical
infrastructure we've been exploring whether it can assist with veteran suicide prevention and because we previously had what essentially was a blanket prohibition on military use many people felt
like that would have prohibited any of
these use cases which we think are very
much aligned with what we want to see in
the world has the US government asked
you to restrict the level of cooperation
with uh militaries in other
countries um they haven't asked us but
we certainly are not you know right for now actually our discussions are focused on um United States national security
agencies and um you know I think we have
always believed that democracies need to
be in the lead on this technology uh Sam
changing topics uh give us an update on
the GPT store and are you seeing maybe
probably explain it briefly and are you
seeing the same kind of explosion of
creativity we saw in the early days of
the mobile app stores yeah the same
level of creativity and the same level
of crap but it I mean that happens in
the early days as people like feel out a
technology there's some incredible stuff
in there too um
give us an example of the GPTs
should I say what GPTs are first
yeah sure um so GPTs are a way to do a very lightweight customization of ChatGPT and if you want it to behave in a
particular way to use particular data to
be able to call out to an external
service um you can make this thing and
you can do all sorts of like uh great
stuff with it um and then we just
recently launched a store where you can
see what other people have built and you
can share it and um I mean personally one that I have loved is AllTrails I
have this like every other weekend I
would like to like go for a long hike
and there's always like the version of
Netflix that other people have where
it's like takes an hour to figure out
what to watch it takes me like two hours
to figure out what hike to do and the AllTrails thing to like say I want this
I want that you know I've already done
this one and like here's a great hike
it's been I it's sounds silly but I love
that one have you added any gpts of your
own have I made any yeah um I have not
put any in the store maybe I will great
um can you give us an update on the
volume or or the pace at which you're
seeing new gpts um the number I know is
that there had been 3 million created
before we launched the store I have been
in the middle of this trip around the
world that has been quite hectic and I
have not been doing my normal daily
metrics tracking so I don't know how
it's gone since launch but I'll tell you
by the slowness of ChatGPT it's
probably doing really
well um I want to ask you about OpenAI's copyright issues uh how important are publisher relations to OpenAI's business considering for example the lawsuit filed last month against OpenAI by the New York Times
they are important
but not for the reason people think um
there is this belief held by some people
that man you need all of my training
data and my training data is so valuable
and actually uh that is generally not
the case we do not want to train on the
New York Times data for example um and
and more generally we're getting to a
world where it's been like data data
data you just need more you need more
you need more you're going to run out of
that at some point anyway so a lot of
our research has been how can we learn
more from smaller amounts of very high
quality data and I think the world is
going to figure that out what we want to
do with Publishers if they want is when
one of our users says what happened to
Davos today be able to say here's an
article from Bloomberg here's an article
from New York Times and here you know
here's like a little snippet or probably
not a snippet there's probably some
cooler thing that we can do with the
technology and you know some people want
to partner with us some people don't
we've been striking a lot of great
Partnerships and we have a lot more
coming um and then you know some people
don't want want to uh we'd rather they
just say we don't want to do that rather
than Sue us but like we'll defend
ourselves that's fine too I just heard
you say you don't want to train on the
New York Times does that mean given the
the legal exposure you would have done
things differently as you trained your
model here's a tricky thing about that
um people the web is a big thing and
there are people who like copy from The
New York Times and put an article
without attribution up on some website
and you don't know that's a New York
Times article if the New York Times
wants to give us a database of all their
articles or someone else does and say
hey don't put anything out that's like a
match for this we can probably do a pretty good job and um solve it we don't want to regurgitate someone else's content um but the problem is not as
easy as it sounds in a vacuum I think we
can get that number down and down and
down have it be quite low and that seems
like a super reasonable thing to
evaluate us on you know if you have
copyrighted content whether or not it
got put into someone else's thing
without our knowledge and you're willing
to show us what it is and say don't
don't put this stuff as a direct
response we should be able to do that
um again it won't like thousand you know
monkeys thousand typewriters whatever it
is once in a while the model will just
generate something very close but on the
whole we should be able to do a great
job with this um so there's like there's
all the negatives of this people like ah
you know don't don't do this but the
positives are I think there's going to
be great new ways to consume and
monetize news and other published
content and for every one New York Times
situation we have we have many more
Super productive things about people
that are excited to to build the future
and not do theatrics
and what about DALL·E I mean there have been artists who have been upset with DALL·E 2 and DALL·E 3 what has that taught you and how will you do things differently
we engage with the artist
Community a lot and uh you know we we
try to like do the requests so one is
don't don't generate in my style um even
if you're not training on my data super
reasonable so we you know Implement
things like that
um you know let me opt out of training
even if my images are all over the
Internet and you don't know what they are and so there's a lot of other things too
what I'm really excited
to do and the technology isn't here yet
but get to a point where rather than the
artist say I don't want this thing for
these reasons be able to deliver
something where an artist can make a
great version of DALL·E in their style
sell access to that if they want don't
if they don't want just use it for
themselves uh or get some sort of
economic benefit or otherwise when
someone does use their stuff um and it's
not just training on their images it
really is like you know it really is
about style uh and and that's that's the
thing that at least in the artist
conversations I've had that people are
super interested in so for now it's like
all right let's know what people don't
want make sure that we respect that um
of course you can't make everybody happy
but try to like make the community feel
like we're being a good partner um but
what what I what I think will be better
and more exciting is when we can do
things that artists are like that's
awesome
Anna you are OpenAI's ambassador to Washington and other capitals around the world I'm curious what you've taken from your experience at Facebook what you've taken from the tense relations between a lot of tech companies and governments and regulators over the past few decades and how you're putting that to use now at OpenAI
I mean so I
think one thing that I really learned
working in government and of course I
worked in the White House during the
2016 Russia election interference and
people think that that was the first
time we'd ever heard of it but it was
something that we had actually been
working on for years and thinking you
know we know that this happens what do
we do about it and one thing I never did
during that period is go out and talk to
the companies because it's not actually a typical thing you do in government and it was much more rare back then especially
with you know these emerging tools and I
thought about that a lot as I entered
the tech space that I regretted that and
that I wanted governments to be able to
really understand the technology and how
the decisions are made by these
companies and also just honestly when I first joined OpenAI no one of course had heard of OpenAI in government for the most part
and I thought every time I used it I thought my God if I'd had this for the 8 years I was in the administration I could have gotten 10 times more done so for me it was really how do I get my colleagues to use it um especially with OpenAI's mission to make sure these
tools benefit everyone I don't think
that'll ever happen unless governments
are incorporating it to serve citizens
more efficiently and faster and so this
is actually one of the things I've been
most excited about is to just really get
governments to use it for everyone's
benefit I mean I'm hearing like a lot of
sincerity in that pitch are Regulators
receptive to it it feels like a lot are
coming to the conversation probably with
a good deal of skepticism because of
past interactions with Silicon Valley I
think I mostly don't even really get to
talk about it because for the most part
people are interested in governance and
Regulation and I think that they know um
theoretically that there is a lot of
benefit the government many governments
are not quite ready to incorporate I
mean there are exceptions obviously
people who are really at the Forefront
so it's not you know I think often I
just don't even really get to that
conversation
so I want to ask you both about the dramatic turn of events in uh November Sam one day the window on these questions will close
um that is not
you think they will
I think at some point they probably will but it hasn't happened yet so it doesn't matter um I guess my
question is is you know have you
addressed the Govern the governance
issues the very unique uh corporate
structure at open AI with the nonprofit
board and the cap profit arm that led to
your ouer we're going to focus first on
putting a great full board in place um I
expect us to make a lot of progress on
that in the coming months uh and then
after that the new board uh will take a
look at the governance structure but I
think we debated both what does that
mean is it should open AI be a
traditional Silicon Valley for-profit
company we'll never be a traditional
company but the structure I I think we
should take a look at the structure
maybe the answer we have now is right
but I think we should be willing to
consider other things but I think this
is not the time for it and the focus on
the board first and then we'll go look
at it from all angles I mean presumably
you have investors including Microsoft
including uh your Venture Capital
supporters um your employees who uh over
the long term are seeking a return on
their investment um I think one of the
things that's difficult to express about
OpenAI is the degree to which our team
and the people around us investors
Microsoft whatever are committed to this
Mission um in the middle of that crazy
few days uh at one point I think like 97
something like that 98% of the company
signed uh a letter saying you know we're
all going to resign and go to something
else and that would have torched
everyone's equity and for a lot of our
employees like this is all or the great
majority of their wealth and people
being willing to go do that I think is
quite unusual our investors who also
were about to like watch their Stakes go
to zero which just like how can we
support you and whatever is best for for
the mission Microsoft too um I feel very
very fortunate about that uh of course
also would like to make all of our
shareholders a bunch of money but it was
very clear to me what people's
priorities were and uh that meant a lot
I I I sort of smiled because you came to
the Bloomberg Tech Conference in last
June and Emily Chang asked uh something along the lines of why should we trust you and you very candidly said you shouldn't and you said
the board should be able to fire me if
if they want and of course then they did
and you quite uh adeptly orchestrated
your return actually let me tell you
something um I the board did that I was
like I think this is wild super confused
super caught off guard but this is the
structure and I immediately just went to
go thinking about what I was going to do
next it was not until some board members
called me the next morning that I even
thought about really coming back um when they asked do you want to come back uh you want to talk about that but like the board did have all of the
Power there now you know what I'm not
going to say that next thing but I I I
think you should continue I think I no I
would I would also just say that I think
that there's a lot of narratives out
there it's like oh well this was
orchestrated by all these other forces
it's not accurate I mean it was the
employees of open AI that wanted this
and that thought that it was the right
thing for Sam to be back
the you know like yeah the thing I will say is uh I think it's important that there is an entity that like can fire me but that
entity has got to have some
accountability too and that is a clear
issue with what happened right Anna you
wrote a remarkable letter to employees
during The Saga and one of the many
reasons I was excited to have you on stage today was just to ask you what were those five days like for you and
why did you step up and write that uh
Anna can clearly answer this if she
wants to but like is this really what you want to spend our time on like the soap opera rather than like what AI is going
to do I mean I'm wrapping it up but but
um I mean go I think people are
interested okay well we can leave it
here if you want no no yeah let's let's
answer that question and we'll we'll we
can move on I would just say uh for
color that it happened the day before
the entire company was supposed to take
a week off so we were all on Friday uh
preparing to you know have a restful
week after an insane year so then you
know many of us slept on the floor of
the office for a week right there's a
question here that I think is a really good one. We are at Davos, and climate change is on the agenda. The question is, well, I'm going to give it a different spin: considering the compute costs and the need for chips, does the development of AI, and the path to AGI, threaten to take us in the opposite direction on climate?

We do need way more energy in the world than I think we thought we needed before. My whole model of the world is that the two important currencies of the future are compute/intelligence and energy: the ideas that we want, and the ability to make stuff happen, the ability to run the compute. And I think we still don't appreciate the energy needs of this technology. The good news, to the degree there's good news, is there's no way to get there without a breakthrough. We need fusion, or we need radically cheaper solar plus storage, or something at massive scale, a scale that no one is really planning for. So it's totally fair to say that AI is going to need a lot of energy, but it will force us, I think, to invest more in the technologies that can deliver this, none of which are the ones that are burning carbon, like all those unbelievable numbers of fuel trucks.

And by the way, you back one or more nuclear...

Yeah. I personally think fusion is either the most likely or the second most likely approach. I feel like the world is more receptive to that technology now, certainly historically not in the US. I think the world is still, unfortunately, pretty negative on fission and super positive on fusion; it's a much easier story, but I wish the world would embrace fission much more. Look, I may be too optimistic about this, but I think we have paths now to a massive energy transition away from burning carbon. It'll take a while. Those cars are going to keep driving, there's all the transport stuff, and it'll be a while until there's a fusion reactor in every cargo ship. But if we can drop the cost of energy as dramatically as I hope we can, then the math on carbon capture just changes completely. I still expect, unfortunately, that the world is on a path where we're going to have to do something dramatic with climate, like geoengineering as a Band-Aid, as a stopgap, but I think we do now see a path to the long-term solution.

So I want to just go back to my question. In terms of moving in the opposite direction, it sounds like the answer is potentially yes on the demand side, unless we take drastic action on the supply side.

There is no... I see no way to manage the supply side without
a really big breakthrough.

Right. Does this frighten you guys? Because the world hasn't been that versatile when it comes to supply, but AI, as you have pointed out, is not going to take its time until we start generating enough power.

It motivates us to go invest more in fusion and invest more in new storage, and not only the technology but what it's going to take to deliver this at the scale that AI needs and that the whole globe needs. So I think it would not be helpful for us to just sit there and be nervous. We're just like, hey, we see what's coming, with very high conviction that it's coming; how can we use our abilities, our capital, our whatever else to do this, and in the process hopefully deliver a solution for the rest of the world, not just AI training workloads or inference workloads?

Anna, it felt like in 2023 we
had the beginning of an almost hypothetical conversation about regulating AI. What should we expect in 2024? Do governments act, does it become real, and what does AI safety look like?

So I think it is becoming real. The EU is on the cusp of actually finalizing this regulation, which is going to be quite extensive, and the Biden administration wrote the longest executive order, I think, in the history of executive orders covering this technology, and it's being implemented in 2024, because they gave agencies a bunch of homework for how to implement and govern this technology, and it's happening. So I think it is really moving forward. But what exactly safety looks like, or what it even is, I think is still a conversation we haven't bottomed out on. You know, we founded this Frontier Model Forum in part...

Yeah, maybe explain what that is.

So for now this is Microsoft, OpenAI, Anthropic, and Google, but it will, I think, expand to other frontier labs. Really, right now all of us are working on safety: we all red-team our models, we all do a lot of this work, but we really don't have even a common vocabulary or a standardized approach. And to the extent that people think, well, this is just industry: this is in part in response to many governments that have asked us for this very thing, namely, what is it across industry that you think are viable best practices?

Is there a risk that regulation starts to discourage entrepreneurial activity in AI?

I mean, I think people are terrified of this. This is why, I think, Germany and France and Italy interjected in the EU AI Act discussion, because they are really concerned about their own domestic industries being undercut before they've even had a chance to develop.

Were you satisfied with your old boss's executive order, and was there anything in there that you had lobbied against?

No, and in fact I think it was really good, in that it wasn't just "these are the restrictions"; it was also "please go and think about how your agency will actually leverage this to do your work better." So I was really encouraged that they actually did have a balanced
approach.

Sam, first time at Davos?

First time.

Okay. You mentioned that you'd prefer to spend more of our time here on stage talking about AGI. What is the message you're bringing to political leaders and other business leaders here, if you could distill it?

Thank you. So I think 2023 was a year where the world woke up to the possibility of these systems becoming increasingly capable and increasingly general. But GPT-4, I think, is best understood as a preview. It was more over the bar than we expected in terms of utility for more people in more ways, but it's easy to point out the limitations. And again, we're thrilled that people love it and use it as much as they do, but the progress here is not linear, and this is the thing that I think is really tricky. Humans have horrible intuition for exponentials; at least, speaking for myself, but it seems like a common part of the human condition. What does it mean if GPT-5 is as much better than GPT-4 as 4 was to 3, and 6 is to 5? What does it mean if we're just on this trajectory now?

On the question of regulation, I think it's great that different countries are going to try different things. Some countries will probably ban AI; some countries will probably say no guardrails at all. Both of those, I think, will turn out to be suboptimal, and we'll get to see different things work. But as these systems become more powerful, as they become more deeply integrated into the economy, as they become something we all use to do our work, and then as things beyond that happen, as they become capable of discovering new scientific knowledge for humanity, even as they become capable of doing AI research, at some point the world is going to change, more slowly and then more quickly than we might imagine. But the world is going to change.

A thing I always say to people is that no one knows what happens next. I really believe that, and I think keeping humility about that is really important. You can see a few steps in front of you, but not too many. But when the cost of cognition falls by a factor of a thousand or a million, when its capability augments us in ways we can't even imagine... One example I try to give people is: what if everybody in the world had a really competent company of 10,000 great virtual employees, experts in every area? They never fought with each other, they didn't need to rest, they got really smart, they got smarter at this rapid pace. What would we be able to create for each other? What would that do to the world that we experience? The answer is that none of us know, of course, and none of us have strong intuitions for that. I can imagine it, sort of, but it's not a clear picture.

And this is going to happen. It doesn't mean we don't get to steer it; it doesn't mean we don't get to work really hard to make it safe and to do it in a responsible way. But we are going to go to the future, and I think the best way to get there in a way that works is the level of engagement we now have. Part of the reason, a big part of the reason, we believe in iterative deployment of our technology is that people need time to gradually get used to it, to understand it. We need time to make mistakes while the stakes are low. Governments need time to make some policy mistakes. And technology and society have to co-evolve in a case like this: the technology is going to change with each iteration, but so is the way society works, and that's got to be this interactive, iterative process. We need to embrace it, but have caution without fear.

And how long do we have for this iterative process to play out?

I think it's surprisingly continuous. If I try to think about discontinuities, I can sort of see one, when AI can do really good AI research, and I can see a few others too, but that's an evocative example. On the whole, I don't think it's about crossing this one line; I think it's about this continuous exponential curve we climb together. And so, how long do we have? Like, no time at all, and infinite.

I saw GPT-5 trending on X
earlier this week, and I clicked, and, you know, it sounded probably misinformed. But what can you tell us about GPT-5, and is it an exponential improvement over what we've seen?

Look, I don't know what we're going to call our next model. I don't know...

When are you going to get creative with the naming process?

I don't want to be shipping, like, iPhone 27, you know; it's not quite my style. But the next model we release, I expect it to be very impressive, to do new things that were not possible with GPT-4, to do a lot of things better. And I expect us to take our time and make sure we can launch something that we feel good about and responsible about.

Within OpenAI, some employees consider themselves to be, quote, "building God." Is that...

I haven't heard that.

Okay.

I mean, I've heard people say that facetiously, but I think almost all employees would say they're building a tool, more so than they thought they were going to be, which they're thrilled about. You know, this confusion in the industry of "are we building a creature, are we building a tool": I think we're much more building a tool, and that's much better.

To transition to something...
Yeah, go ahead.

No, no, you finish your thought.

Oh, I was just going to say: we think of ourselves as tool builders. AI is much more of a tool than a product, and much, much more of a tool than this, like, entity. And one of the most wonderful things about last year was seeing just how much people around the world could do with that tool. They astonished us, and I think we'll just see more and more. Human creativity, and the ability to do more with better tools, is remarkable.

And before we have to start wrapping up: there was a report that you were working with Jony Ive on an AI-powered device, either within OpenAI or perhaps as a separate company. I bring it up because CES was earlier this month, and AI-powered devices were the talk of the conference. Can you give us an update on that, and does AI bring us to the beginning of the end of the smartphone era?

Smartphones are fantastic. I don't think smartphones are going anywhere. I think what they do, they do really, really well, and they're very general. If there is a new thing to make, I don't think it replaces the smartphone, in the way that I don't think smartphones replaced computers. But if there's a new thing to make that helps us do more, better, in a new way, given that we have this unbelievable change... Like, I don't think we quite, I don't spend enough time, I think, marveling at the fact that we can now talk to computers and they understand us and do stuff for us. It is a new affordance, a new way to use a computer, and if we can do something great there, a new kind of computer, we should do that. And if it turns out that the smartphone is really good and this is all software, then fine. But I bet there is something great to be done.

And the partnership with Jony: is that an OpenAI effort, is that another company?

I have not heard anything official about a partnership with Jony.

Okay. Anna, I'm going to give
you the last word. As you and Sam meet with business and world leaders here at Davos, what's the message you want to leave them with?

I think that there is a trend where people feel more fear than excitement about this technology, and I understand that. We have to work very hard to make sure that the best version of this technology is realized. But I do think that many people are engaging with it via the leaders here, and that they really have a responsibility to make sure that they are sending a balanced message, so that people can really, actually engage with it and realize the benefit of this technology.

Can I have 20 seconds?

Absolutely.

One of the things that I think OpenAI has not always done right, and the field hasn't either, is find a way to build these tools, and also to talk about them, in a way that doesn't get that kind of response. One of the best things ChatGPT did, I think, is that it shifted the conversation to the positive, not because we said "trust us, it'll be great," but because people used it and were like, oh, I get this, I use this in a very natural way. The smartphone was cool because I didn't even have to use a keyboard and a phone; I could use it more naturally. Talking is even more natural.

Speaking of Jony: Jony is a genius, and one of the things that I think he has done again and again with computers is figure out a way to make them very human-compatible. I think that's super important with this technology, making it feel like, you know, not this mystical thing from sci-fi, not this scary thing from sci-fi, but this new way to use a computer that you love. I still remember the first iMac I got and what that felt like to me. It was heavy, but the fact that it had that handle, even though as a kid it was very heavy to carry, did mean that I had a different relationship with it, because of that handle and because of the way it looked. I was like, oh, I can move this thing around; I could unplug it and throw it out the window if it tried to, like, wake up and take over. That's nice. And I think the way we design our technology and our products really does matter.