Lester Holt interviews OpenAI's Sam Altman and Airbnb's Brian Chesky
Summary
TLDR In this interview, Sam Altman and Brian Chesky discuss the current state and future of artificial intelligence (AI). They explore how AI is being woven into daily life and its impact across industries. The conversation also covers the moral responsibilities of AI development, questions around data use, and the social and economic changes AI may bring. In addition, they discuss AI's role in politics, policymaking, and elections, and how to ensure the technology is safe and benefits society.
Takeaways
- 🧑🤝🧑 The two guests are friends and have collaborated on some important projects.
- 🤖 Most people today interact with AI in some way, though they may not always realize it.
- 🚀 AI is advancing rapidly, has passed a key inflection point, and will become even more deeply embedded in our daily lives.
- 🔍 AI is being applied in healthcare, for example Color Health using AI for cancer screening and treatment plans.
- 💡 AI's future may not be a single "singularity" but a gradual series of capability gains.
- 🔑 Tech companies, including Airbnb, are entering the AI space by acquiring AI startups or forming partnerships.
- 🌐 Like the internet, AI will become part of nearly every company's infrastructure.
- 🏆 Sam Altman believes AI may be the biggest thing of his career, because it will affect people's lives more profoundly than any previous technology.
- 🤖 AI development raises questions of moral responsibility; its progress must be safe and responsible.
- 🌍 As AI advances, geopolitical competition and cooperation may emerge, requiring a global framework to keep the technology's development healthy and safe.
- 💡 AI development requires joint effort from society, governments, and companies to maximize its benefits while avoiding potential harms.
Q & A
What examples of AI applications did Sam and Brian mention in the conversation?
-They mentioned using AI for cancer screening and treatment plans, and future applications that might help discover cures for cancer. They also mentioned Airbnb using AI to understand users better and offer personalized travel and living experiences.
Has AI development reached a critical threshold, according to the conversation?
-Sam believes AI has crossed a critical threshold, but he also notes there will be many more thresholds as systems gain new capabilities and better performance.
How do Sam and Brian view relationships with other tech companies, such as the Apple partnership and Elon Musk's reaction?
-Sam did not anticipate Elon Musk's negative reaction, while Brian sees it as simply Elon's personal response rather than representative of how other tech companies view OpenAI.
Why does Brian think every tech company needs AI technology?
-Brian believes AI will be completely embedded in everything we do, just like the internet, so every company will need AI, whether through partnerships or its own development plans.
What social changes from AI did Sam and Brian discuss?
-They discussed the enormous social changes AI may bring, including higher productivity, changes in how people live, and potentially profound effects on education, healthcare, the arts, and many other fields.
What impact did Sam's firing from the company he founded have on the AI industry?
-Sam's firing drew widespread attention and discussion across the industry, reflecting the challenges of moral responsibility, regulation, and governance that can arise as AI develops.
How is safety and ethics ensured as AI develops?
-Sam said his company is very careful when releasing systems, ensuring their robustness and safety, and stressed the importance of communication and cooperation with governments, policymakers, and other stakeholders.
How do Sam and Brian view AI's role in the upcoming presidential election?
-They believe AI will be a major technological element of this election; it is essential to ensure accurate voting information, prevent abuses such as deepfakes, and stay vigilant against new forms of misuse.
Will AI development run into legal and ethical issues around data use?
-Sam raised questions about the fairness and legality of data use; they are considering new economic models so that everyone who creates data and knowledge can participate.
How do Sam and Brian view the future of artificial general intelligence (AGI)?
-Sam believes AGI will not arrive in a single pivotal moment but through a series of gradual capability gains. They emphasized the importance of transparent communication with society about each step of the technology's development.
How do Sam and Brian think AI will affect creative and artistic fields?
-They see AI as a powerful tool for artists and creative workers rather than a replacement, one that can assist creation, improve efficiency, and open up new forms of artistic expression.
Outlines
🤖 The Spread of AI and Its Future Outlook
This part of the conversation covers how widespread AI has become and how it affects daily life. Tools like ChatGPT are mentioned; AI has already permeated many services, even if many people don't realize it. Sam believes AI has crossed a key threshold and that more breakthroughs lie ahead. AI applications in healthcare are discussed, such as cancer screening and treatment planning, with the prediction that future AI may help discover cures for cancer. The discussion also covers how AI is being integrated into a wide range of services and how it elevates them.
🛠️ Responsibility and Ethics in Technology
This part focuses on the responsibility and ethics of technology, especially AI. It covers public distrust of AI and the moral responsibility behind its development. Sam recounts conversations with Brian, emphasizing that technology should be seen as a tool, not something that controls us. It also touches on Sam's firing from his own company and his views on AI development, including the need for regulation and public concerns about AI's future.
📰 Controversies Around AI in Society
This section discusses controversies around AI in society, including the problems that can arise from AI voices resembling celebrities and the abuse of deepfake technology. It covers the harm AI can cause to individuals and society, and the stance the industry needs to take to prevent such abuse. It also mentions AI's use in elections and how to prevent the spread of misinformation.
🏛️ AI and Policymaking
This part discusses the relationship between AI and policymaking. Sam and Brian share their views on AI development and how policymakers can shape it. It covers AI's potential impact on elections and how different businesses might be affected by election outcomes. It also stresses the importance of building a global framework to cooperatively govern AI and of not letting the technology outrun society's control.
🧠 AI Training Data and Knowledge Creation
This section discusses the demand for AI training data and how to use it fairly. Training AI models requires vast amounts of data, and the sources and uses of that data are examined in depth. Sam mentions their legal position on data use and how they think about the rights of data creators. It also covers how AI may change the way the internet is used and the economic models behind it.
🚀 AI Development and Teaching Values
This part explores AI development, particularly how to teach AI systems human values. Sam shares how they try to embed specific values into AI models, where those values come from, and how society can participate in the process. It also covers the enormous changes AI may bring and how to ensure those changes are positive.
🌐 AI and Global Cooperation
This section discusses AI's global impact and how countries can cooperate to govern the technology. It mentions establishing a transnational organization or body to build global consensus on AI and prevent its abuse. It also covers AI's potential economic impact and how technological innovation could drive global GDP growth.
🛑 Self-Regulation and Future Planning in AI
This part discusses self-regulation in AI development and how to balance innovation against social impact. Sam and Brian share their views on AI's future, including possible breakthroughs and their potential effects on society. It also mentions the self-imposed limits AI development may require and how to ensure the technology does not outrun society's control.
🌟 Broad Applications of AI and Hopes for It
In the final part of the conversation, the broad applications of AI and expectations for the next five years are discussed. Sam shares the positive feedback he receives about how people use AI tools and expresses optimism that AI will keep helping people accomplish more. The discussion also covers AI's potential in education, art, and scientific research, and how these technologies can help solve social problems.
Mindmap
Keywords
💡Artificial Intelligence (AI)
💡Artificial General Intelligence (AGI)
💡Deep Learning
💡Moral Responsibility
💡Technological Threshold
💡Data Privacy
💡Deepfake
💡Regulation
💡Technological Innovation
💡Values
💡Technological Development
Highlights
Sam and Brian discuss how widespread AI has become in daily life, mentioning tools like ChatGPT.
AI is advancing rapidly and may have crossed a key threshold, with more breakthroughs ahead.
Sam mentions AI applications in healthcare, such as Color Health using AI for cancer screening and treatment plans.
They discuss the ethical and responsibility issues AI development may raise, and public concerns about them.
Brian emphasizes that AI will affect people's lives more profoundly than any previous technology.
Sam and Brian discuss the partnership with Apple and Elon Musk's reaction to it.
Airbnb's acquisition of an AI startup signals that every tech company will need to own or partner on AI technology.
They discuss AI applications across industries, such as Airbnb using AI to better match user needs.
Sam shares his experience at OpenAI, including being fired and rejoining the company.
They discuss AI's role in politics and society, and its potential impact on elections.
Sam and Brian discuss AI's demand for data, and issues of fairness and legality in data use.
They discuss problems AI may bring, such as the potential harm of deepfake technology to individuals and society.
Sam mentions AI's potential to raise productivity and create enormous economic value.
They discuss the question of values in AI development and how to teach AI positive values.
Sam and Brian share their views on AI's future development and its positive impact on society.
They discuss AI's potential applications in education, art, and scientific research.
Sam expresses his hopes for AI over the next five years: that it continues to bring people help and joy.
Transcripts
[Music]
[Applause]
[Music]
[Applause]
well you guys get all the Applause I've
invented nothing Zippo great to see you
guys welcome thanks everybody for being
here very excited about this
conversation uh we'll set this up by
letting folks know you guys are friends
you have your your work is kind of you
know worked in together on some
important projects and some important
things so we're going to get into some
of that as well but if you're wondering why
the two of them are here that's why thank you
so much for your time let me get um
let's start off with kind of a
perspective Sam what percentage of this
audience do you think has in some way
interacted with AI
today I I would bet most
uh I'm not going to hold you to it by
the way in the
90s it's a and most of us don't know
where it's affecting our lives yeah you
know there there are people who use
ChatGPT and you kind of know when you're
using that or not but the number of
people are integrating AI into all of
their other services and taking our GPT-4
and other models that we have and you
know it's sort of
like lifting a lot of services up has AI
crossed a a critical threshold in the
past
year I think
that yes but I think there will be many
thresholds that AI uh crosses you know
we used to Brian actually gave me great
advice about this we used to talk about
we're going to get to this like moment
of AGI and you know it was this very
ill-defined term and I think it never made
sense to think about it that way in the
first place but we used to and now we
think about it is it'll just be this
series of thresholds uh where the
systems will get newer and better
capabilities so you know you
can use ChatGPT today for some things
and you'll be able to use it for much
more helpful tasks in the future um you
know maybe today there are things like
okay uh like for example one of our um
one of our partners Color Health is now
using uh GPT-4 for cancer screening and
treatment plans and that's great and
then maybe a future version will help uh
discover cures for cancer so I think of
it as a succession of thresholds but
definitely the fact that we can talk to
computers in natural language and have
them understand us and help us that's
certainly been a threshold I want to
talk about some things that we've seen
in the news lately and get your reaction
to it um at times you have both made
friends and enemies fairly quickly you
struck a big deal with Apple recently um
Elon Musk was upset and said he wouldn't
allow Apple products at his companies
did you see that reaction coming uh well
I saw it happen but no I didn't I I I
didn't I sort of doubt it will actually
happen um but I didn't predict that are
are you does it represent something
that's happening on the outskirts of
open AI in terms of reaction from other
tech
companies uh no I think that's just like
an Elon
reaction and Brian let me turn to you
Airbnb recently picked up an AI startup
are we at a point now that every tech
company is going to have to have a piece
of this action a partnership or uh its
own development plans yeah I mean I
think that just like now every company
almost in the world is on the internet
AI is just going to be completely
embedded in everything that we do and I
think that one of the things that's
incredible Sam is like Sam used to say
you have to be if you want to be a great
entrepreneur you have to be right about
one big thing in your career and I think
that Sam was right about one of the
biggest things in the history of tech
because this is going to be something
that's going to affect people's lives
more than any technology that we've ever
seen in the past but I think a lot of
the conversation you know we're talking
about AI as this like existential
enigmatic thing and I think one of the
things we're missing is just talking
about the practical ways that people can
benefit their lives I can give you an
example Airbnb but Sam has a lot of
examples so today Airbnb is a way you
like type in a city and you find a home
and you book a home and that's Airbnb
and it's pretty much the way that the
internet's worked for the last 20 years
but imagine in the future um systems
that understand you better that's the
real promise a computer that can
understand you and can ask you like well
who are you Lester like what are your
hopes what are your dreams like where do
you want to travel what do you one day
want to do with your life and then it
could actually understand you and be
more of a Matchmaker really understand
you and match you to people communities
Services experiences anything you want
to be able to travel and live anywhere
in the world and that's kind of how I
think Airbnb can use it but I think almost
every industry can get remade with AI
and I think they can participate but the
stakes are higher here than I mean what
what you talk about is largely
aspirational but with AI you're looking
at some real fears that I think we all
here understand so what does that mean
in terms of the people who are running
this most most of us are just passengers
on this bus we're watching you guys you
know do these incredible things you know
talk about it being compared to the
Manhattan Project and wondering where is
this going and wondering who are the
people behind it can we trust these
people so talk if you can about the
moral
responsibility um and and for all of us
to know these people know people like
you who are making these
changes yeah I mean I I can share um I I
me I I met Sam in 2008 and when I came
to Silicon Valley the word technology
might as well have been like a
dictionary definition for the word good
I mean Facebook is a way to share photos
of your friends YouTube was like cat
videos Twitter was like talking about
like what you're going doing today and I
think there was this General innocence
and I think over time what we realize is
when you take a tool and I think
technology is a tool you know Steve Jobs
one of the things he said is he put a
handle in the back of every computer cuz
he said never trust a computer you can't
throw out the window he said these are
tools and we're meant to dominate them
they don't dominate us and I think one
of the things that happen though is when
you put a tool in the hands of hundreds
of millions of people you know they're
going to use it for ways you didn't
intend and I think we are much more
sober and realistic in this new
generation because I think we learned a
lot of the lessons of the last
generation we learned about how
technology can be used mostly for good
but there's always unintended
consequences and so I think this time
one of the things I've seen Sam do is
he's been very cautious not Pollyannaish at
all about where this technology is going
and and really telling governments there
actually is a need for regulation Sam I
want to get your your take and give you
a chance to talk about your firing you
were you were fired from your own
company
why let me first touch on something that
Brian said in with your earlier question
and then I will very happy to talk about
that
um
I this is going to be a huge change in
society uh I think unlike other
technological Trends um we're sort of
we're aware even if today we're like
okay ChatGPT is this like very helpful
tool and it's you know once I use it I'm
not scared of it um there is a sense
of super understandable anxiety about
where this is going to go what does it
mean if these tools keep getting more
capable at the rate they've been getting
capable at and there's tons of wonderful
things and we could talk about those all
day but there is this what is the future
going to look like even if we solve
every safety problem even if we solve
every um you know misuse problem even if
we figure out the perfect regulatory
regime like what are what are our lives
going to be like when it's not just like
the computer understands us and gets to
know us and helps us and do these things
but we can say like hey computer like
discover all of physics and it can go
off and do that um what does it mean
when we can say like hey start and run a
great company you can go off and do that
so that's a big change uh that's a lot
of trust that we have to earn to be some
of the stewards there will be many other
people working on this of this
technology and we're we're proud of our
track record uh I think if you look at
the systems that we've put out and the
time and Care we've taken we've been
able to get them to a level of generally
accepted robustness and safety that is
well beyond what what people thought we
were going to be able to do when we got
to these initial systems a few years ago
like when you looked at GPT-2 or GPT-3 and
said are we going to be able to make
this safe enough to use a lot of people
thought no but but there's this thing in
there's this the future is like looming
large and we've got to continue to earn
the trust with what we do the systems we
put in the world um and how we how we
have legitimate decision-making over
these systems how we broadly Empower
people with them how we continue to
promote stability in the world in the
face of all this change um and it makes
people very anxious uh and the whole
like the whole board firing me and
coming back thing I mean Brian was an
enormous help during that uh it was
obviously a super painful experience but
I do understand why anxiety levels have
been so are so high uh I and I think the
previous board members like they're
nervous about the continued development
of AI uh had whatever feelings they had
about
me and how we were doing things and
although I super strongly disagree with
what what they think things they've said
since how how they acted uh I think
there are like fundamentally good people
who are nervous about the future and
trying to figure out how we get to a
good outcome um I'm super excited with
the new board they're extremely uh
constructive and helpful and experienced
and strong and it's been a very
productive thing since then but that was
a horrible experience to go through not
not during the moment where it was just
like this is a crazy thing let's figure
out how to undo it and Brian was like
unbelievably helpful but then the period
after that uh where I just had to like
kind of pick up the pieces in this like
state of emotional shock that was that
was really bad you were trying to pick
up the pieces you were picking up the
phone Brian yes explain that well I
remember
um so maybe just to go back in time um
when ChatGPT launched in
late November
2022 it was a phenomenon unlike anything
we'd seen probably since the launch of
the iPhone I have no recollection of
anything like it and we knew overnight
everything was going to change and I
remember meeting with Sam and I said you
know I've been through a little bit of
this rocket ship before and I'm not
going to advise you on the core research
of AI but when it comes to like
marketing and like stakeholder
management and PR and like design and
product and everything that's not that
you're going to go on a rocket ship and
I'm only where I am today because people
believed in me and people helped me and
one of the great things about Silicon
Valley is it's a high trust place where
people will help so I just wanted to be
helpful to him so this goes on for about
a year it's one year later and I get a
text message and it's actually from
somebody else saying Sam was fired from
OpenAI I was like fired and I
immediately texted him and I think his
text back to me like was 5 minutes later
he had just found out he was fired
and he said so brutal and I go what
happened so we get on the phone and he
doesn't know what happened it wasn't
fully explained to him and by the way
his co-founder who was also on the board
was removed from the board and that
seemed to me very suspicious so I got on
the phone with him and Greg and I felt
really comfortable with the
circumstances that this was not a fair
process and I think this should always
be a fair process but especially if
they're Founders because they're very
very difficult to replace and what I
noticed in those first 24 hours was not
a lot of people sticking up for Sam and
I in my darkest times in my crisis have
had people stick up for me and that's
what I wanted to do for Sam and I
basically we talk through things and I
said I think the most important thing
for you to do is just be completely
transparent internally and externally
with what you know and what's happening
but the most remarkable thing and the
thing that made me really want to defend
him was you know you you learn a lot
about people in a crisis if you really
want to know what someone's like see
them in a crisis and at no point in the
5
days this went down did Sam ever even
for a second focus on self-preservation
he was completely I I was like why
aren't you sticking up for yourself like why
don't you care more about yourself that's
what I was saying to him like somebody's
got to stick up for you you're not even
sticking up for yourself and he just was
so focused on the team and what was best
for the team and I think that's what
really made me
you know so fiercely focused on
helping I want to turn Sam if I can turn
to the some of the bad publicity you've
received lately including the dust up
over the voice of Sky one thing that
could help clear up the concern over the
similarity of Sky's voice to Scarlett
Johansson would be to hear from the
actor who you say was hired to be the
voice of sky is that something that you
will
do certainly if she wants to I mean I
know she's made statements through her
agent uh but I'm not I don't I don't
know where I mean anything she wants to
do would certainly be fine with us the
whole thing opens up certainly a larger
question of what do we own in an AI
world uh do we have control over our
likenesses we're seeing uh you know deep
fake porn right now people's you know
heads being swapped um these are harmful
on an individual level how and I know
it's not unique to open AI but how is
the industry going to respond to
this I mean we think the industry needs
to take a super strong stance on that it
is we obviously do uh and there are
other issues related to how this
technology is being used uh to harm
people that we think the industry needs
to take a very strong stance on um we
try to be not only very loud in our
calls for regulation to prevent some of
these misuse cases these misuses which I
think is happening but also to set a
really good example in the products and
services we offer and hold ourselves to
a very high standard were these things inevitable
I mean you you clearly saw the risk
coming as this uh technology was
maturing like deep fakes and stuff deep
fakes yeah head face swapping yeah um it
was inevitable that the technology was
going to be capable of that and so you
know of course there are going to be
systems out there that allow that uh but
that's where I think we society and
governments have a role to say you know
we'll allow
some use cases of technology we're not
comfortable with but in some places we
are going to draw a line and face
swapping deep fake revenge porn is a
great place to draw a line we're nearing
a presidential election as you know
we're seeing some of the deep fakes
already happening there's been talk
about this for years that this would be
a very difficult election what are your
thoughts as you begin to see this stuff
kind of emerge and in terms of your
responsibility your industry's
responsibility to make sure that we're
not being overwhelmed by disinformation
yes so you know this will be I think the
first election where there's not just
the US many other elections this year
where AI is like a major technological
element provenance is really important
accurate polling information and
avoiding some of the issues we've seen
with uh previous technological platforms
and other election Cycles um and you
know preventing things like deep fakes I
I think those are three top of- mind
issues for us uh in this election cycle
I'll also add that there may be other
things ways people try to misuse
this that we're not aware of yet um so
we're we have like a whole monitoring
efforts set up and uh I think we'll need
a very tight feedback loop as we get
closer to the election uh to see if
there's additional areas where people
are trying to abuse the technology while
we're on the topic of the election Brian
I'll let you start what what do you
think will be the impact on your
individual businesses in terms of the
outcome of this
election hard to say I mean Airbnb is
kind of more of a city-by-city
state-by-state thing so the changes in
um Federal administrations don't have
not historically um had a huge impact on
us and we're of course in 220
countries so we're a pretty resilient
business I mean one of the things we saw
during the pandemic is when one part of
our business changes it adapts to some
other part so I don't anticipate a
really big change based on who's who's
who's elected how about you
Sam I do
expect some big impact based off who's
elected but I don't know how to I I
don't know what it'll be it it does seem
to me like AI is going to be an
increasingly important geopolitical
priority in the world um but I'm you
know I I hard for me to say exactly how
it's going to go one of the
things that I've really valued about
Brian so Brian kind of like undersold
what he mentioned earlier in that first
year kind of like what he's done to help
but when ChatGPT started taking off and
everything just went crazy for me a lot
of people reach out and say oh I'd love
to help you I can do this I can do that
and you know everyone's I think they
mean it when they say it but everybody's
just busy um Brian was like the person
who would just sit down with me for like
3 hours every other week and like give
me a list and say Here's the five things
you got to do now here's where you're
behind here's what you're screwing up
here's what you got to proactively do
here's what you got to think about um
and it's basically like almost always
right and uh I learn to just like always
shut up and follow the advice um
one of the things that Brian started
saying
uh more recently uh is that you're
probably not thinking enough about
politics and policy and what that's
going to mean for how the world thinks
about Ai and here's the people you need
to hire here's the here's what it means
to like you know map this out and think
about a strategy here here's what you
should do and definitely not do and uh
that's been like super helpful and do
think for our business it's going to be
really important and I think one of the
things Lester is that you know I
remember coming to Silicon Valley we
didn't think these platforms would have
the impact on society that they we now
know they have and so I think the
mindset that Sam has and even the
questions you're asking him probably
weren't asked of tech leaders 15 years
ago I think the whole industry has
changed the whole conversation has changed
like Sam has built out much more of
a team much earlier than the big tech
companies would have around policy and
stakeholder management I want to ask
about one of the things we've learned in
your research and developing ChatGPT
and others is the requirement of data to
train up these models it's an
insatiable appetite as it as it appears
has it changed how you view what is fair
use and whose material copyrighted material
you can
use first of all I don't think we know
yet what the future of how these models
get smart is going to look like you know
is it that we just need more and more
data
forever doesn't feel to me like likely
to be right you know if you think about
what a human can learn from Reading one
textbook it's very different than what
it takes these AI models for now so I I
expect and also there comes a point
where to like invent new science you
need to just sit there and think and run
some experiments but it's not in any
textbook because it's new so I I expect
that the future of how we think about
training data um and what it takes to
make these models really capable is it
going to be a roadblock though in the
development of these products that's
what I was trying to say I I you know
this is like science we don't know for
sure I think it won't be um now that
said uh the issue of like fair use and
how to think about how people who create
data create knowledge create you know
Wonderful
books I think although like from a legal
perspective we're confident in our fair
use position now that we see where this
may evolve um we need to figure
out new economic models where the whole
world gets to participate and I think
this goes beyond just people who have
data that we train on but also uh and
we've you know found many different ways
to license it and do different things
but also the people that provide the
feedback to the models the people who
like go off and create great realtime
news that maybe the model doesn't train
on but you want to display it um at the
time and that there's a lot of work that
goes into that uh and you know I I think
maybe AI is going to not super
significantly but somewhat significantly
change the way people use the internet
and if so you can see some of the
economic models of the past needing to
evolve uh and I think that's a broader
conversation than just training data but
it's sort of like content in general
surfaced via AI I want to ask you about
artificial general intelligence that's
taking up the game
considerably if I understand it
correctly that's when you get to the
point that the computers can do whatever
we can do is that a fair summation you
know that I I I think I was wrong to
initially think about it as this one
moment as we talked about but uh it does
seem to me and now I think people use
AGI to mean all sorts of things it
it does seem to me that trying to sort
of road map out for the world where we
think the significant increases in
capability will be um can do what you
know people can do can create new
science can what whole companies can do
uh that feels like it'd be very useful
for the industry to sort of agree on so
that we could have these conversations
in a little bit more of a disciplined
way and that's one of the things we
talked about is like just operating
transparently letting people know that
it's probably not this one Promethean
moment where it goes from AI to AGI that
there's many many steps just like the
story of technology and that it's really
important that we bring Society along
and that we're not operating in this
black box and people think there's only
a few people controlling the future that
were transparent with other developers
and computer scientists and researchers
and policy makers about these are steps
we're going this is what we're seeing
and this is what we think the next four
steps look like but isn't but isn't this
a race on a different level the stakes
are so high I mean are are you do
consider yourself in a race and do you
think it's one you'll win to get to the
point of artificial general intelligence
I don't think of it as a race I
understand why that's like a very
compelling dramatic way to to talk about
it I I think that
there may be a race between nation
states um at some point but the
companies that are developing this now I
think everyone feels the stakes the need
to get this right I also think to to
Brian's point that it's not there's not
this Milestone we're all racing towards
it is this it is this continual
evolution of Technology where we melted
sand and figured out how to like turn it
into transistors and then figured out
how to like build an operating system
and do a certain kind of programming and
we made it bigger and bigger and then we
figured out how to like train these
systems that are sort of smart in some
ways but they're not off like running as
these autonomous things they're tools
that we're using to do more than we
could before in the way that we used
computers to do more than we could
before without Ai and in the way we used
machines in the industrial revolution to
do more than we could before and the way
we used agriculture to be able to have
time and space to do more things than we
could before
and and I don't think it's this race to
a milestone it's this ongoing the next
step and the next one and the next one
and the tools are going to get better
and better but what happens is it's not
like for sure technology is not neutral
and tools are not inherently neutral
things but the impact we can have by
building the tools is important we want
to get that right people are going to go
use these tools to invent the future
that we all collectively live in and
what one person can already do now
before ChatGPT existed is an impressive
leap and by the time we get to GPT-6 or
7 what one person can do will be
incredibly uh increased and I'm very
excited for that like I think that is
that is the story of the world getting
better we make technology um people use
it to build new things Express their
creative ideas and Society improves yeah
when you when you talk about these
programs though um and when you give
them the ability to do what we do we
also have a set of values different sets
of values we view common decency in a
not so common way sometimes how do you
teach that to a computer in a way that
won't be harmful how do you teach values
that are
positive one of the things that has
surprised me and I don't want to say
this gets us like this solves the whole
AI alignment problem um but at our
current levels of systems uh our ability
to teach a Model A Certain set of values
and to behave in a certain way um is way
better than I thought it was going to be
at this point now there's a harder
question which is who gets to decide
what those values are um who gets to
decide what the defaults are how much an
individual user can uh sort of customize
them within those broad bounds and as an
early step there we put out this thing
maybe a month or two ago called the Spec
where we tried to say um here is our
desired Model Behavior here are the
values we want our model to follow and
that way people can at least tell if
it's a bug or intentional when it does
something that they don't like and over
time Society can debate what those
values are and we can adapt to it um so
I'm very heartened by our technical
progress on this topic but man writing
that set of values or getting Society to
debate and agree on what those set of
values should be that's a much harder
Challenge and Brian you as you've talked
about you've given Sam um advice from
time to time I I I read somewhere I
don't have the exact quote in front of
me at least I can't find it right now
but to the notion of go for it and
figure it out later
I don't what is the quote it's it's the
idea that you you have believe that you
need to go for it when it comes to this
kind of research are there brakes that
should be put
on well yeah I mean I think if you
imagine you're in a car the faster the
car goes the more you need to look ahead
in front of you and you need to
anticipate the corners and I think that
we acknowledge that this technology is
so so powerful that I think this is why
we're like being so thoughtful I mean
people really are agonizing over how to
treat these systems and I do not
remember us doing this in 2007 2008 so I
do think it's a very very different time
I mean one of the things that like Sam
and I talked about was bring other
stakeholders in early and one of the
things we did last year was he went on a
tour around the world meeting with
people it was mostly I think a way to
get feedback from people educate people
and really get feedback so I think I
think the key Point Lester is we never
go so fast that we leave Society behind
that we only go as fast as to bring
bring everyone along and I think that if
everyone here could feel like they could
participate and they could have their
input into it then I don't really think
there's a huge thing to fear I think the
thing to fear is something we don't
understand we're left out of and
something that runs away from us that we
can't control and so that's the future
we don't want to live in also it's quite
interesting if you say the word AI it
can be scary you say Chachi BT it
doesn't sound as scary because it's a
very tangible tool so I think we need to
also just focus on like that which is in
front of us and how can we help people
there's a lot of problems right now and
open AI I mean it can lead to a lot of
scientific research and Discovery um
Chach PD can be an incredible tool for
artists um you know Airbnb we can think
we think it can really bring people
together we're living in this huge
epidemic of Lon we can use this to help
bring people together at the end of the
day it's not the technology it's the
people with the technology it's always
comes down to the people their values
and are they good people the way I sort
of think about this is um we need to
learn how to make safe technology we
need to figure out how to build Safe
products and that in that includes like
an ongoing dialogue with Society about
hey this has this impact I didn't expect
or don't want and also you're not
letting me do this thing that's really
important for this reason you didn't
understand so the way that we talk to
the broader world and the people that
use and impact by our products and
impacted by our products and let let
them reflect what they want and then
also like a safe operating plan which is
we get better and better at predicting
capabilities um research is of course an
open question you don't always know
where it's going to go but before we
start training a new model we'd like to
be able to say here here are the
dangerous capabilities that we think
could happen we have a preparedness
framework to test them sometimes this
takes a very long time uh with gp4 for
example we had about eight months
between when we finished training when
we released it including lots of like
external consultation in red teaming um
future models may take even longer but
it is very important to get the feedback
from society one thing that I don't
think is good is to let a huge
capability overhang build
up uh and then we haven't had that
feedback loop with Society so we we we
we do need to figure out how to balance
that but yeah you know taking the time
to get it right is very important are
you ever inclined, or do you think you'd ever be inclined, to back up, to see the future and find it is maybe as scary as some people have suggested? Are you prepared to hit that moment where you have to take a step back, even as your competitors may want to move forward?

For sure. There are things that we have built and chosen not to release, or have held back for long periods of time. There are plenty of other companies that would release things that we won't. We're not going to get every decision right, of course, and we also may at some point deploy something and need to take it back. But there will also be things that we just don't deploy.
We talk about these scary images. Did it help when you compared where you are with AI to the Manhattan Project, the race to build an atomic weapon? Was that helpful as you try to make your
case? I mean, we try to give a number of historical analogies because we think it is important. We may be wrong, we may be right, but it's important for us to tell society what we believe the level of importance of this technology is. There's no perfect historical analog for any new technology, so we can say there were some things about the Manhattan Project that are like what we're doing now, some things about the Apollo program, some things about the iPhone, some things about that iMac with the handle, which I also really loved, some things about the internet, some things about the Industrial Revolution. But what I think is important is to say: here are the parts where we can look to a historical analogy, and here are the parts where we can't. The shape of this technology, the decisions, and the impact are fundamentally a little bit different from anything before.

I think
it's different from the Manhattan Project: it's not a race, it's not going to be done in secret, and I think nations can collaborate. There could be a transnational kind of group or body that could align everyone, to make sure we're all on the same page, which would be best for society and, frankly, probably best for entrepreneurs, so they don't have to comply with 200 different laws. We think it's super important to get some sort of global framework and cooperation; I think we're really going to need that.

One of you mentioned nation states. Is there a risk of nation states taking this technology and using it in a dangerous way
or? Absolutely. I think you always have to be really, really careful about whose hands you're putting this technology in, and it goes back to some of the things Sam's thinking about. One of the things I know they developed early on and chose not to release is voice cloning: there's technology already where you can basically capture someone's voice, but obviously that would be very, very dangerous, because you can imagine how it could compromise elections and pose a major security risk. So part of it is just thinking about whose hands these tools could end up in, and whether, if you let the genie out of the bottle, it could get too dangerous, and being very thoughtful about that.

And Sam,
according to one report, you speculated artificial general intelligence could accrue as much wealth as $100 trillion, wealth that you said you would then redistribute. Was that an accurate quote, and do you want to expand on it?

I think the point I was trying to make was that I thought it could roughly double the world's GDP, which feels reasonable to me and certainly would be in line with other technological revolutions. We do think this is just going to be a massive driver of productivity, and already at this early stage we're seeing what people are doing with it to vastly improve products and
services. Do you understand how that would sound to a lot of people, though?

For sure, of course. But this is where historical analogies are helpful, and this is where it helps to look at the chart of world GDP over time. If world GDP can grow at 7% a year, which sounds hugely fast but with a technological shift like this is maybe not that far away... I'm always bad at doing this in my head, but I think that's only about 10 years to double.
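The "10 years to double" figure checks out with a quick compound-growth calculation (an illustrative sketch, not something from the interview): at 7% annual growth, doubling takes ln 2 / ln 1.07, which is about 10.2 years.

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for a quantity growing at the given compound annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 7% annual world-GDP growth doubles output in just over a decade,
# matching the back-of-envelope "only 10 years to double".
print(round(doubling_time(0.07), 1))  # → 10.2
```

This is the same arithmetic behind the "rule of 70" shortcut: 70 divided by the growth rate in percent (70 / 7 = 10) gives roughly the same answer.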
So I think it is worth taking the potential of this technology to do enormous good very seriously, and I think we can now see more of what that looks like as people adopt the tools.
Preview for us, if you will, GPT-5. What will the leaps in technology be, and does it put you on a straighter path to where you want to be?

Does it put us on a what path? I'm sorry, a what path?

Does it put you on a straighter path in terms of your goals?
So, we don't know yet. We're optimistic, but we still have a lot of work to do on it. I expect it to be a significant leap forward, though. A lot of the things GPT-4 gets wrong: it can't do much in the way of reasoning, and sometimes it just totally goes off the rails and makes a dumb mistake that even a six-year-old would never make. I expect it to be much, much better in those ways, and to be usable for a much wider variety of more helpful tasks.
And it does go off the rails sometimes. Is that a result of, to go back to where we were, the lack of data, the shortage of data?

I think it's many things together. We're still just so early in developing such a complex system. There are data issues, there are algorithmic issues, the models are still quite small relative to what they will be someday, and we know they get predictably better. So it's more that there are many things we need to go improve, all of them, and we're still just so early in the technology. You know, the first iPhone was still pretty buggy, but it was good
enough to be useful for people.

Yeah, and I don't think things are going to change as much in the world in the next couple of years as people think. It's not linear: things are going to change slowly and then probably all of a sudden, and everyone's still trying to figure out how to use this technology. If you take your phone, look at your home screen, and ask yourself, a year and a half after ChatGPT launched, how many apps are fundamentally different because of AI? Very few of them are. So we're still in a world where we're developing a lot of the computation with Nvidia, Sam and his team are developing the models, and a lot of the change to society is going to happen when people build the applications on top of those models. And there are so many uses for it. One of the big use cases we're talking about is scientific discovery: what this can do for drug research, for some of the biggest kinds of ills in society. There's a lot this can do in education; we think it can essentially give everyone around the world access to tutors. And for creative people, I know there's a lot of fear that artists will be replaced, but I think if artists participate (I went to design school), this is a technology that they can use. So we can go down the list, and I think there are going to be a lot of really exciting opportunities in the next three to five years.

Where do you want to be in five years,
Sam?

Further along the same path. One of the most fun parts of the job is getting tons of email every day from people who are using the tools in these amazing ways: "I was able to diagnose this health problem that I'd had for years; I couldn't figure it out and it was making my life miserable, and I just typed my symptoms into ChatGPT, got an idea, went to see a doctor, and I'm totally cured." Or, "I've been trying my whole life to learn these things and couldn't do it, and I got ChatGPT to be a tutor for me." Or, "I'm three times as productive as a developer, and I'm doing these amazing things." Or it's a scientist using it. I love getting those messages; I love how much people love ChatGPT, I really do. And five years from now, I just hope there's a lot more of that. I hope we have put a tool into the world that continues to delight people and lets them do more and be their best at
whatever they're doing.

Well, we'll be having more conversations like this down the road, certainly, as you go down your path. But I want to thank both of you, Sam Altman and Brian Chesky, for taking the time and being with us here. A great conversation.

Thanks for watching. Stay updated about breaking news and top stories on the NBC News app, or follow us on social media.