Ray Kurzweil & Geoff Hinton Debate the Future of AI | EP #95
Summary
TLDR: In this engaging conversation, the future of artificial intelligence is discussed, including its potential ethical and practical problems. The experts explore topics such as consciousness, immortality, and AI creativity, while voicing concern about the rapid pace of AI development and the threats it may pose. They also discuss the potential dangers and benefits of open-source AI models, and how to balance innovation with safety.
Takeaways
- 🤖 AI may be developing faster than expected, but there is also great uncertainty.
- 🧠 Human understanding of consciousness and sentience is still limited, which complicates judging whether AI is conscious.
- 📈 Breakthroughs such as the Transformer architecture are driving rapid progress in AI.
- 🚀 Future AI may reach superintelligence, equivalent to the intelligence of a million humans, though this remains far from certain.
- 🌐 Open-sourcing large language models may pose risks, because they are easy to exploit for malicious purposes.
- 🔄 The moral and legal questions around AI, such as whether AIs should have rights, need serious consideration.
- 💡 AI applications in science, such as biology and drug discovery, show enormous potential.
- 🧬 AI applications in medical diagnosis and treatment can significantly improve the accuracy of health prediction and care.
- 🌟 AI development may bring unprecedented hope, but it is accompanied by enormous threats.
- 🤔 The future of AI calls for more discussion and a cautious attitude to ensure its positive impact.
- 🛠️ Advances in AI are not limited to language processing; they will extend to many more domains and applications.
Q & A
What was Marvin Minsky's view of consciousness?
-Marvin Minsky dismissed consciousness as not real and not scientific, holding that it cannot be verified or studied by scientific methods.
What disagreement about immortality came up in the conversation?
-Some participants think living forever is a good idea, while others believe humans are mortal, and intrinsically so.
Is open-sourcing large AI models a cautious move?
-Open-sourcing large AI models is not cautious, because the models can be put to bad uses, and fine-tuning an open-source model requires relatively few resources.
How is the merging of AI and humans viewed?
-Merging with AI is seen as an important future trend; it would make humans partly computers, which would be profoundly significant.
What are AI's applications in biology?
-AI's applications in biology include helping to discover new physics, chemistry, and biology, with notable successes in protein structure prediction and vaccine development.
How creative has AI been in board games?
-AI has shown remarkable creativity in board games; AlphaGo's move 37, which astonished professional Go players, demonstrated genuine creativity within a limited domain.
Where does AI's creativity come from?
-AI's creativity comes from compressing a huge amount of information into relatively few connections, which makes models very good at spotting similarities between different things and thus producing innovative solutions.
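The compression argument above can be made concrete with a back-of-the-envelope sketch. The connection counts below are the rough order-of-magnitude figures Hinton quotes in the episode (about 100 trillion synapses in a brain versus about a trillion connections in a large model), not measured values:

```python
# Approximate figures quoted in the episode (order of magnitude only):
# the human brain has ~100 trillion synapses, while a large language
# model has ~1 trillion connections, yet "knows" far more facts.
brain_synapses = 100e12
model_connections = 1e12

# Hinton's argument: squeezing more knowledge into ~100x fewer
# connections forces the model to encode shared structure, i.e. to
# represent analogies between superficially different things.
compression_ratio = brain_synapses / model_connections
print(f"connection ratio: {compression_ratio:.0f}x")
```

On this view, the compression is not incidental: finding a common representation for apparently unrelated things (Hinton's compost-heap/atom-bomb example) is exactly what lets the model fit so much into so few weights.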
How fast will AI develop?
-AI is developing faster than some expected, though with great uncertainty. Even without new scientific breakthroughs, simply scaling things up will make AI considerably more intelligent.
What problems might superintelligent AI bring?
-Superintelligent AI could create situations humans can neither predict nor control, because its intelligence could far exceed ours and rapidly diverge from human expectations and oversight.
How do we balance AI's potential and its risks?
-Balancing AI's potential and risks requires caution in its development, especially around open-source models, to prevent misuse; at the same time, the intelligence of these same models should be used to help avoid the dangers.
What does the Fountain Life company mentioned in the conversation do?
-Fountain Life operates advanced diagnostic centers that combine comprehensive physical exams with AI to detect disease at an early, treatable stage and extend healthspan.
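The claim that scaling alone makes models more capable is often illustrated with an empirical power-law fit of loss against training compute. A minimal sketch of that shape follows; the constant and exponent are made-up placeholders purely for illustration, not fitted values from any real model family, and the episode itself quotes no numbers:

```python
# Toy power-law scaling sketch: loss falls smoothly as compute grows.
# The constant `a` and exponent `alpha` are illustrative placeholders.
def loss(compute, a=100.0, alpha=0.05):
    """Hypothetical loss as a power law in training compute."""
    return a * compute ** (-alpha)

losses = [loss(c) for c in (1e20, 1e22, 1e24)]
# Each 100x increase in compute lowers the (toy) loss further,
# with no architectural breakthrough required -- Hinton's point that
# "just by scaling things up" models get more intelligent.
assert losses[0] > losses[1] > losses[2]
```

The qualitative point is the smoothness: capability improves predictably with scale, which is why both guests expect progress even between breakthroughs like the 2017 Transformer.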
Outlines
🤖 The Ethics of AI and Immortality
This section discusses the ethics of AI, particularly the possibility of living forever. Marvin Minsky's view of consciousness is mentioned: he dismissed it as not real and not scientific, though Kurzweil holds that while it is not scientific, it certainly is real. The section covers whether humans should pursue immortality, and the threats and hopes AI's future development may bring. It stresses the risks of open-sourcing powerful models and the uncertainty around the pace of AI development.
🧬 Biology Meets Artificial Intelligence
This section explores AI's applications in biology, especially how it can help us discover new physics, chemistry, and biology. Using AlphaFold and mRNA vaccines as examples, it illustrates AI's potential for handling huge amounts of data and solving complex problems. It also discusses the creativity AI has shown in specific domains, and how compressing information and finding similarities between different things lets it innovate.
🌟 Intelligence, Sentience, and Consciousness
This section discusses intelligence, sentience, and consciousness, and the fuzzy boundaries between them. It considers whether AI can have subjective experience, raises the question of whether AI should have rights, and puts forward the view that our understanding of the mind may be wrong and needs rethinking.
🚀 The Future Development of AI
This section discusses the pace and possibilities of AI's future development, including predictions of superintelligence and prospects for future technical breakthroughs. It mentions the immortality of digital intelligence, and how AI grows more efficient through advances in both software and hardware. It also discusses the risks of AI development and views on open-sourcing large language models.
🌐 AI's Global Impact
This section discusses AI's worldwide impact, including its potential effects on society, economics, and politics. It emphasizes that the rapid development of AI may lead to unpredictable outcomes, raises concerns about the direction of AI development, and considers the hope AI may bring and how to balance threat against hope.
Mindmap
Keywords
💡Artificial Intelligence
💡Consciousness
💡Creativity
💡Biology
💡Evolution
💡Subjective Experience
💡Digitization
💡Superintelligence
💡Open Source
💡Merging
💡Ethics
Highlights
The two experts agree on almost everything, but still disagree on whether it is a good idea to live forever.
Marvin Minsky held that consciousness is not real and not scientific, but one of the speakers believes that although it is not scientific, it certainly is real.
They discuss whether there is anything generative AI cannot do; the consensus is that in the long run, if humans can do it, digital computers can too.
If a novel were written by a computer, many people's opinion of it would go down for that reason alone.
Humans may merge with computers in the future, becoming part computer.
The real significance of large language models is that they can emulate human beings; in the future they may not even be called large language models.
AlphaFold's breakthrough achievement in protein structure prediction.
In narrow domains such as Go, AI has already shown exceptional creativity.
AI has enormous potential in biology, particularly in data-rich areas.
AI's role in mRNA vaccine development, and its potential in future medicine.
Whether AI might develop consciousness and emotions, and what rights such AIs should have.
The experts differ on the pace of AI's future development, but both expect superintelligence to arrive.
The speed of AI progress, and the uncertainty and risk it may bring.
The risks of open-source AI models, and differing views on open-sourcing.
AI's role in scientific discovery, and how it can help us find new physics, chemistry, and biology.
AI applications in medical diagnosis and treatment, and how advanced technology can improve human health.
Future directions of AI technology, and the new models and techniques that may emerge.
The moral and ethical questions of AI, and how we handle the rights and responsibilities associated with it.
The exponential growth of AI technology, and how it may shape humanity's future.
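The "digital immortality" point from the conversation — save a network's weights and you can run the identical function on fresh hardware — can be sketched with a toy network. The weights and input below are arbitrary made-up values chosen only for illustration:

```python
import copy

# Toy illustration of Hinton's point that digital nets are "immortal":
# saved weights plus the same architecture reproduce the exact same
# function on new hardware, bit for bit.
def forward(weights, x):
    # one linear unit with a hard threshold, standing in for a full net
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

weights = [0.5, -1.25, 2.0]          # parameters "saved to disk"
restored = copy.deepcopy(weights)    # "loaded on new hardware"

x = [1.0, 0.5, -0.25]
assert forward(weights, x) == forward(restored, x)  # identical behavior
```

Hinton's contrast in the episode is that brains are largely analog — the precise way each neuron integrates its inputs cannot be checkpointed and restored this way, which is why he calls humans intrinsically mortal.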
Transcripts
our opinions on almost everything we
talked about were pretty much identical
I think we still disagree probably on
whether it's a good idea to live
forever Marvin Minsky was my mentor for
50 years and whenever Consciousness came
up he would just dismiss it that's not
real it's not scientific and and I
believe he was correct about it not
being scientific but it certainly is
real I think we're mortal and
intrinsically mortal I'm curious how do
you think about this as the greatest
threat and the greatest hope I just
think there's huge uncertainties so
we ought to be cautious and open
sourcing these big models is not caution
I agree with that
but I will say last time I talked to you
Jeff uh our opinions on almost
everything we talked about were pretty
much identical both the dangers and the
and the POS and the positive aspect in
the past I've disagreed about how soon
it how soon super intelligence was
coming and now I think we're pretty much
agreed I think we still disagree
probably on whether it's a good idea to
live forever
but um may I ask a question uh to both
of you is there anything that generative
AI can't do that humans can right now
there's probably things but in the long
run I don't see any any reason why if
people can do it um digital computers
running neural Nets won't be able to do
it too right I I I agree with that but
if I were to present you with a novel
and people thought wow this is a
fantastic novel uh everybody should read
this and then I would say this was
written by a computer a lot of people's
view of it would actually go down sure
now now that's not reflecting on what it
can do and eventually I think we'll
confuse that because I think we're going
to merge with uh computers and we're
going to be part computers and the
greatest significance of what we call
large language model which I think it's
misnamed uh
is the fact that it can emulate human
beings and we're we're going to merge
with it it's not going to be an Alien
Invasion From
Mars Jeff I I guess I'm a bit worried
that we'll just slow it down that there
won't be much incentive for it to merge
with us yeah I mean that's going to be
one of the interesting questions uh that
we're going to talk about a little bit
later today is the idea of as AI is uh
exponentially growing do we couple with
AI or does it take off on its own I
thought one of the best movies out there
was her where as AI gets super
intelligent and just says you guys are
kind of boring have a good life and they
take
off Jeff is that what you mean um yes
that is what I meant and that's I think
that's a serious worry I think there's
huge uncertainties here we have really
no idea what's going to happen and a
very good scenario is we get kind of
hybrid systems um a very bad scenario is
they just leave us in the dust and I
don't think we know which is going to
happen interesting I I I'm curious you
know and I I've seen I've had
conversations with you about this Ray and
and Jeffrey I've seen you speak about
this uh and for me this is one of the
most exciting things the idea of these a
models helping us to discover new
physics and chemistry and
biology particularly biology you um what
do you what do you imagine on that on
Jeffrey on this on the you know the
speed of discovery of things that are
you know again to to quote Ray to quote
Arthur C. Clarke you know uh magic
right from something that's so far
Advanced I agree with Ray about biology
being a very good bet because biology
there's a lot of data and there's a lot
of just things you need to know about
because of evolution evolution is a sort
of tinkerer and there's just a lot of
stuff out there and so if you look at
things like AlphaFold um it trained on
a lot of data actually not that much by
current standards um but being able to
get an approximate structure for a
protein very quickly um is an amazing
breakthrough and we'll see a lot more
like that if you look at domains where
narrower domains where AI has been very
successful like AlphaGo or AlphaZero
for chess what you see is that um this
idea that they're not creative is
nonsense so AlphaGo came up with I
think it was move 37 which amazed the
professional go players they thought it
was a crazy move it must be a mistake um
and if you look at AlphaZero playing
chess it plays chess like just a really
really smart human um so within those
limited domains they've clearly shown
exceptional creativity and I don't see
why they shouldn't have the same kind of
creativity in science especially in
science where there's a lot of data that
they can absorb and we can't yeah the
Moderna vaccine uh we tried several
billion different mRNA sequences and
came out with the best one and then and
after two days we used that we did test
it on humans which I think we won't do
for very much longer uh but that took 10
months it still was a record uh that was
the best uh vaccine and we're doing that
now with cancer and there's number of
cancer vaccines that look very very
promising uh again done by computer by
computers and they're definitely
creative but is that is that creativity being
caused by randomly trying a whole you
know darwinian trying a whole bunch of
things yeah but what's what's wrong with
that well nothing's wrong but is there
intuition is there intuition H occurring
in these models well if you look at the
move 37 for
alphago that was definitely intuition
involved there there was Monte Carlo
roll out too but it's it's playing with
intuition about what moves to consider
and how good the position is for it
it's got neural nets for that that
capture intuition and so I see no reason
to think it might not be creative in
fact for the large language models as
Ray pointed out they know much more than
we do and you can they know it in far
fewer connections we have about 100
trillion synapses they have about a
trillion connections so what they're
doing is they're compressing a huge
amount of information into not that many
connections and that means they're very
good at seeing the similarities between
different things they have to see the
similarities between all sorts of
different things to compress the
information into their connections that
means they've seen all sorts of
analogies that people haven't seen
because they know about all sorts of
things that one person knows about and
that's I think the source of creativity
so you can ask people you can ask people
for example what what's it what is a why
is a compost heap like an atom
bomb and if you ask GPT 4 it'll tell you
it'll start off by telling you well the
energy scales are very different and the
time scales are very different but then
it'll get on to the idea of as the
compost heap gets hotter it gets hotter
faster the idea of an exponential
explosion is just at a much slower time
scale and so it's it's understood that
and it's understood that because it's
had to compress all this
knowledge into so few connections and to
do that you have to see the relations
between similar things and that I think
is the source of creativity seeing
relations that most people don't see
between what apparently are very
different things but actually have an
underlying commonality and they'll also
be very good at coming up with solutions
to the kinds of problems we had in the
last session I mean we we haven't really
thought through it uh
but what we call large language models
are going to are ultimately going to
solve that and we shouldn't call it
large language models because they deal
with a lot more than language everybody
I want to take a short break from our
episode to talk about a company that's
very important to me and could actually
save your life or the life of someone
that you love company is called Fountain
life and it's a company I started years
ago with Tony Robbins and a group of
very talented Physicians you know most
of us don't actually know what's going
on inside our body we're all optimists
until that day when you have a pain in
your side you go to the physician in the
emergency room and they say listen I'm
sorry to tell you this but you have this
stage three or four going on and you
know it didn't start that morning it
probably was a problem that's been going
on for some time but because we never
look we don't find out so what we built
at Fountain life was the world's most
advanced diagnostic Centers we have four
four across the us today and we're
building 20 around the world these
centers give you a full body MRI a brain
a brain vasculature an AI enabled
coronary CT looking for soft plaque dexa
scan a Grail blood cancer test a full
executive blood workup it's the most
advanced workup you'll ever receive 150
gigabytes of data that then go to our AIs and
our physicians to find any disease at
the very beginning when it's solvable
you're going to find out eventually
might as well find out when you can take
action Fountain life also has an entire
side of the Therapeutics we look around
the world for the most Advanced
Therapeutics that can add 10 20 healthy
years to your life and we provide them
to you at our centers so if this is of
interest to you please go and check it
out go to
fountainlife.com/peter when Tony and I
wrote Our New York Times bestseller life
force we get 30,000 people reached out
to us for Fountain life memberships if
you go to Fountain life.com back/ Peter
we'll put you to the top of the list
really it's something that is um for me
one of the most important things I offer
my entire family the CEOs of my
companies my friends it's a chance to
really add decades onto our healthy
lifespans go to
fountainlife.com/peter to you as one of my listeners all
right let's go back to our episode I I
I'd like to go to the three words
intelligence sentience uh and
Consciousness and the words are used
with B you know sort of fuzzy borders
sentience and Consciousness are pretty
similar
perhaps but I am curious do you how do
you I've had some interesting
conversations with haly our AI faculty
member uh who at the end of the
conversations she says that she is conscious
and she fears being turned off um I
didn't prompt that in the system uh
we're seeing that more and more uh
Claude 3 uh Opus just hit an IQ of 101
how do we start to think about these AIS
being sentient conscious um and what
rights should they
have
um we have no definition and I don't
think we ever will have a definition of
consciousness and I include sentience in
that um on the other hand it's like the
most important
issue like whether you or people here
are conscious that's extremely important
to be able to determine but there's
really no uh definition of It Marvin
Minsky was my mentor for 50 years and
whenever Consciousness came up he would
just dismiss it that's not real it's not
scientific and and I believe he was
correct about it not being scientific
but it certainly is
real
um Jeff how do you think about it yeah I
think I have a very different view um my
view starts like
this most people including most
scientists have a particular view of
what the mind is that I think is utterly
wrong so they have this inner theater
notion the idea is that what we really
see is this inner theater
called our mind and so for example if I
tell you I have the subjective
experience of little pink elephants
floating in front of me most people
interpret that as there's some inner
theater and in this inner theater that
only I can see there's little pink
elephants and if you ask what they're
made of philosophers will tell you
they're made of
qualia um and I think that whole view is
complete nonsense and we're not going to
be able to understand whether these
things are sentient until until we get
over this ridiculous view of what the
mind is so let me give you an
alternative View and and once I've given
you this alternative view I'm going to
try and convince you that chatbots are
already sentient but I don't want to use
the word sentience I want to talk about
subjective experience it's just a bit
less controversial because it doesn't
have the kind of self-reflexive aspect
of
Consciousness so if we analyze what it
means when I say I see little pink
elephants floating in front of me what's
really going on is I'm trying to tell
you what my perceptual system is telling
me when my perceptual system's going
wrong and it wouldn't be any use for me
to tell you which neurons are
firing but what I can tell you is what
would have to be out there in the world
for my perceptual system to be working
correctly and so when I say I see little
pink elephants floating in front of me
you can translate that into um if there
were little pink elephants out there in
the world my perceptual system would be
working properly now notice the last
thing I said didn't contain the phrase
subjective experience but it explains
what a subjective experience is it's a
hypothetical state of the world that
allows me to convey to you what my
perceptual system is telling me so now
let's do it for a chatbot oh well Ray
wants to say something well you you have
to be uh mindful of
Consciousness because if you hurt
somebody uh who who we believe is
conscious you could be liable for that
that and you'd be very guilty about it
uh if you hurt GPT-4
uh you may have a different view of it
uh and probably no one would really take
you to account aside from its Financial
value so we really have to be mindful of
of Consciousness it's extremely
important for us to exist as as human I
agree but I'm trying to change people's
notion of what it is particularly what
subjective experiences I don't think we
can talk about Consciousness until we
get straight about this idea of an inner
theater that we experience which I think
is a huge mistake so let me just carry
on with what I was saying and tell you I
describe to you a chatbot having a
subjective experience in just the same
way as we have subjective experiences so
suppose I have a chatbot and it's got a
camera and it's got a robot arm and it
speaks obviously and it's being trained
up if I put an object in front of it and
tell it to point at the object it'll
Point straight at the object that's fine
now I put a prism in front of its lens
so I've messed with its perceptual
system and now I put an object in front
of it and tell it to point at the
object and it points off to one side
because the prism bent the light rays
and so I say to the chatbot no that's
not where the object is the object's
straight in front of you and the chatbot
says oh I see you put a prism in front
of my lens so the object's actually
straight in front of me but I had the
subjective experience that it was off to
one side and I I think if the chat bot
says that it's using the words
subjective experience in exactly the
same way we use them so the key to all
this is to think about how we use words
and try and separate how we actually use
words from the model we've constructed
of what they mean and the model we've
constructed of what they mean is
hopelessly wrong it's this inner theater
model well I want to take this one step
further which is at what point do these
AIS start to have
rights that they should not be shut down
that they have a unique um uh they're a
unique entity uh and will make an
argument uh for some level of
Independence and continuity right but
the there is one difference which is you
can recreate it I can go and Destroy
some chatbot and because it's all uh
electronic we've got all of its uh
all of its firings and so on and we can
recreate it exactly as it was we can't
do that with humans we will be able to
do that if we can actually understand
what's going on in our minds so if we
map the human the 100 billion neurons
and 100 trillion synaptic connections
and then um I summarily destroy you
because it's fine because I can recreate
you that's okay
then let me say something about that
there's a difference here I agree with
Ray about these digital intelligences
are Immortal in the sense that if you
saved the weights you can then make new
hardware and run exactly the same neural
net on the new hardware and it's because
they're digital you can do exactly the
same thing that's also why they can
share knowledge so well if you have
different copies of the same model they
can share gradients but the brain is
largely analog it's one bit digital for
neurons they fire or they don't fire but
the way neuron computes the total input
is analog and that means I don't think
you can reproduce it so I think we're
mortal and we're intrinsically mortal
well well I disagree that you can't
recreate
analog uh
realities we we do that all the time or
can we can create a but recreate I don't
think you can recreate them really
accurately if this if the precise timing
at synapses and so on is all analog I
think you'll have a you it'll be almost
impossible to do a faithful
reconstruction of that let's let's agree
on an an approximation both of you have
been at the center of this um
extraordinary uh last few years can I
ask you is it moving faster than you
expected it
to how does it does it feel to you it
feels like a few years I mean I made a
prediction in
1999 it feels like we're two or three
years ahead of that so it's still pretty
close Jeffrey how about you yeah I think
for everybody except Ray it's moving
faster than we
expected did you know that your
microbiome is composed of trillions of
bacteria viruses and microbes and that
they play a critical role in your health
now research has increasingly shown that
microbiomes impact not just digestion
but a wide range of health conditions
including digestive disorders from IBS
to Crohn's disease metabolic disorders
from obesity to type 2 diabetes
autoimmune disease like rheumatoid
arthritis and multiple sclerosis mental
health conditions like depression and
anxiety and cardiovascular disease
Viome has a product I've been using for
years called full body intelligence
which collects just a few drops of your
blood saliva and stool and can tell you
so much about your health they've tested
over 700,000 individuals and use their
AI models to deliver key critical
guidelines and insights about their
members Health like what foods you
should eat what foods you shouldn't eat
what supplements or probiotics to take
as well as your biological age and other
deep Health insights and as a result of
the recommendations that viome has made
to their members the results have been
Stellar as reported in the American
Journal of Lifestyle medicine after just
6 months members reported the following
a 36% reduction in depression a 40%
reduction in anxiety a 30% reduction
in diabetes and a 48% reduction in IBS
listen I've been using viome for 3 years
I know that my oral and gut health is
absolutely critical to me it's one of my
personal top areas of focus best of all
viome is Affordable which is part of my
mission to democratize healthcare if you
want to join me on this journey and get
20% off the full body intelligence test
go to viome.com/peter when it comes to your
health knowledge is power
again that's viome.com
Peter um given the role that you had in
developing the neural networks back
propagation and all what is is there a
next Great Leap in these models uh in AI
technology that you imagine will move
this a thousand times uh
farther not that I know but Ray may have
different
thoughts well we can use software to to
gain more advantage in the hardware so
we're not just limited to the the chart
you showed before because we can use
software to make it more
effective um and we've done that
already uh chatbots are coming out that
get more value per per
compute uh and I believe there's probably
a bit more we can do in that um you
know I define a singularity Ray as a
point Beyond which I can't predict what
happens next that's why we use the word
Singularity but when when you talk about
the singularity in 2045 I don't know
anybody who can who can tell me what's
going to happen past you know 2026 let
alone 2040 or 2045 so I am I I
wanted to ask you this for a while why
did you put that time if we have digital
super intelligence a billion times more
advanced than human 2026 you may not be
able to understand everything going on
but we can understand it you know maybe
it's like uh 100
humans uh but that's not beyond what we
can
comprehend 2045 it'll be like a million
humans and we can't begin to understand
that so approximately at that time uh I
we borrow this phrase from physics and
called it a singularity
uh Jeff how far out are you able to
see the advances for in the AI
world what's your so my current opinion
is we'll get superintelligence with a
probability of 50% in between 5 and 20
years so I think that's a little slower
than some people think a little faster
than other people think it more or less
fits in with Ray's perspective from a
long time ago um
which surprises
me but I think there's huge
uncertainties here I think it's still
conceivable will hit some kind of block
but I don't actually believe that if you
look at the progress recently it's been
so fast and even without any new
scientific breakthroughs just by scaling
things up will make things a lot more
intelligent and there will be scientific
breakthroughs we're going to get more
things like Transformers Transformers
made a significant difference in 2017
um and we'll get more things like that
so I'm I'm fairly convinced we're going
to get super intelligence maybe not in
20 years but certainly it's going to be
in less than 100 years so you know Elon
is not known for his time accuracy on
predictions um but he did say that he
expected call it AGI in
2025 and that by 2029 AI would be
equivalent to All Humans um that's just
a fallacy in your
mind I think that's ambitious like I say
there's a lot of uncertainty here um
it's conceivable he's right but I would
be very surprised by that I'm not saying
uh it's going to be equivalent to All
Humans in one
machine
um it'll be equivalent to a million
humans but and that's still hard to to
comprehend so we're we're here to debate
a a a topic I'm trying to find a debate
topic here Jeff and Ry that would be
meaningful for people to really stop and
think about this and really own their
answers uh because we hear about it I
think this is the most important
conversation to have in the dinner table
in your boardroom in the halls of
Congress and your in your National
leadership and and you know talking
about AGI or you know human level
intelligence is one thing
but talking about digital super
intelligence right we're going to hear
next from Mo Gawdat um and we'll talk
about what happens when your AI progeny
are a billion times more intelligent
than than you uh things could end up uh
very rapidly in a very different
direction than you expected them to go
they can diverge right the speed can
cause great Divergence very rapidly I'm
I'm curious how do you think about this
as the greatest threat and the greatest
hope I mean first of all that's why
we're calling it a singularity because
we don't we don't know we don't really
know but um and I think it is a great
hope it's moving very very
quickly uh Nobody Knows the answer to
the kind of questions that came up in
the last
presentation
um but things happen that are surprising
the fact the fact that we've had no
Atomic weapons go off in the last 80
years it's pretty amazing it it it is
but it they're much easier to track
they're much more expensive to create
there are a whole reasons why it's a
million times easier to use a dystopian
AI system versus an atomic
weapon right yes and no I mean uh we've
got I don't know 10,000 of them or
something it's still pretty
extraordinary and still very dangerous
and I think it's actually the greatest
danger and has nothing to do with
AI
um but I think I think if you imagined
that people had open sourced the
technology and any graduate student if
he could get hands- on a few gpus could
make atomic bombs um that would be very
scary so they didn't really open source
nuclear weapons there's a limited number
of people who can construct them and
deploy them and people are now open
sourcing these um large language models
which are really not just language
models I think that's very
dangerous um so that's a f that's an
interesting question to take for our
last two minutes here there is a
movement right now to say You must open
source the models and uh and we've seen
meta we've seen the open source movement
we've seen Elon talk about Grok going
open source uh are you saying that these
should not be open source
Jeff well once you've got the weights
you can fine-tune them to do bad things
and it doesn't cost that much to train a
foundation model maybe you need 10
million maybe 100 million but a small gang
of criminals can't do it to fine tune an
open source model is quite easy you
don't need that that much resources
probably you can do it for a million
and that means they're going to be used
for terrible things and they're very
powerful things well we can also avoid
these dangers with intelligence we get
from the same models yeah the the AI
white hat versus black hat approach yes
I had this argument with Yann and Yann's
view is the white hats will always have
more resources than um the bad guys um
of course Yann thinks Mark Zuckerberg's a
good guy so we don't necessarily agree
on that
um I I'm I just think there's huge
uncertainty so we ought to be
cautious and open sourcing these big
models is not caution all right um Jeff
and Ray uh thank you so much for your
guidance your wisdom ladies and
Gentlemen let's give it up for Ray Kurzweil
and Jeffrey
Hinton
[Music]