“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company
Summary
TLDR In this video, Geoffrey Hinton, often called the "Godfather of AI," discusses the rapid development of artificial intelligence (A.I.) and its potential risks. He explains that although he originally believed that building computer models of how the brain learns would help us understand the brain better and, as a side effect, improve machine learning, he has recently come to realize that the digital intelligences running on computers may already surpass the brain in learning efficiency. Hinton stresses the "existential risk" of A.I.: the possibility that it becomes smarter than humans and ultimately takes control. He also covers other risks, including job displacement and the spread of fake news and misinformation. Despite the uncertainty, Hinton argues that the most important thing now is to invest substantial resources in understanding and controlling A.I.'s development, so as to secure its benefits and minimize its potential harms.
Takeaways
- 🧠 Geoffrey Hinton believes the threat of A.I. may be even more urgent than climate change, and he recently left Google so that he could speak freely and raise awareness of the risks.
- 🚀 Hinton is a pioneer of the field; he long believed that building computer models of how the brain learns would help us understand the brain better and, as a side effect, improve machine learning.
- 🔄 Recently, however, Hinton realized that digital intelligences on computers may learn better than the brain, changing his long-held view that the way to improve them was to make them more brain-like.
- 🤖 Hinton used Google's PaLM system as a litmus test and was shocked to find it could explain why jokes were funny, suggesting these systems may have a better way of handling information than humans do.
- 📈 Although artificial neural networks have only about a hundredth as many connection strengths as the brain, they know thousands of times more common-sense knowledge than any human, suggesting they store and process information far more efficiently.
- 💡 Hinton believes the brain may not be using as good a learning algorithm as digital intelligences, because brains cannot exchange information nearly as quickly as digital systems can.
- 🌐 Digital intelligences can run on many different pieces of hardware and learn from one another simply by copying their weights, something brains cannot do.
- 🤖 A.I. works completely differently from what people expected 50 years ago: it understands things by learning large patterns of neural activity, not through logical reasoning.
- 🚨 Hinton highlights a range of threats, including job displacement, a widening gap between rich and poor, and the spread of fake news and fake videos; he argues strong government regulation is needed to address them.
- 🌍 On the existential threat, Hinton believes companies and countries worldwide may cooperate, because none of them wants a superintelligence to take control.
- 💭 Hinton expresses deep uncertainty about A.I.'s future; he believes we are in uncharted territory, and the best strategy is to work hard to ensure that whatever happens is as good as it could be.
- 🏆 Hinton speaks positively of Google, saying it has behaved responsibly in the A.I. field, and that he left so he could speak his mind more freely.
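The weight-copying point in the takeaways above can be sketched in code: many identical replicas of one model each learn from different data, then share everything they learned by averaging and copying weights. This is a hypothetical minimal illustration of the idea, not Hinton's or Google's actual training setup; the `local_update` step is a toy stand-in for real gradient training.

```python
import random

def local_update(weights, shard, lr=0.1):
    # One toy learning step on this replica's own shard of data:
    # nudge each weight toward the shard's per-dimension mean.
    means = [sum(row[i] for row in shard) / len(shard) for i in range(len(weights))]
    return [w + lr * (m - w) for w, m in zip(weights, means)]

random.seed(0)
n_replicas, dim = 10, 4

# Identical copies of one model running on (conceptually) different hardware.
replicas = [[0.0] * dim for _ in range(n_replicas)]

# Each replica learns from a different batch of data...
shards = [[[random.gauss(1.0, 1.0) for _ in range(dim)] for _ in range(32)]
          for _ in range(n_replicas)]
replicas = [local_update(w, s) for w, s in zip(replicas, shards)]

# ...then they communicate what they learned simply by averaging and
# copying the weights -- possible only because all copies are identical.
shared = [sum(w[i] for w in replicas) / n_replicas for i in range(dim)]
replicas = [list(shared) for _ in range(n_replicas)]

# Every replica now carries what all ten learned; brains have no analogue of this.
assert all(r == shared for r in replicas)
```

The bandwidth contrast Hinton draws is exactly here: copying a weight vector moves the whole of what was learned at once, whereas brains can only transmit a few hundred bits per second through sentences.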
Q & A
Why did Geoffrey Hinton leave Google?
-Hinton left Google so that he could speak more freely about the risks of A.I. and raise public awareness of those risks.
How does Professor Hinton describe his original expectations for how computers would learn?
-He originally thought that if we built computer models of how the brain learns, we would understand more about how the brain learns and, as a side effect, get better machine learning on computers.
What factors does Professor Hinton say changed his view of digital intelligence?
-He cites three: 1) Google's PaLM system could explain why jokes were funny; 2) chatbots and other A.I. systems know thousands of times more common-sense knowledge than any human, yet their artificial neural networks have only about a trillion connection strengths, while the human brain has about 100 trillion; 3) he became convinced that the brain is not using as good a learning algorithm as these digital intelligences.
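The arithmetic behind the second factor is worth making explicit: an artificial net with about \(10^{12}\) connection strengths versus a brain with about \(10^{14}\) gives

```latex
\frac{10^{12}\ \text{(artificial connection strengths)}}{10^{14}\ \text{(brain connection strengths)}} = \frac{1}{100}
```

so with roughly a hundredth of the brain's storage capacity these models still know thousands of times more, which is what suggests they have a far more efficient way of getting information into their connections.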
How does Professor Hinton view the future development of A.I.?
-He believes we are in a period of huge uncertainty; we should be neither overly optimistic nor overly pessimistic about A.I.'s future, because there are too many unknowns.
What threats from A.I. does Professor Hinton mention?
-Several: A.I. may surpass human intelligence and take control of everything (the existential risk); it may eliminate jobs; and it may flood society with fake news and misinformation, destabilizing society and politics.
How does Professor Hinton think the misinformation problem should be managed?
-He believes fake videos, voices, and images should be treated like counterfeit money: strong government regulation should make producing and knowingly spreading such fakes a serious crime.
What is Professor Hinton's view on whether A.I. can reach human-level consciousness?
-He thinks bringing consciousness into the discussion only clouds the issue. There is no clear definition of what counts as "conscious," so debating whether A.I. is conscious does not help resolve the question.
What positive uses of A.I. in society does Professor Hinton mention?
-He says A.I. could be tremendously useful in medicine, in designing new nanomaterials, in predicting floods and earthquakes, in improving climate and weather projections, and in understanding climate change.
What role does Professor Hinton see for tech companies in setting the rules for A.I. development?
-He believes the engineers and researchers developing these intelligent systems should run many small experiments as they build them, to see what happens during development and to learn how to keep control before the systems get out of control.
Does Professor Hinton support the proposal to pause A.I. development?
-No; he considers a pause unrealistic. A.I. has enormous potential and usefulness across many fields, so its development is inevitable.
What is Professor Hinton's attitude toward whether humanity can meet the challenges A.I. brings?
-He believes we are in a time of great uncertainty; predicting the future is like looking into fog, and the best we can do is try to ensure that whatever happens is as good as possible.
What topics can Professor Hinton discuss more freely now that he has left Google?
-Having left Google, he can discuss singularities and other sensitive topics related to A.I. development without having to consider the implications for the company.
Outlines
🚀 The Threat and Promise of A.I.: Geoffrey Hinton's Insights
In this video, Geoffrey Hinton, known as the Godfather of AI, discusses the rapid development of artificial intelligence (A.I.) and its potential threats, which he believes may be even more urgent than climate change. Hinton worked at Google for years but left so that he could speak freely and raise public awareness of A.I.'s risks. He shares his thinking about how computers learn, along with recent realizations that led him to conclude that the digital intelligences we are building may learn better than the human brain. Hinton also discusses how A.I. learns human intuition through huge neural network models, and points to the social and ethical challenges A.I. development may bring, including job losses, the spread of misinformation, and threats to our ability to know what is true.
🤖 A.I. Consciousness and Decision-Making: Intuition Beyond Logic
In the second segment, Hinton digs into how A.I. makes decisions, using a hypothetical question about the sex of cats and dogs to show how humans decide by intuition rather than logical reasoning. A.I., he explains, simulates human intuition by learning large patterns of neural activity, which lets it intuitively "know" things without any chain of reasoning. Hinton also addresses the debate over whether A.I. is conscious, arguing that bringing consciousness into the question of whether A.I. will become smarter than us only clouds the issue. He further discusses A.I.'s impact on white-collar jobs and how government regulation could curb the spread of fake news and misinformation.
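The cat/dog intuition Hinton describes can be illustrated with toy vectors standing in for "big patterns of neural activity": if the learned pattern for "cat" lies closer to the pattern for "woman" than to "man," the pairing falls straight out of a similarity comparison, with no reasoning step. The vectors below are hand-picked for illustration and are not taken from any real model.

```python
import math

def cosine(u, v):
    # Similarity between two activity patterns: closer to 1.0 = more alike.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 4-d "patterns of neural activity" (made up for illustration).
patterns = {
    "cat":   [0.9, 0.2, 0.7, 0.1],
    "dog":   [0.3, 0.8, 0.2, 0.6],
    "woman": [0.8, 0.3, 0.6, 0.2],
    "man":   [0.2, 0.9, 0.1, 0.7],
}

# The pattern for "cat" is more like "woman" than "man", and vice versa for
# "dog", so the answer is read off the learned representations -- no inference.
assert cosine(patterns["cat"], patterns["woman"]) > cosine(patterns["cat"], patterns["man"])
assert cosine(patterns["dog"], patterns["man"]) > cosine(patterns["dog"], patterns["woman"])
```

Nothing sequential happens here: the "decision" is just which stored pattern is nearest, which is what makes the answer feel obvious rather than reasoned.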
🌐 International Cooperation and the Future of A.I.
In the third segment, Hinton discusses the prospects for cooperation among countries and companies on A.I. development. Facing the "existential risk" that A.I. becomes smarter than humans and takes control, companies and countries worldwide have an incentive to cooperate, because nobody wants a superintelligence to take over from humanity. Hinton stresses that despite political challenges, international rules and standards are needed to contain fake news and protect democracy. He also explains that his decision to leave Google freed him to discuss A.I. issues without having to consider the company's position.
🔬 Controllability of A.I. and the Challenges Ahead
In the final segment, Hinton expresses his uncertainty about A.I.'s future development. While A.I. holds enormous promise in fields such as medicine, materials design, and climate prediction, there is also a risk that we will be unable to control a superintelligence. He criticizes calls to pause A.I. development as unrealistic, because development cannot be stopped. Hinton suggests that far more resources should go into understanding and controlling A.I.'s negative side effects, not just into developing these systems. He closes on a note of cautious realism, stressing that we must work to ensure A.I.'s development turns out well, while acknowledging that humanity may be just a passing phase in the evolution of intelligence, with the future possibly belonging entirely to digital intelligences.
Keywords
💡Artificial Intelligence
💡Deep Learning
💡Neural Networks
💡Fake News
💡Consciousness
💡Superintelligence
💡Self-Driving Cars
💡Climate Change
💡Google
💡Regulation
💡Global Cooperation
Highlights
The threat of A.I. may be even more urgent than climate change
Geoffrey Hinton, known as the Godfather of AI, recently left Google to speak freely and raise awareness of the risks
Hinton believes computer models may learn better than the human brain
Digital intelligences may already have surpassed the human brain at learning
Hinton used Google's PaLM system to test A.I.'s understanding of humor
ChatGPT knows thousands of times more common-sense knowledge than any human, with only about a hundredth as many connection strengths
Digital intelligences' connection strengths and information processing may be more efficient than the brain's
Digital intelligences can exchange information rapidly, while brains exchange it slowly
Products from OpenAI and Google demonstrate A.I.'s advanced abilities in autocompletion and understanding
A.I. works completely differently from the logic and symbolic expressions people expected 50 years ago
A.I. forms intuitions by learning large patterns of neural activity
Hinton is skeptical of framing A.I. in terms of "consciousness"
A.I. may threaten white-collar jobs, much as automation once threatened blue-collar jobs
The existential threat is the possibility that A.I. surpasses human intelligence and takes control of everything
Government regulation is needed to stop a flood of fake videos, voices, and images
Hinton discussed A.I. regulation with Bernie Sanders
Hinton stresses the need to invest resources in understanding and controlling A.I.'s potential negative effects as it is developed
Hinton believes tech companies and their researchers are best placed to work out the rules for controlling a superintelligence
Hinton did not sign the open letter calling for a pause on A.I. development, considering it unrealistic
Hinton highlights A.I.'s potential in medicine, designing new materials, predicting natural disasters, and understanding climate change
Hinton is cautious about the uncertain future, arguing we should put in the effort to ensure the best possible outcome
Transcripts
our next guest believes the threat of
A.I might be even more urgent than
climate change if you can imagine that
Geoffrey Hinton is considered the
Godfather of A.I and he made headlines
with his recent departure from Google he
quit to speak freely and to raise
awareness of the risks to dive deeper
into the dangers and how to manage them
he's joining Hari Sreenivasan now
Christiane thanks Geoffrey Hinton thanks
so much for joining us
um you are one of the more celebrated
names in artificial intelligence you
have been working at this for more than
40 years and I wonder as you've thought
about how computers learn
did it go the way you thought it would
when you started in this field it did
until very recently in fact I thought if
we built computer models of how the
brain learns we would understand more
about how the brain learns and as a side
effect we will get better machine
learning on computers
and all that was going on very well and
then very suddenly I realized recently
that maybe the digital intelligences we
were building on computers were actually
learning better than the brain and that
sort of changed my mind after about 50
years of thinking we would make better
digital intelligences by making them
more like the brains I suddenly realized
we might have something rather different
that was already better now this is
something you and your colleagues must
have been thinking about over these 50
years I mean what was there a Tipping
Point there were maybe there were
several ingredients to it like a year or
two ago
I used a Google system called PaLM it
was a big chatbot and it could explain
why jokes were funny and I've been
using that as a kind of litmus test of
whether these things really understood
what was going on and I was slightly
shocked that it could explain that jokes
were funny some with one ingredient
another ingredient was the fact that
things like chat gbt
know thousands of times more than any
humans in just sort of basic Common
Sense knowledge
but they only have about a trillion
connection strengths in their artificial
neural Nets and we have about 100
trillion connection strengths in the
brain
so with a hundredth as much storage
capacity it knew thousands of times more
than us and that strongly suggests that
it's got a better way of getting
information into the connections and
then the third thing was very recently a
couple of months ago
I suddenly became convinced that the
brain wasn't using as good a learning
algorithm as these digital intelligences
and in particular it wasn't as good
because
brains can't exchange information really
fast and these digital intelligences can
I can have one model running on ten
thousand different bits of hardware
it's got the same connection strengths
in every copy of the model on the
different Hardware
all the different agents running on the
different Hardware can all learn from
different bits of data but then they can
communicate to each other what they
learned just by copying the weights
because they all work identically and
brains aren't like that so these guys
can communicate at trillions of bits a
second and we can communicate at
hundreds of bits a second by sentences
there's such a huge difference and it's
why ChatGPT can learn thousands of
times more than you can for people who
might not be following kind of what's
been happening with OpenAI and ChatGPT
and Google's product Bard explain what
those are because uh some people have
explained it as kind of the autocomplete
feature finishing your thought for you
but what are these
artificial intelligence is doing okay
um it's difficult to explain but I'll do
my best
um it's true in a sense they're autocomplete
but if you think about it if
you want to do really good autocomplete
you need to understand what somebody's
saying and they understand what you're
saying
and they've learned to understand what
you're saying just by trying to do
autocomplete
um but they now do seem to really
understand
so the way they understand isn't at all
like people in AI 50 years ago thought
it would be in old-fashioned AI people
thought
you'd have internal symbolic Expressions
a bit like sentences in your head but in
some kind of cleaned up language then
you would apply rules to infer new
sentences from old sentences and that's
how it all worked and it's nothing like
that it's completely different and let
me give you a sense of just how
different it is I can give you a problem
that doesn't make any sense in logic but
where you know the answer intuitively
and these big models are really models
of human intuition so
suppose I tell you that
um you know that there's male cats and
female cats and male dogs and female
dogs
but suppose I tell you you have to make
a choice either you're going to have all
cats being male and all dogs being
female or you can have all cats being
female and all dogs being male
now you know it's biological nonsense
but you also know it's much more natural
to make all cats female and all dogs
male that's not a question of logic what
that's about is that inside your head you have
a big pattern of neural activity that
represents cat
and you also have a big pattern of
neural activity that represents man and
a big pattern of neural activity that
represents woman
and the big pattern for cat is more like
the pattern for woman than it is like
the pattern for man that's the result of
a lot of learning about men and women
and cats and dogs
um but it's now just intuitively obvious
to you that cats are more like women and
dogs are more like men because of these
big patterns of neural activity you've
learned and it doesn't involve
sequential reasoning or anything you
didn't have to do reasoning to solve
that problem it's just obvious that's
how these things are working they're
learning these big patterns of activity to
represent things and that makes all
sorts of things just obvious to them you
know what you're describing here ideas
like intuition and basically context
those are the things that
scientists and researchers always say
well this is why we're fairly positive
that
we're not going to head to that sort of
Terminator scenario where you know the
artificial intelligence gets smarter
than human beings but what you're
describing is these are these are almost
um Consciousness sort of emotion level
decision processes okay I think if you
bring sentience into it it just clouds
the issue so lots of people are very
confident these things aren't sentient
but if you ask them what do you mean by
sentient they don't know and I don't
really understand how they're so
confident they're not sentient if they
don't know what they mean by sentient
but I don't think it helps to discuss
that when you're thinking about whether
they'll get smarter than us I am very
confident that they think so suppose I'm
talking to a chatbot and I suddenly
realize it's telling me all sorts of
things I don't want to know like it's
telling me
it's writing out responses about someone
called Beyonce who I'm not interested in
because I'm an old white male and I
suddenly realized it thinks I'm a
teenage girl now when I use the word
thinks there I think that's exactly the
same sense of thinks is when I say you
think something
um if I were to ask it am I a teenage
girl it would say yes if I were to look
at the history of our conversation I'd
probably be able to see why it thinks
I'm a teenage girl and I think when I
say it thinks I'm a teenage girl I'm
using the word think in just the same
sense as we normally use it it really
does think that give me an idea of why
this is such a significant Leap Forward
I mean to me it seems like there are
parallel concerns
for in the 80s and 90s blue-collar
workers were concerned about robots
coming in and replacing them and not
being able to control them and now this
is kind of a threat to the White Collar
class of people saying that there are
these Bots and agents that can do a lot
of things that we otherwise thought
would be something only people can do yes I
think there's a lot of different things
we need to worry about with this with
these new kinds of digital intelligence
and so what I've been talking about
mainly is what I call the existential
threat which is the chance that they get
more intelligent than us and they'll
take over from us they'll get control
that's a very different threat from many
other threats which are also severe so they
include
these things taking away jobs in a
decent society that would be great it
would mean everything got more
productive and everyone was better off
but the danger is that it'll make the
rich richer and the poor poorer that's
not ai's fault that's how we organize
Society
um there's dangers about them
making it impossible to know what's True
by having so many fakes out there that's
a different danger that's something you
might be able to address
by treating it like counterfeiting
governments do not like you printing
their money and they make it a
serious offense to print money
it's also a serious offense if you're
given some fake money to pass it to
somebody else if you knew it was fake
that's a very serious offense
I think government's going to have to
make similar regulations for fake videos
and fake voices and fake images it's
going to be hard as far as I can see the
only way to stop ourselves being swamped
by these fake videos and fake voices and
fake images is to have strong government
regulation that makes it a serious crime
you go to jail for 10 years if you
produce a video with AI and it doesn't
say it's made with AI that's what they
do for counterfeit money and this is a
series of threat is going to fit money
so my view is that's what they ought to
be doing I actually talked to Bernie
Sanders last week about it and
he liked that view of it I can
understand governments and central banks
and private Banks all agreeing on
certain standards because there's money
at stake and I wonder
is there enough incentive for
governments to sit down together and try
to craft some sort of rules of what's
acceptable and what's not some sort of
Geneva Convention or Accords it would be
great if governments could say look
um
these fake videos are so good at
manipulating the electorate that we need
them all marked as fake otherwise we're
going to lose democracy
the problem is that some politicians
would like to lose democracy so that's
going to make it hard so how do you
solve for that I mean it seems like this
Genie is sort of out of the bottle so
what we're talking about right now is
the genie of being swamped through fake
news yeah and that clearly is somewhat
out of the bottle it's fairly clear that
organizations like Cambridge Analytica
by pumping out fake news had an effect
on brexit and it's fairly clear that
um Facebook was manipulated to have an
effect on the 2016 election so the genie
is out of the bottle in that sense we
can try and at least contain it a bit
but that's not the main thing I'm
talking about the main thing I'm talking
about is the risk of these things
becoming super intelligent and taking
over control from us I think for the
existential threat we're all in the same
boat the Chinese the Americans the
Europeans they all would not like
um super intelligence to take over from
people
and so I think for that existential
threat we will get collaboration between
um all the companies and all the
countries because none of them want the
super intelligence to take over so in
that sense that's like Global nuclear
war where even during the Cold War
people could collaborate to prevent there
being a global nuclear war because it
was not in anybody's interests sure and
so that's one in a sense positive thing
about this existential threat it should
be possible to get people to collaborate
to prevent it
but for the all the other threats it's
more difficult to see how you're going
to get collaboration one of your more
recent employers was Google and you were
a VP and a fellow there and
you recently decided to leave the
company to be able to speak more freely
about AI now they just launched their
own version of kind of a GPT of Bard
back in March so tell me here we are now
what do you feel like you can say today
or will say today that you couldn't say
a few months ago
um not much really I just wanted to be
if you work for a company and you're
talking to the media
you tend to think what implications does
this have to the company at least you
ought to think back because they're
paying you
um I don't think it's sort of honest to
take the money from the company and then
completely ignore the company's interest
um
but if I don't take the money I just
don't have to think what's good for
Google and what isn't I can just say
what I think
it happens to be the case that I mean
everybody wants to transmit the story as
I left Google because they were doing
bad things that's more or less the
opposite of the truth
um I think Google has behaved very
responsibly and I think having left
Google I can say good things about
Google and be more credible I just left
so I'm not constrained to think about
the implications for Google when I say
things about singularities and things
like that do you think that tech
companies given that it's mostly their
engineering staff that are trying to
work on developing these intelligences
are going to have a better opportunity
to create the rules of the road then say
governments or third parties
I do actually I think there's some
places where governments have to be
involved like regulations that force you
to show whether something was AI
generated
but in terms of keeping control of a
super intelligence
what you need is the people who are
developing it to be doing lots of little
experiments with it and seeing what
happens as they're developing it and
before it's out of control
and that's going to be mainly the
researchers in companies
I don't think you can leave it to
philosophers to speculate about what
might happen anybody who's ever written
a computer program knows that getting a
little bit of empirical feedback by
playing with things quickly disabuses
you of your idea that you really
understood what was going on
and so it's the people in the companies
developing it who are going to
understand how to keep control of it if
that's possible so I agree with people
like Sam Altman at open AI that this
stuff is inevitably going to be
developed because there's so many good
uses of it and what we need is as it's
being developed we put a lot of
resources into trying to understand how
to keep control of it and avoid some of
the bad side effects back in March there
were more than I'd say a thousand
different folks in the tech industry
including leaders like Steve Wozniak and
Elon Musk who signed an open letter
asking essentially to have like a a
six-month pause on the development of
artificial intelligence and you didn't
sign that how come I thought it was
completely unrealistic the point is
these digital intelligences are going to
be tremendously useful for things like
medicine for reading scans rapidly and
accurately
it's been slightly slower than I
expected but it's coming
um they're going to be tremendously
useful for designing new nanomaterials
so we can make more efficient solar
cells for example
they're going to be tremendously useful
or they already are for predicting
floods and earthquakes and getting
better climate getting better weather
projections they're going to be
tremendously useful in understanding
climate change so they're going to be
developed there's no way that's going to
be stopped so I thought it was maybe a
sensible way of getting media attention
but it wasn't a sensible thing to ask
for it just wasn't feasible what we
should be asking for is that comparable
resources are put into
dealing with the bad possible side
effects and dealing with how we keep
these things under control as are put
into developing them
so at present 99 percent of the money is going
into developing them and one percent's
going into sort of people saying oh
these things might be dangerous
it should be more like 50 50 I believe
when you kind of look back at the body
of work of your life and when you look
forward at what might be coming
are you optimistic that we'll be able as
Humanity to rise to this challenge or
are you
less so I think we're entering a time of
huge uncertainty I think one would be
foolish to be either optimistic or
pessimistic we just don't know what's
going to happen
the best we can do is say let's put a
lot of effort into trying to ensure that
whatever happens is as good as it could
have been
it's possible that there's no way we
will control these super intelligences
and that humanity is just a passing
phase in the evolution of intelligence
that in a few hundred years time there
won't be any people it'll all be digital
intelligences that's possible we just
don't know
um
predicting the future is a bit like
looking into fog you know how when you
look into fog you can see about a
hundred yards very clearly
and then 200 yards you can't see
anything there's a kind of wall
and I think that wall is at about five
years
Geoffrey Hinton thanks so much for your
time thank you for inviting me