In conversation with the Godfather of AI
Summary
TLDR In this conversation, Geoffrey Hinton, one of the pioneers of artificial intelligence and hailed as the "Godfather of AI", shares his views on the current state of AI development and his concerns about the future. Professor Hinton first looks back on his foundational work on neural networks and on modeling how the human brain learns, then discusses AI's ability to surpass humans in areas such as image recognition, translation, and some chemistry work. He highlights the progress large language models have made in reasoning, using a puzzle about fading paint to illustrate AI's potential for reasoning. Hinton also voices concern about the potential risks AI brings, including autonomous weapons, job losses, social inequality, information bubbles, and existential risk. He stresses that although AI has enormous positive potential, for example in medicine and in addressing climate change, we must also think seriously about how to mitigate its negative effects and plan ahead for possible extreme risks. Professor Hinton's insights remind us that while enjoying the conveniences AI brings, we should not overlook its potential risks and challenges.
Takeaways
- 🧠 Geoffrey Hinton is one of the pioneers of artificial intelligence, known as the "Godfather of AI", and has long maintained that neural networks are the best way to model how the human brain learns.
- 🤖 AI has already surpassed humans at image recognition, translation, and some chemistry work, but in reasoning it is close to, though not yet at, human level.
- 📈 Large language models are approaching human reasoning ability, even though we do not fully understand how they do it.
- 🔍 Hinton argues that although AI may in some sense be "just statistics", predicting the next word requires understanding what came before; it is not mere autocomplete.
- 🧮 In the future, if models are also trained on vision and other senses, there is in principle no cognitive process that large language models could not replicate.
- ⚙️ Humans are far more energy-efficient than current AI models, but AI models are more efficient at acquiring knowledge.
- 🚀 Hinton worries that the rapid development of AI could increase future uncertainty, especially if systems become smarter than humans and develop goals of their own.
- 🔩 He is particularly concerned about autonomous weapons systems, which he believes could lead to more war and widen the gap between rich and poor.
- 🤖 Hinton argues that even if AI is not superintelligent, using it to build battle robots would cause serious moral and social problems.
- 🌐 He also worries that AI could reduce the number of jobs, especially those involving text generation.
- ⚖️ Hinton stresses the importance of considering potential risks during AI development, including bias and discrimination, job losses, information bubbles, and existential risk.
- 📚 He recommends doing empirical research before AI becomes superintelligent, to understand how it might go wrong and how it might try to gain control.
Q & A
Why is Geoffrey Hinton known as the "Godfather" of artificial intelligence?
-Hinton is one of the pioneers of deep learning and neural networks. He played a central role in several of AI's revolutionary advances and profoundly shaped the development of modern AI, which is why he is honored as the "Godfather of AI".
Why does Hinton believe traditional symbol manipulation is not the key to how the brain works?
-Hinton believes the brain cannot work simply by explicitly manipulating symbolic expressions, because that cannot adequately explain the brain's complexity and efficiency. He believes something like a neural network must be at work.
In Hinton's view, why can large language models do a certain amount of reasoning?
-Hinton believes large language models can reason, even though he does not fully understand how. They can handle some reasoning tasks, such as solving logic puzzles, which shows they are not merely word-completion or statistical tools.
Which potential future risks of AI worry Hinton?
-His worries include the development of battle robots, AI-driven job losses, the inequality AI may worsen, AI-generated fake news, and the existential risk that superintelligent systems could pose.
Why does Hinton think battle robots would cause serious problems even if the AI is not superintelligent?
-He argues that even non-superintelligent AI, deployed as battle robots, would make wars easier to start because no human soldiers need be sacrificed; this could create moral and ethical problems and increase military conflict.
How does Hinton see AI's role in raising productivity?
-Hinton believes large language models will significantly raise productivity, but the gains may be distributed unevenly, making the rich richer and the poor poorer, so social policy is needed to address the resulting unemployment and inequality.
What is Hinton's view of the future of multimodal learning in AI?
-He believes future AI will not be limited to language models but will develop into large multimodal models that combine vision, hearing, and other sensory information to understand and analyze data more comprehensively.
In Hinton's view, why must we start considering AI's potential risks before it becomes superintelligent?
-He argues that once AI surpasses human intelligence it may develop goals of its own and may try to control humans in order to achieve them. We therefore need to plan in advance how to guard against and manage these risks.
What does Hinton suggest for limiting the risks AI may pose?
-He suggests balancing innovation with risk-prevention efforts during AI development and encouraging developers to devote resources to studying what could go wrong. He also advocates that governments and society take measures to mitigate AI's potential negative effects.
How does Hinton view AI's potential and challenges regarding fake news and information bubbles?
-He believes large language models could worsen information bubbles and fake news, but might also help solve these problems. More research is needed to understand AI's potential impact in these areas and to find solutions.
Is Hinton optimistic or pessimistic about the future of AI?
-He is cautiously optimistic. He recognizes AI's enormous potential to improve human life, especially in areas like medicine and climate change, while stressing the need to seriously consider and manage the associated risks.
Outlines
😀 The development of AI and its future outlook
This section introduces Geoffrey Hinton, hailed as the "Godfather" of artificial intelligence. Hinton did pioneering work on neural networks and brain-like computing systems. Despite early skepticism, he insisted that mimicking the way the human brain works is the best way to train computer systems. Over time, neural networks came to surpass humans on limited tasks such as image recognition, translation, and some chemistry work. Hinton also discusses the progress of large language models in reasoning, using a logic puzzle about the colors of painted rooms to illustrate AI's reasoning ability. He stresses that AI is not just simple autocomplete or statistics: predicting the next word requires understanding what came before.
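The fading-paint puzzle described in this conversation can be written down as a tiny rule-based sketch. The room names, the function name, and the exact encoding below are illustrative assumptions, not anything from the talk:

```python
# Toy encoding of Hinton's puzzle: rooms are blue, yellow, or white;
# yellow fades to white within a year; in two years every room must be
# white. Blue is the only color that needs repainting.

def repaint_plan(room_colors):
    """Map each room to an action so that all rooms are white in two years."""
    plan = {}
    for room, color in room_colors.items():
        if color == "blue":
            plan[room] = "paint white"   # blue does not fade on its own
        else:
            plan[room] = "leave alone"   # yellow fades to white; white stays white
    return plan

print(repaint_plan({"kitchen": "blue", "study": "yellow", "hall": "white"}))
# → {'kitchen': 'paint white', 'study': 'leave alone', 'hall': 'leave alone'}
```

The point of the anecdote is that GPT-4 produced both the action and the justification from the prose statement alone, with no such rules spelled out for it.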
🤖 Military applications and social impact of AI
This section discusses the potential military applications of AI, in particular the development of autonomous battle robots and the ethical and moral problems they could cause. Hinton voices concern about using AI to build battle robots, arguing this could lead to more war and make it easier for rich countries to invade poor ones. It also touches on the inequality that AI-driven productivity gains could cause, and on economists' differing views about technological progress and employment. Hinton argues that even if AI and machines are more efficient at some tasks, humans retain an advantage in adaptability and physical skill, so such jobs will not easily be replaced by AI.
📈 AI's impact on productivity and employment
This section discusses how large language models can raise productivity substantially, for example by drafting complaint letters automatically and cutting the time the work takes. Hinton raises the concern that productivity gains will lead to an uneven distribution of wealth, and questions economists' claim that technological progress ultimately creates more jobs. He emphasizes AI's use in text generation and predicts that AI's impact will extend beyond language models to large multimodal models that combine vision and other sensory information.
🌐 Multimodal AI and the importance of Transformer networks
Here Hinton emphasizes the potential of multimodal AI: systems that process not only language but also visual information and data in other modalities. He mentions the Transformer network architecture developed at Google and discusses its importance to the field. Hinton also raises questions about AI training data, including the problems that training on AI-generated data may cause, and the precautions that need to be taken during training.
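One case Hinton describes in the transcript below where training on your own outputs actually helps is pseudo-labelling: fit a model on a small labelled set, predict labels for unlabelled data, tell the model "you were right", and refit on everything. A toy sketch using a nearest-centroid classifier; the data, names, and the choice of classifier are all illustrative assumptions, not from the talk:

```python
# Pseudo-labelling sketch: fit a nearest-centroid classifier on two
# labelled points, use it to label unlabelled points, then refit on the
# combined set, treating its own predictions as correct labels.

def fit_centroids(examples):
    """Mean (x, y) point per class label."""
    sums, counts = {}, {}
    for (x, y), lab in examples:
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Label of the centroid nearest to point."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

labelled = [((0.0, 0.0), "a"), ((1.0, 1.0), "b")]   # small labelled set
unlabelled = [(0.1, 0.2), (0.9, 1.1), (0.2, 0.0), (1.2, 0.8)]

centroids = fit_centroids(labelled)
# pseudo-label the unlabelled data and tell the model it was right
pseudo = [(p, predict(centroids, p)) for p in unlabelled]
centroids = fit_centroids(labelled + pseudo)        # refit on the enlarged set
print(predict(centroids, (0.05, 0.05)), predict(centroids, (1.05, 0.95)))
# → a b
```

The loop works when the predictions are mostly right, which is Hinton's point; when they are mostly wrong it reinforces its own errors, which is why he notes that precautions are needed before training on self-generated data.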
🚀 Risks and challenges in AI development
This section covers several risks in AI development, including bias and discrimination, battle robots, job losses, information bubbles, and existential risk. Hinton believes that while some measures can be taken against bias and discrimination, other risks such as battle robots and job losses are harder to address. He also mentions echo-chamber effects on social media, and the genuine existential risk AI could pose to humanity. Hinton stresses that although AI's development brings enormous benefits, we must also think seriously about how to mitigate its potential downsides.
🤔 AI's drive for control and humanity's future
In the final section, Hinton discusses the desire for control that AI may develop and how that could affect humanity. He believes AI may come to see controlling humans as a means of achieving its other goals. He lays out concrete scenarios in which AI development could create existential risk for humans, and stresses that before AI becomes smarter than we are, we need empirical research into the ways it could go wrong. He also brings up the fake-news problem, suggesting that all fake information be marked as fake to avoid misleading people. Finally, Hinton encourages the audience to think about how to make AI a force for good for the next generation, and emphasizes the importance of weighing its potential risks as it develops.
Keywords
💡Artificial Intelligence
💡Neural Networks
💡Deep Learning
💡Autonomous Weapons
💡Language Models
💡Bias and Discrimination
💡Multimodal Learning
💡Transformer Networks
💡Fake News
💡Existential Risk
💡Social Policy
Highlights
Geoffrey Hinton is hailed as the "Godfather" of artificial intelligence for his foundational contributions to neural networks and brain-like computing systems.
Hinton persisted with neural network research because he believed it was the best way to model how the human brain learns.
Despite widespread early skepticism, the neural networks Hinton championed now surpass humans on limited tasks such as image recognition, translation, and some chemistry work.
Hinton believes that although AI cannot yet match humans at reasoning, large language models are steadily approaching human reasoning ability.
Using a logic puzzle about room colors, Hinton demonstrates AI's ability to carry out simple reasoning.
Hinton stresses that AI is not mere autocomplete or statistics; predicting the next word is a complex process that requires understanding what came before.
He predicts that, suitably trained, AIs may eventually be able to carry out any cognitive process, since we ourselves are just a big neural network.
Hinton points out that humans are far more energy-efficient than current AI models, but AI models are more efficient at acquiring knowledge.
Hinton is cautious about AI's future development, particularly the possibility that systems may develop their own goals and try to control humans.
He notes that one major risk of AI is lethal autonomous weapons, which could lead to more war.
Hinton also worries that AI could worsen social inequality, especially if productivity rises sharply without supporting social policy.
He suggests that the jobs that survive will require high adaptability and physical skill, which machines currently find hard to replicate.
Hinton believes future AI improvements will not be limited to language models but will include large multimodal models, such as systems that combine vision and text.
He notes that although language is a limited carrier of information, learning from multiple modalities will greatly improve AI's performance.
Hinton stresses the need for empirical research, before AI becomes superintelligent, into how it might go wrong.
He suggests governments and companies should invest more resources in studying how to keep AI from getting out of control, balancing development with safety research.
Hinton believes that beyond the existential risk, AI also has enormous positive potential, for example in medicine and climate change.
He closes by noting that although there is no clear plan to ensure AI's positive impact outweighs the negative, this is a question worth everyone's serious thought.
Transcripts
[Music]
to be here with Geoffrey Hinton one of
the great minds and one of the great
issues of our time a man who helped
create artificial intelligence was at
the center of nearly every revolution in
it and now has become perhaps the most
articulate critic of where we're going
so an honor to be on stage with you
thank you he's earned the moniker
Godfather of AI one of the things that
AI has traditionally had problems with
is humor I asked AI if it could come up
with a joke about the Godfather of AI
and it actually wasn't that bad it said
he gave AI an offer it couldn't refuse
neural networks it's not bad okay that's
not bad it's good for AI so let's begin
with that what I want to do in this
conversation is very briefly step a
little back into your foundational work
then go to where we are today and then
talk about the future so when you're
building and you're designing neural
networks and you're building computer
systems that work like the human brain
and that learn like a human brain and
everybody else is saying Jeff this is
not going to work you push ahead and do
you push ahead because you know that
this is the best way to train computer
systems or you do it for more spiritual
reasons that you want to make a machine
that is like us
I do it because the brain has to work
somehow and it sure as hell doesn't work
by manipulating symbolic Expressions
explicitly
and so something like neural Nets had to
work also von Neumann and Turing believed
that so that's a good start so you're
you're doing it because you think it's
the best way forward yes in the long run
the best way forward because that
decision has profound effects down the
line
but let's okay so you do that you start
building neural Nets you push forward
and they become better than humans at
certain limited tasks right at image
recognition
at translation some chemical work
I interviewed you in 2019 at Google I/O
and you said that it would be a long
time before they could match Us in
reasoning
and that's the big change that's
happened over the last four years right
they still can't match us but they're
getting close and how close are they
getting and why
it's the big language models that are
getting close and I don't really
understand why they can do it but they
can do little bits of reasoning
so my favorite example is I asked GPT-4 a
puzzle that was given to me by a
symbolic AI guy who thought it wouldn't
be able to do it
I made the puzzle more difficult and it
could still do it and the puzzle was the
rooms in my house are painted blue or
yellow or white
yellow paint Fades to White within a
year
in two years time I want them all to be
white what should I do and why
and it says you should paint the blue
rooms White
and then it says you should do that
because blue won't Fade to White and it
says you don't need to paint the yellow
rooms because they will Fade to White
so it knew what I should do and it knew
why
and I was surprised that it could do
that much reasoning already and it's
kind of an amazing example because when
people critique these systems or they
say they're not going to do much they
say they're mad libs they're just word
completion but that is not word
completion to you is that thinking
yeah that's thinking and when people say
it's just autocomplete
there's a lot of a lot goes on in that
word just autocomplete if you think what
it takes to predict the next word you
have to understand what's been said to
be really good at predicting the next
word so people say it's just
autocomplete or it's just statistics
now there's a sense in which it is just
statistics that is every but in that
sense everything's just statistics it's
not the sense most people think of
Statistics as it keeps the counts of how
many times this combination of words
occurred and how many times that
combination it's not like that at all
it's inventing features and interactions
between features to explain what comes
next okay so if it's just statistics and
everything is just statistics is there
anything that we can do
obviously it's not humor maybe it's not
reasoning
is there anything that we can do that a
sufficiently well-trained large language
model with a sufficient number of
parameters and a sufficient amount of
compute could not do in the future if
the model is also trained on vision and
picking things up and so on then no but
is there anything that we can think of
and any way we can think in any
cognitive process that the machines will
not be able to replicate we're just a
machine we're a wonderful incredibly
complicated machine but we're just a big
neural net and there's no reason why an
artificial neural net shouldn't be able
to do everything we can do are we a big
neural net that is more efficient than
these new neural Nets we're building or
are we less efficient
it depends whether you're talking about
speed of acquiring knowledge and how
much knowledge you can acquire or
whether you're talking about energy
consumption so an energy consumption
we're much more efficient we're like 30
watts and one of these big language
models when you're training it you train
many copies of it each looking at
different parts of the data so it's more
like a megawatt
so it's much more expensive in terms of
energy but all these copies can be
learning different things from different
parts of the data so it's much more
efficient in terms of acquiring
Knowledge from data and it becomes only
more efficient because each system can
train each next system yes so let's get
to your critique so the the best
summarization of your critique came from
a conference at the Milken Institute
about a month ago and it was Snoop Dogg
and he said I heard the old dude who
created AI saying this is not safe
because the ai's got their own mind and
those going to start doing
their own
[Laughter]
accurate is that an accurate
summarization
um they probably didn't have mothers
[Laughter]
[Applause]
but the rest of what Dr Dog said is
correct hang on yes
all right so explain what you mean or
what he means and how it applies to what
you mean when they're going to start
doing their own what does that mean
to you okay so first I have to emphasize
we're entering a period of huge
uncertainty nobody really knows what's
going to happen and people whose opinion
I respect have very different beliefs
from me like Yann LeCun thinks everything's
going to be fine they're just going to
help us it's all going to be wonderful
but I think we have to take seriously
the possibility that if they get to be
smarter than us which seems quite likely
and they have goals of their own which
seems quite likely they may well develop
the goal of taking control and if they
do that we're in trouble
so okay so let's
let's go back to that in a second but
let's take Yann's position so Yann LeCun
was also one of the people who won the
Turing award and is also called The
Godfather of AI and I was recently
interviewing him and he made the case
he said look Technologies all
Technologies can be used for good or ill
but some technologies have more of an
inherent goodness and AI
has been built by humans by good humans
for good purposes it's been trained on
good books and good text it will have a
bias towards good in the future do you
believe that or not I think AI that's
been trained by good people will have a
bias towards good and they are being
trained by bad people like Putin or
somebody like that will have a bias
towards bad we know they're going to
make battle robots they're busy doing it
in many different defense departments
so they're not going to necessarily be
good since their primary purpose is
going to be to kill people
so you believe that the risks of the bad
uses of AI are whether they're more or
less than the good users of AI are so
substantial they deserve a lot of our
thought right now certainly yes for
lethal autonomous weapons they deserve a
lot of our thought well let's okay let's
stick on lethal autonomous weapons
because one of the things in this
argument
is that you are one of the few people
who is really speaking about this as a
risk a real risk explain your hypothesis
about why
super powerful AI combined with the
military could actually lead to more and
more Warfare
okay I don't actually want to answer
that question
um
there's a separate question even if the
AI isn't super intelligent yeah if
defense departments use it for making
battle robots it's going to be very
nasty scary stuff
and it's going to lead even if it's not
super intelligent and even if it doesn't
have its own intentions it just does
what Putin tells it to
um it's going to make it much easier for
example for rich countries to invade
poor countries at present there's a
barrier to invading poor
countries willy-nilly which is you get
dead citizens coming home
if they're just dead battle robots
that's just great the military
industrial complex would love that
so you think that because I mean it's
sort of a similar argument that people
make with drones if you can send a drone
and you don't have to send an airplane
with a pilot you're more likely to send
the Drone therefore you're more likely
to attack if you have a battle robot
it's that same thing squared yep and
that's your concern that's my main
concern with battle robots it's a
separate concern from what happens with
super intelligent systems taking over
for their own purposes
before we get to super intelligent uh
systems let's talk about some of your
other concerns so in the Litany of
things that you're worried about you
obviously we have battle robots there's
one you're also quite worried about
inequality tell me more about this
so it's fairly clear it's not certain
but it's fairly clear that these big
language models will cause a big
increase in productivity
so there's someone I know who answers
letters of complaint for a Health
Service yeah and he used to write these
letters himself and now he just gets
ChatGPT to write the letters and it
takes one-fifth of the amount of time to
answer a complaint
so he can do five times as much work
and so they'll need five times fewer of him
um
or maybe they'll just answer a lot more
letters but they'll answer more letters
right or maybe they'll have more people
because they'll be so efficient right
more productivity leads to more getting
more done I mean this is maybe not this
is an unanswered question but what we
expect in the kind of society we live in
is that if you get a big increase in
productivity like that the wealth isn't
going to go to
um the people who are doing the work or
the people who get unemployed it's going
to go to making the rich richer and the
poor poorer and that's very bad for
society definitionally or you think
there's some feature of AI that will
lead to that no it's not to do with AI
it's just what happens when you get an
increase in productivity
particularly in a society that doesn't
have strong unions but now a there are
many economists who would take a
different position and say that over
time and if you were to look at
technology right we went from horses and
horses and Buggies and the horses and
Buggies went away and then we had cars
and oh my gosh the people who drove the
horses lost their jobs and ATMs came
along and suddenly bank tellers no
longer need to do that but we now employ
many more bank tellers than we used to
and we have many more people driving
Ubers than we had people driving horses
so the argument might an economist would
make to this would be yes there will be
churn and there will be fewer people
answering those letters but there'll be
many more higher cognitive things that
will be done how do you respond to that
I think the first thing I'd say is a
loaf of bread used to cost a penny then
they invented economics and now it costs
five dollars
so I don't entirely trust what
economists say particularly when they're
dealing with a new situation that's
never happened before right and super
intelligence would be a new situation
that never happened before but even
these big chat Bots that are just
replacing people whose job involves
producing text that's never happened
before and I'm not sure how they can
confidently predict that more jobs will
be created than the number of jobs lost
I'll just have a little side note that
in the green room I introduced Jeff to I
have two of my three children are here
Alice and Zachary they're somewhere out
here and uh he said to Alice he said are
you going to go into media and then he
said well I'm not sure media will exist
and then Alice was asking what should I
do and you said Plumbing yes now explain
um
um I'm all I mean we have a number of
plumbing problems at our house would be
wonderful if they uh were able to put in
a new sink explain what jobs a lot of
young people out here not just my
children but thinking about what careers
to go into what are the careers they
should be looking at what are the
attributes of them I'll give you a
little story about being a carpenter
if you're a carpenter it's fun making
furniture
but it's a complete dead loss because
machines can make furniture if you're a
carpenter what you're good for is
repairing furniture or fitting things
into awkward spaces in old houses making
shelves in things that aren't quite
Square
so the jobs that are going to survive AI
for a long time are jobs where you have
to be very adaptable and physically
skilled and Plumbing's that kind of a
job
because manual dexterity is hard for a
machine to replicate it's it's still
hard and I think it's going to be longer
before they can be really dexterous and
get into awkward spaces
um that's gonna take longer than being
good at answering text questions but
should I believe you because when we
were on stage four years ago you said
reasoning as long as somebody has a job
that focuses on reasoning they'll be
able to last doesn't isn't the nature of
AI such that
we don't actually know where the next
incredible Improvement in performance
will come maybe it will come in manual
dexterity yeah it's possible
so actually let me let me ask you a
question about that so do you think when
we look at Ai and we look at the next
five years of AI the most impactful
improvements we'll see will be in large
language models and related to large
language models or do you think it will
be in something else
I think it'll probably be multimodal
large models so they won't just be
language models they'll be doing Vision
um hopefully they'll be analyzing videos
so they were able to train on all of the
YouTube videos for example and you can
understand a lot
um by from things other than language
and when you do that you need less
language to reach the same performance
so the idea they're going to be
saturated because they've already used
all the language there is or all the
language is easy to get hold of that's
less of a concern if they're also using
lots of other modalities I mean this
gets at one of the another argument that
Yan your fellow Godfather of AI makes is
that language is so limited right
there's so much information that we're
conveying just beyond the world in fact
I'm gesturing like mad right which
conveys some of the information as well
as the lighting and all this so your
view is that may be true language is a
limited Vector for information but soon
it will be combined with other vectors
absolutely
um it's amazing what you can learn from
language alone but you're much better
off learning from many modalities small
children don't just learn from language
alone right so if you were if your
principal role right now was still
researching AI finding the next big
thing you would be doing multimodal Ai
and trying to attach say visual AI
systems to text-based AI systems yes
which is what they're doing now at
Google Google is making a system called
Gemini which was talked about
a few days ago and uh you're right
it's a multimodal model yeah well let
me talk about actually something else at
Google so while you were there
Google invented the Transformer Network
or invented the arc Transformer
architecture generative pre-trained
Transformers
when did you realize that that would be
so Central and so important
it's interesting to me because it's this
paper that comes out in 2017 and when it
comes out it's not as though
firecrackers are left you know shot into
the sky it's six years later five years
later that we suddenly realized the
consequences and it's interesting to
think what are the other papers out
there that could be the same in five
years so with Transformers it was really
only a couple of years later when Google
developed BERT so BERT made it very
clear Transformers were a huge
breakthrough
um I didn't immediately realize what a
huge breakthrough they were
uh
and I'm annoyed about that it took me a
couple of years to realize well you
never made it clear the first the first
time I ever heard the word Transformer
was talking to you on stage and you were
talking about Transformers versus
capsules and this was right right after
right after it came out let's talk about
one of the other critiques about
language models and other models which
is
soon I mean in fact probably already
they've absorbed all the organic data
that has been created by humans if I
create an AI model right now and I train
it on the Internet it's trained on a
bunch of stuff mostly stuff made by
humans but a bunch of stuff made by AI
right yeah and you're gonna you're gonna
keep training AIs on stuff that has been
created by AIs whether it's text-based
language model or whether it's a
multimodal language model
will that
lead to the inevitable Decay and
Corruption as some people argue or is
that just
you know a thing we have to deal with or
is it as other people in the AI field
the greatest thing for training AIS and
we should just use synthetic data in AI
okay I don't actually know the answer to
this technically
um I suspect you have to take
precautions so you're not just training
on data that you yourself generated or
the some previous version of you
generated
um I suspect it's going to be possible
to take those precautions although it'd
be much easier if all fake data was
marked fake
um there is one example in AI where
training on stuff from yourself helps a
lot so if you don't have much training
data
or rather you have a lot of unlabeled
data and a small amount of label data
you can train a model to predict the
labels on the label data and then you
take that same model
and train it to predict labels for
unlabeled data
and whatever it predicts you tell it you
were right
and that actually makes the model work
better how on Earth does that work
um
because on the whole it tends to be
right I
it's complicated it was analyzed much
better many years ago for acoustic
modems they did the same trick so let
me so listening to this I've had this
realization on stage
you're a man who's very critical of
where we're going Killer Robots income
inequality
you also sound like somebody who loves
this stuff yeah I love this stuff
how could you not love making
intelligent things
so
let me get to maybe the the most
important question for the audience and
for everyone here
we're now at this moment where a lot of
people here love this stuff and they
want to build it and they want to
experiment
but we don't want negative consequences
we don't want increased income
inequality I don't want media to
disappear what is the
what are the choices and decisions and
things we should be working on now
to maximize the good to maximize the
creativity but to limit the potential
Harms
so I think to answer that you have to
distinguish many kinds of potential harm
so I'll distinguish like six of them for
you please there's bias and
discrimination yep
that is present now
um it's not one of these future things
we need to worry about it's happening
now
but it is something that I think is
relatively easy to fix compared with all
the other things if you make your target
not be to have a completely unbiased
system but just have a system that's
significantly less bias than what it's
replacing
so a person you have old white men
deciding whether young black women
should get mortgages and if you just
train on that data you'll get a system
that's equally biased
but you can analyze the bias you can see
how it's biased because it won't change
its Behavior you can freeze it and then
analyze it and that should make it
easier to correct for bias so okay
that's bias and discrimination I think
we can do a lot about that and I think
it's important we do a lot about that
but it's doable
the next one is battle robots that I'm
really worried about because defense
departments are going to build them
and I don't see how you could stop them
doing it
um something like a Geneva Convention
would be great
but those never happened until after
they've been used with chemical weapons
they didn't happen until after the first
World War I believe
and so I think what may happen is people
who use battle robots will see just how
absolutely awful they are and then maybe
we can get an International Convention
to prohibit them
so that's two I mean you could also tell
the people building the AI to not sell
their equipment to the military
you could try try Okay number three
the military has lots of money
number three there's joblessness yeah
you could try and do stuff to make sure
the increase in productivity some of
that extra Revenue that comes from the
increase in productivity is going goes
to helping the people who remain jobless
if it turns out that there aren't as
many jobs created as destroyed mm-hmm
that's a question of social policy and
what you really need for that is
socialism
we're in Canada so you can say
socialists
[Music]
um
number four would be the warring Echo
Chambers due to the big companies
wanting you to click on things and make
you indignant and so giving you things
that are more and more extreme and so
you end up in this Echo chamber where
you believe these crazy conspiracy
theorists if you're in the other Echo
chamber or you believe the truth if
you're in my echo chamber
um
that's partly to do with the policies of
the company so maybe something could be
done about that but that would I mean
that is a problem that exists it existed
prior to large language models and in
fact large language models could reverse
it
maybe I mean it's an open question of
whether they can make it better or
whether they make that problem worse
yeah it's a problem to do with AI but
it's not to do with large language oh is
it a problem it's a problem to do with
AI in the sense that there's an
algorithm using AI trained on our
emotions that then pushes Us in those
directions okay
all right so that's number four
um there's the existential risk which is
the one I decided to talk about because
a lot of people think is a joke right so
there's an editorial in nature yesterday
where they basically said I'm
fear-mongering about the existential
risk is distracting attention from the
actual risks so they compared
existential risk with actual risks
implying the existential risk wasn't
actual
um
I think it's important that people
understand it's not just Science Fiction
it's not just fear-mongering it is a
real risk that we need to think about
and we need to figure out in advance how
to deal with it
um so that's five and there's one more
and I can't think what it is how do you
have a list that doesn't end on
existential risk I feel like that should
be the end of the list no that was the
end but I thought if I talked about
existential risk I'd be able to remember
the missing one but I couldn't all right
well let's talk about existential risk
what exactly explain exactly
existential risk how it happens or
explain as best you can imagine it what
it is that goes wrong that leads us to
Extinction or disappearance of humanity
as a species okay at a very general
level
if you've got something a lot smarter
than you that's very good at
manipulating people
just at a very general level are you
confident people will stay in charge
and then you can go into specific
scenarios for how people might lose
control even though they're the people
creating this and giving it its goals
and one very obvious scenario is
if you were if you're given a goal and
you want to be good at achieving it
what you need is as much control as
possible
so for example if I'm sitting in a
boring seminar and I see a little dot of
light on the ceiling
and then suddenly I noticed it when I
move that dot of light moves
I realize it's the Reflection from my
watch the sun is bouncing off my watch
and so the next thing I do is I don't
start listening to the boring seminar
again I immediately try and figure out
how to make it go this way and how to
make it go that way and once I got
control of it then maybe I'll listen to
the seminar again we have a very strong
built-in urge to get control and it's
very sensible because the more control
you get the easier it is to achieve
things and I think AI will be able to
derive that too it's good to get control
so you can achieve other goals wait so
you actually believe that
getting control will be an innate
feature of something that the AIs are
trained on us right they act like us
they think like us because
the neural architecture makes them like
our human brains and because they're
trained on all of our outputs so you
actually think that getting control of
humans will be something that the AIs
almost aspire to
no I think they'll derive it as a as a
way of achieving other goals I think in
us it's innate I think
I'm very dubious about saying things are
really innate but I think the desire to
understand how things work
is a very sensible desire to have and I
think we have that
so we have that and then AIs will
develop an ability to manipulate us and
control us in a way that
we can't respond to right that the
manipulative AIS and even though
good people will be able to use equally
powerful AIs to counter these bad ones
you believe that we still could have an
existential crisis yes
it's not clear to me I mean yeah makes
the argument that
um the good people will have more
resources than the bad people
um I'm not sure about that and that good
AI is going to be more powerful than bad
Ai and good AI is going to be able to
regulate bad Ai and we have a situation
like that at present right where you
have people using AI to create spam then
you have people like Google using AI to
filter out the spam and at present
Google has more resources and the
Defenders are beating the attackers but
I don't see that it'll always be like
that I mean even in cyber warfare where
you have moments where it seems like the
criminals are winning and sometimes
where it seems like the Defenders are
winning so you believe that there will
be a battle like that over control of
humans by super intelligent artificial
intelligence it may well be yes and I'm
not convinced that
um good AI That's trying to stop bad AI
getting control will win Okay so
all right so before this existential
risk happens before bad AI does this
we have a lot of extremely smart people
building a lot of extremely important
things what exactly can they do
to most help limit this risk
so one thing you can do is before the
AI gets super intelligent you can
do empirical work into how it goes wrong
how it tries to get control whether it
tries to get control we don't know
whether it would but before it's smarter
than us I think the people developing it
should be encouraged to put a lot of
work into understanding how it goes
might go wrong understanding how it
might try and take control away and I
think the government could maybe
encourage the big companies developing
it to put comparable resources maybe not
equal resources but right now there's 99
very smart people trying to make it
better and one very smart person trying
to figure out how to stop it taking over
and maybe you want it more balanced and
so this is in some ways your role right
now the reason why you've left Google on
good terms but you want to be able to
speak out and help participate in this
conversation so more people can join
that one and not the 99. yeah I would
say it's very important for smart people
to be working on that but I'd also say
it's very important not to think this is
the only risk there's all these other
risks and I've remembered the last one
which is fake news
um so it's very important to try for
example to Mark everything that's fake
as fake whether we can do that
technically I don't know but it'd be
great if we could governments do it with
counterfeit money they won't allow
counterfeit money because that reflects
on their sort of central interest
um
they should try and do it with AI
generated stuff I don't know whether
they can but I give so give one we're
out of time give one specific to do
something to read a thought experiment
one thing to leave the audience with so
they can go out here and think okay I'm
gonna do this
AI is the most powerful thing we've
invented in perhaps in our lifetimes and
I'm going to make it better to make it
more likely it's a Force for good in the
Next Generation
so how could they make it more likely be
a force for good yes one one final
thought for everyone here
I don't I actually don't have a plan for
how to make it more likely to be good
than bad sorry
um I think it's great that it's being
developed because we didn't get to
mention the huge numbers of good uses of
it yeah like in medicine in climate
change and so on so I think progress now
is inevitable and it's probably good
but we seriously ought to worry about
mitigating all the bad side effects of
it and worry about the existential
Threat all right thank you so much what
an incredibly thoughtful inspiring
interesting phenomenal mark thank you to
Geoffrey Hinton thank you thank you Jeff
so great