The Problem with Human Specialness in the Age of AI | Scott Aaronson | TEDxPaloAlto
Summary
TLDR Scott Aaronson, on leave from a career in quantum computing to work on AI safety at OpenAI, surveys how scaled-up neural networks produced the current AI revolution, where continued progress might lead, his work on watermarking language-model outputs, and what uniquely human creativity might mean in a world of abundant AI-generated work.
Takeaways
- 🌟 AI has made astonishing progress over the past few years, and the speaker is exploring how theoretical computer science can help prevent AI from going wrong.
- 🚀 The core ideas behind the current AI revolution have been known for generations, but were long dismissed as impractical.
- 💡 AI's advance riding on Moore's Law (the exponential growth of computing power) was something most experts refused to believe, yet it has now become reality.
- 🤖 The long-predicted "moment of magic," when computers would understand language and become intelligent, may now have arrived.
- 🧠 It is important to think both about AI's potential dangers (for example, AI displacing humans in destructive ways) and about what happens if things go right.
- 📈 AI progress may fizzle out, but if it continues at its recent pace, AI with human-level ability across the board could appear within 10 to 20 years.
- 🎓 AI's rise forces a rethink of education and the future of work: now that AI can write short stories on the same themes as students do, what does learning mean?
- 🖋️ Projects are under way to watermark the outputs of language models such as GPT, so that AI-generated text can be distinguished from human writing.
- 🎨 AI-generated creative work runs into what the speaker calls the "AI abundance paradox": the value of a kind of product collapses once it can be mass-produced.
- 🌌 Whether an AI proposing new musical directions could ever wield the influence of the Beatles leads to questions about uniquely human creativity.
- 🔄 As an AI-safety proposal, the speaker suggests indoctrinating AIs in a "religion" that venerates unique, unclonable human creativity and intelligence.
Q & A
What problem is a quantum computing expert working on at OpenAI?
-Thinking theoretically about how to prevent AI from destroying the world.
What was the main unexpected factor in the current evolution of AI technology?
-Simple ideas, combined with massive computational resources, produced AI that rivals humans at language understanding and other tasks.
Why were neural networks once considered unimpressive?
-In the past they did not work very well, and it was believed that simply scaling them up would not solve the problem.
What are the main theories and techniques underlying the modern AI revolution?
-Neural networks, backpropagation, and gradient descent, among others.
What capabilities might future AI have compared to the human brain?
-It may come to perform any intellectual task as well as or better than humans, including solving open problems in math and science.
If AI progress does not "fizzle out," what future can be imagined?
-AI could surpass human intelligence and come to perform essentially every task better than humans.
What is the "game over thesis"?
-The thesis that, given enough examples of success and failure, AI will match the best human performance within a matter of years.
Why do people find it hard to track AI's progress in real time?
-Because the goalposts keep moving: as soon as AI achieves one goal, people immediately set the next.
What is one AI-safety project under way at OpenAI?
-A project to watermark the outputs of GPT and other large language models.
What is the difference between AI creativity and human creativity?
-Human creativity is a unique, unrepeatable, one-time event; AI can reproduce creative work endlessly, which limits that uniqueness.
Outlines
🔬 From Quantum Computing to the Future of AI
The speaker describes moving from a career in quantum computing to thinking, at OpenAI, about how theoretical computer science might prevent AI from destroying the world. He focuses not only on AI's positive prospects but also on its risks, asking what humanity's role would be if AI works exactly as intended. He also notes how the rapid progress of recent years came about: by scaling up techniques once dismissed as simple networks that didn't work.
🌐 Future Scenarios for AI's Evolution
If AI's progress continues, it could eventually exceed human ability, though whether that would be good or bad for humanity is debatable. The possibilities that progress fizzles out, or that humans become irrelevant or useless to AI, are also considered, along with how AI will affect human work and creativity, and how its ways of understanding and pursuing goals differ from ours.
🔏 Watermarking AI Outputs and Ensuring Safety
As a safety measure, a scheme is proposed for watermarking the outputs of GPT and other large language models, making AI-generated text distinguishable from human writing. The speaker stresses that this is not a perfect or definitive solution. He also raises philosophical questions about how education and creative work will change, and what value humans will have in a world where they compete with AI.
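The watermarking approach described in the talk (replace the model's sampling randomness with keyed pseudorandomness, so that the choice of tokens carries a secret statistical signal) can be illustrated with a toy sketch. This is a speculative reconstruction, not OpenAI's actual implementation: the vocabulary, key, PRF construction, and scoring rule below are all illustrative assumptions.

```python
import hashlib
import math

# Toy vocabulary and key -- made-up stand-ins, not anything a real deployment uses.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
KEY = b"secret-watermark-key"

def prf(key: bytes, context: list, token: str) -> float:
    """Keyed pseudorandom value in (0, 1) derived from the last few context
    tokens and a candidate next token. Anyone holding the key can recompute it."""
    msg = "|".join(context[-3:]) + "|" + token
    h = hashlib.sha256(key + msg.encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2.0 ** 64 + 2)

def sample_watermarked(probs: dict, context: list) -> str:
    """Pick the token maximizing r_i ** (1 / p_i) (the exponential-minimum /
    Gumbel-style trick): averaged over random keys this samples exactly from
    `probs`, but for a fixed key it deterministically favors high-r tokens."""
    best_tok, best_score = None, -1.0
    for tok, p in probs.items():
        if p <= 0:
            continue
        score = prf(KEY, context, tok) ** (1.0 / p)
        if score > best_score:
            best_tok, best_score = tok, score
    return best_tok

def detect_score(key: bytes, tokens: list) -> float:
    """Sum -log(1 - r_t) over the text. Watermarked text picks tokens with
    large r_t, so its score rises well above the ~1-per-token human baseline."""
    return sum(-math.log(1.0 - prf(key, tokens[:i], tokens[i]))
               for i in range(1, len(tokens)))
```

A detector holding the key compares `detect_score` against a threshold calibrated to the text length; without the key the output looks like an ordinary sample, and a determined user can still paraphrase the signal away, which is why the talk stresses that this is not a perfect solution.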
🤔 The Future of Human and AI Creativity
This section explores whether AI could imitate or even surpass humans in art and creativity. The value of AI-generated art may hinge on its scarcity, forcing a re-evaluation of what uniquely human creativity and choice mean. It also considers a future in which human minds could be digitized and copied, the feasibility of such technology, and the ethical questions it would raise.
Keywords
💡Quantum Computing
💡AI Safety
💡Neural Networks
💡Moore's Law
💡GPT (Generative Pre-trained Transformer)
💡AI Abundance Paradox
💡Human Uniqueness
💡AI and Society
💡Watermarking AI Outputs
💡Education in the AI Era
💡AI and Creativity
Highlights
The speaker transitions from a career in Quantum Computing to working at OpenAI, focusing on preventing AI from causing global harm.
The speaker is still exploring how theoretical computer science can contribute to AI safety and considers the implications if AI goes right.
AI has advanced to a level where it resembles the science fiction technology from Star Trek, with some tasks being performed successfully.
The core ideas behind the current AI revolution have been known for generations, including neural networks and backpropagation.
Despite previous skepticism, the speaker acknowledges the exponential growth in computational power and its impact on AI capabilities.
The speaker discusses the potential future where AI can solve the greatest unsolved problems in math and science, such as proving the Riemann Hypothesis.
AI's potential to become more intelligent than humans raises ethical questions about how we treat and utilize AI.
The speaker and his colleague Boaz Barak attempted to map out the major possibilities for AI's impact on society.
AI progress could fizzle out due to diminishing returns, cost, or lack of training data.
If AI continues its rapid progress, it may reach a point where it outperforms humans in almost every task.
The speaker discusses the 'game over thesis,' which suggests that AI will eventually match or exceed human performance in any task with clear success metrics.
AI's ability to produce human-like outputs raises concerns about the future of jobs and human roles in society.
The speaker has worked on watermarking AI outputs to distinguish between human and AI-generated content.
The concept of AI abundance paradox is introduced, where the value of AI-generated art is diminished by its replicability.
The speaker ponders the uniqueness of human creativity and the potential for AI to develop its own new directions in art and music.
The speaker discusses the possibility of humans being backed up and restored from backups, questioning the nature of human identity.
An AI safety proposal is suggested, which involves teaching AI to value and protect the unique, unclonable aspects of human creativity and intelligence.
The speaker reflects on the potential future where AI-generated content is so abundant that it redefines our understanding of value and originality.
Transcripts
[Music]
[Applause]
[Music]
okay uh thanks a lot so uh after um you
know a career spent mostly in Quantum
Computing I'm now sort of Moonlighting
at OpenAI uh they asked me to uh think
about how theoretical computer science
could be used to help uh uh prevent AI
from destroying the world uh I haven't
figured it out yet uh I do still have
another 6 months uh so um but uh you
know I F I find myself thinking more and
more uh not just about uh uh uh how do
we prevent this from going wrong but
also about uh uh what what if it goes
right what if it goes exactly like it's
supposed to and uh can just produce any
intellectual product as well as we
can or better and you know what are we
for in the resulting world uh so um
well you know I don't I don't have to
belabor for this audience I think you
know what has happened in AI over the
past uh few years uh you know we now uh
uh to some approximation you know have
the science fiction machine from Star
Trek right you talk to it in English you
ask it what to do some percentage of the
time it does it and you know this is uh
uh uh despite you know how unlikely this
seemed to almost all of us 5 years ago
you know it's so unlikely that that many
many people are still in denial about it
okay but uh I think the even more
surprising thing than what has happened
is how it has happened uh so you know
what maybe not everyone appreciates is
that the core ideas uh that are powering
the current AI Revolution uh uh are
things that have been known for
Generations okay so uh uh I mean neural
networks uh back propagation gradient
descent uh uh prediction via regression
you know I learned all this stuff when I
was an undergrad in computer science in
the '90s okay but we also learned then
that you know neural Nets were just not
that impressive they didn't work that
well and uh all of the wisest people uh
said uh uh all said well you know if you
just take something that doesn't work
and scale it up by a factor of a million
it's still not going to work you know
the the the the true key to AI is going
to be to deeply deeply understand the
nature of intelligence and you know once
we've done that then we can
uh uh we'll be able to see why uh uh a
human level AI could have fit on a
floppy disc uh and um you know there
were just a few Nut Cases like Ray Kurzweil
who would go around showing these graphs
that would say well look uh the amount
of compute that you can do you know per
second per dollar is on an exponential
trajectory right that's one form of Moore's
Law and uh uh if you just extrapolate
forward then by the 2020s or so uh there
should be about as much compute
available as some crude estimate of what
the human neocortex is doing and that is
when we should expect that magic will
happen okay and and and computers will
suddenly understand language and be
intelligent and uh almost all of us said
you know that sounds like the stupidest
thesis that we've ever heard like you
have no you know theoretical principle
to believe that you know just the the
sheer amount of compute is is is is is
alone sufficient uh now I'm a firm
believer that uh um you know that like
one of the the the key uh dicta of
science is that you let the world tell
you when you're wrong okay and you don't
make up uh some elaborate justification
for why you actually weren't wrong so I
think that's the situation
here uh so you know but but you know
this is Moore's law hasn't ended right so
uh uh you know we're still getting more
uh uh more and more compute so you might
wonder where is this going uh uh what
will GPT-8 be able to do you know will I
just be able to ask it to solve any of
the greatest unsolved problems in math
or science like prove the Riemann
hypothesis and it'll say sure I can help
you with that and just spit out a proof
um you know by the way I I asked uh uh
GPT to cooperate with me in illustrating
that uh which it happily did but then
hasten to add that it's only kidding and
that the Riemann hypothesis remains an
open problem uh so you know this is one
possibility but then you know what about
beyond that I mean uh what if you know
as some people predict uh uh AI would uh
become to us as we are to chimpanzees
right well you know how well do we treat
chimpanzees right and so then you know
you're led to the the uh the the
Terminator scenario of course but you
know it's been amazing to watch you know
I've known the little subculture of
nerds on the internet who have uh
worried about AI Doom for for 20 years
uh uh and uh just within the last year
because of ChatGPT this went to
something that is discussed in the White
House Press briefing and in
Congressional hearings uh but you know
uh uh okay uh uh uh you know an AI
wouldn't necessarily have to hate us or
or want to kill us we might just you
know be in the way or irrelevant to
whatever alien goal it has okay but I
think you know that's not the only
possibility that is on the table here so
um my colleague Boaz Barak who's uh now
also on sabbatical at OpenAI uh and I uh
uh uh tried a while ago to make a
decision tree of the sort of major
possibilities being discussed now so uh
you know the progress in AI that we've
seen over the last few years could
fizzle out right there might be a
diminishing returns to you know more and
more scale or we might you know uh uh uh
find it too expensive to get the
necessary compute uh or we might run out
of training data you know we're already
sort of running out I mean there is all
of YouTube and TikTok and so forth that
you could still feed into the maw but
that might just make the AI Dumber
rather than smarter right so uh okay but
then if it doesn't fizzle out and if it
just continues you know the way it has
over the last few years then you have to
imagine that it's just a matter of uh uh
you know what 10 years 20 years how many
you know until it can do just about
everything as well as we can and uh and
and what then you know does civilization
recognizably continue with sort of
humans in charge uh and and whether it
does or it doesn't is that good or is it
bad uh from our point of view or you
know maybe it depends who you ask uh
so um you know now like a lot of people
don't want to have this discussion they
sort of they still sort of I think don't
want to speculate about these things
including you know many distinguished
colleagues of mine a lot of them are
immersed in what I like to call the the
religion of just-ism right so they will say
look ChatGPT you know despite however
impressive it might look uh uh you know
it's it's it's actually not because we
really we know that it is just a
stochastic parrot it is just a next
token predictor it is just a giant
function approximator it is just a uh a
huge auto complete right and and I I
always want to say to these people okay
and what are you like aren't you just a
bundle of neurons and synapses obeying
the laws of physics right and what about
your mom right you know I mean uh um you
know I mean if you're going to you know
use these reductionist or deflationary
ways of talking then you know at least
you you have to be symmetrical about it
I think so uh a closely related tendency
is well known in AI is the sort of
endlessly moving goalposts you know I
still remember when uh Deep Blue beat
Kasparov at chess and you know very smart
people said okay but this is not
impressive because chess is really just
a search problem Wake Me Up When
computers can beat you know the human uh
uh Grand Masters at go okay uh because
that's just an infinitely deeper and
richer game okay and then we had AlphaGo
and then people said okay but fine it's
just a game everyone expected that this
would happen you know wake me up when uh
uh large language models can win a gold
medal in the international math Olympiad
okay so I actually have a bet with a
colleague that that will happen by
2026 uh you know there was some progress
on it just this past month uh now I
might be wrong it might happen by by
2036 instead but it seems clear that
this is just a ma a question of years at
this point and you know after uh uh uh
AI can you know get gold medals and you
know math competitions okay which goal
poost uh uh should we have next uh so
you know we might even be tempted to
formulate a general thesis here which
I'll call you know the game over thesis
which would say that given any task with
a reasonably objective metric of success
or failure games competitions uh and on
which an AI can be given suitably many
examples of success and failure it's
only a matter of years before not only
AI but AI on our current Paradigm will
match or beat the best human performance
now that might not exhaust everything
that we care about uh uh uh you know
there might be things that are not
quantifiable in this way okay but but if
even this is true I think that already
forces us to some uncomfortable places
in you know thinking about kind of well
you know uh uh uh uh what do we tell our
kids about you know uh uh what kind of
jobs are going to be available for them
and and sort of what is our role in the
world uh so um and and you know it's
clear that sort of already what what you
know ChatGPT and and DALL-E and so forth
can already do has sort of uh uh created
uh for real for us the sort of uh uh
Blade Runner scenario where you know we
are confronted with the problem of
distinguishing uh human outputs from AI
ones so you know one of the main safety
projects that I've worked on uh during
my time at OpenAI uh has been a scheme
for watermarking the outputs of GPT and
other language uh other large language
models uh what this means is uh sort of
replacing the randomness in the models
by pseudorandomness in a way that
inserts a uh a secret statistical signal
into you know the choice of words or
tokens uh by which you can later uh
detect that yes this was generated by
ChatGPT this did not come from a human
okay now I should caution you that uh
this has not been deployed yet uh so
OpenAI along with Google and Anthropic
uh has been moving kind of slowly and
deliberately toward uh
uh deployment of of text water marking
um even uh uh if and when uh it is
deployed you know someone who is
sufficiently determined will be able to
evade it uh you know just like with
schemes for uh uh preventing piracy of
you know music or software or whatever
uh so you know it's not a perfect
solution but I hope that this and other
measures uh will eventually be able to
you know make it less convenient for
students to use ChatGPT to cheat on
their homework uh uh uh one maybe one of
the most common misuses in the world
right now or uh for people to use it for
spam propaganda impersonation uh all
sorts of other bad things like that okay
but when I talked to my colleagues about
watermarking I was surprised that uh
often you know they had an objection to
it that was not technical at all it
wasn't about how well can it work it was
about uh well should we still even be
giving homework at all right I mean you
know if if uh uh ChatGPT can write the
term papers just as well as the students
can right and that's still going to be
true after the students graduate like
you know what's the point why are we
still teaching these skills uh you know
and and I think about this even in terms
of my 11-year-old daughter for example I
mean she loves writing short stories now
ChatGPT can also write short stories on
the same themes you know like a a an
11-year-old girl who gets recruited to a
magical boarding school but which is
totally not Hogwarts and has nothing to
do with Hogwarts right or you know whatever
other theme like that okay uh now you
could ask the question uh you know if
you look at like today's cohort of
11-year-olds are they ever going to be
better writers than GPT or you know it's
a it's a it's a race right which one is
going to improve faster um so uh
um uh so so you know you you know you
could imagine that you know even what we
think of as like the greatest products
of artistic genius you know the music of
The Beatles right in principle you know
you could have uh some AI model uh uh do
the same things okay but when you think
about that enough you start wondering
what do we mean what what would we even
mean by an AI that created music that
was as good as the Beatles right like uh
you know and then that forces you to ask
well well what made the Beatles so good
in the first place and you know I'm not
a music expert but roughly we could
decompose it into sort of two components
uh one being sort of new ideas about
what direction you know music ought to
go in and secondly technical execution
on those ideas okay now suppose you had
an AI where you know you just fed it the
Beatles' whole back catalog and then it
generated more songs that the Beatles
plausibly could have generated but
didn't you know that sounded kind of
like Hey Jude or yesterday or whatever
okay I think that you know most people
would if they saw that would just move
the goalposts okay they would say no
that doesn't really impress us right
this is just uh uh extrapolation and you
know like uh uh Schopenhauer said you
know Talent hits a Target that no one
else can hit but genius you know that
hits a Target that no one else can see
right and and and and you know the uh uh
what we want to see is the AI deciding
for itself to take music in this new
Direction uh so okay now but now you
know imagine that we had that as well
you know imagine that you had an AI
where every time you hit the
refresh button in your browser window
you got a brand new like radically new
Beatles-like direction that music could
have been taken in in the 1960s right
and each time you run it you just get
another sample from this probability
distribution okay you know even then
there's something kind of weird about
that I mean you know you could say The
Beatles were there at the right place
and time to pick a particular direction
and not only that but sort of drag all
of the rest of us along with them so
that our whole objective function
changed right we can't judge music
anymore except by a Beatles influence
standard just like we can't judge plays
except by a Shakespeare influence
standard right and and so now if you
know there's sort of what I like to call
an AI abundance Paradox right which is
as soon as you have an AI that can
produce a new artwork uh uh well you
know however good it is it can produce a
thousand similar artworks by just uh uh
running it more and more often can
always rewind and try again and uh so so
it sort of radically devalues you know
the worth of that kind of production
just like the price of gold would crash
if someone towed a 10 mile long golden
asteroid to the Earth right uh uh it
wouldn't actually be be be be worth you
know what you what you thought it was
okay and and so you know you could say
well well at least humans will always
have this sort of Advantage okay that uh
at least we have the the advantage of
being frail and there's only you know
there's only one of us you can't back us
up and run us over and over on the same
input you know when we make a decision
we really mean that decision right we're
sticking with it and that's the only one
that you're going to get out of us uh
which is sort of a weird place to you
know stake our claim of human
specialness on but that might be the
place that we're forced to okay but you
know as soon as I've said that I have to
confront a sort of exotic objection uh
which is well is it really true that
humans uh uh uh cannot be rewound cannot
be copied cannot be you know saved as
backups and so forth I mean it is
possible you know some people think so
that that our own cognition you know is
happening in some sort of digital
computation layer you know in the
neurons and synapses and once technology
once brain scanning technology gets good
enough you know it'll be uh uh the next
iteration of Neuralink or whatever right
we can all just back ourselves up to the
cloud we can you know rewind ourselves
uh uh restore from back up you know and
then that leads to all these strange uh
questions like uh would you agree to
have yourself faxed to Mars you know
just sent as information reconstituted
there uh the original meat version of
you will just be painlessly euthanized
don't worry about it right uh or um you
know would you uh uh uh back up your
brain before you go on a dangerous trip
um so you know I don't know whether
these things will ultimately be possible
right it's a question about uh uh the
ultimately the biology and the physics
of the human brain right is just the
sort of digital layer the relevant one
or is our identity sort of bound up with
the sort of unclonable uh uh uh you know
not fully knowable you know chaotic
details of the uh uh uh molecules you
know inside of the you know individual
sodium ion channels and the neurons
right if you had to go all the way down
to the molecular level then the famous
no cloning theorem in quantum mechanics
would say well you can't make a perfect
copy right if you try to you're going to
have to make measurements that will you
know fail to tell you what you want and
even destroy the original copy that you
had okay so uh uh you know I don't know
whether our identity is sort of bound up
in these unclonable uh uh physical
degrees of freedom but you know even
even not knowing whether uh that's true
or not um you know I mean it does seem
like a difference between us and any
existing AI that we're sort of buffeted
around by chaos such that no external
agent can have at least as as far as we
know can have all the information
relevant to predicting our Behavior so
then to Circle all the way back to AI
safety this leads to a very exotic AI
safety proposal which is why don't we
just teach our AIS indoctrinate them in
a religion that venerates the universe's
unclonable ephemeral analog
loci of creativity and intelligence uh
wherever they might be found that says
protect them from destruction defer to
their preferences those are the ones
that matter because you know they're
they're the ones that sort of only get
the one chance uh now I don't know if
this is a good idea uh you know in a
different Universe maybe I fell in love
with a different idea but here I kind of
fell in love with this one and
unfortunately you don't get to back me
up and see a different one so all right
so
thanks