AI 2041: Ten Visions for Our Future | Kai-Fu Lee
Summary
TLDR: In this video, author Kai-Fu Lee joins host Steve Orlins for a lively discussion of his remarkable book on the future of AI, "AI 2041." The conversation spotlights Lee's creative approach and his insights into how AI will shape our future, ranging from realistic applications of AI to its potential dangers and the need for international cooperation. The dialogue offers the audience a deep look at where AI technology is headed and its impact on industries and societies worldwide.
Takeaways
- 📘 Steve Orlins enthusiastically recommends "AI 2041," a new book on AI co-authored by Kai-Fu Lee and Chen Qiufan.
- 🤖 The book introduces AI through science-fiction stories about realistic applications of AI and their social impact.
- 🌏 Each story features AI technology that is feasible within the next 20 years, set in a different part of the world.
- 🔬 Kai-Fu Lee provides commentary explaining each technology and assessing how realistic it is.
- 🚀 The applications of AI span a wide range, from natural language processing to quantum computing and drug discovery.
- 🌐 In U.S.-China relations, AI development is seen as a key area for strengthening cooperation and trust between the two countries.
- 🤔 AI will bring major changes to everyday work and may automate much routine labor.
- 🛡️ International cooperation may be limited in certain AI fields, such as quantum computing, because of their national-security implications.
- 📊 Governments need to provide adequate safety nets to address AI's impact on the labor market.
- 👥 Addressing the social inequality AI may create requires appropriate regulation and efforts to reduce bias in data.
Outlines
📘 Introducing "AI 2041" and Its Impact
Steve Orlins (President of the National Committee on U.S.-China Relations) introduces "AI 2041," co-authored by Kai-Fu Lee and Chen Qiufan, through a conversation with Lee. The book explores visions of AI's future through ten science-fiction stories about AI. Orlins praises the technique of delivering a message through storytelling and what Lee has accomplished with the book, noting that even with little prior knowledge of AI, he learned a great deal from it.
🤖 AI Applications and Their Future
Kai-Fu Lee describes how he chose the ten stories about the future of AI and how the technologies could be applied across different industries. He wanted to cover about 20 technologies, from natural language processing to quantum computing and drug discovery, and discusses how they apply to fields such as entertainment, healthcare, and work. He also touches on AI's implications for U.S.-China relations and the roles of the two countries in AI's development and application.
🌏 International Cooperation and Competition in AI
Lee discusses U.S.-China cooperation and competition in specific AI fields, exploring both civilian and military applications. He stresses the importance of cooperation on civilian uses of AI, while arguing that cooperation should be limited on technologies tied to national security. He also discusses the strategic importance of technologies such as quantum computing.
💡 Concerns About AI Misuse and Countermeasures
Kai-Fu Lee discusses the misuse of AI, particularly the potential threat from non-state actors. He points out that AI-powered autonomous weapons could become cheap and easily accessible, making them tools for terrorism, and emphasizes that international cooperation and regulation are needed to counter such threats.
🚀 Advanced Applications of AI and Ethical Challenges
Through the stories of "AI 2041," Lee examines advanced applications of AI technology and the ethical challenges that accompany them. He discusses both the positive impact AI could have and the risks and problems it could cause, and advocates for the ethical use of AI.
🌐 AI's Social Impact and Future Outlook
Lee talks about AI's impact on the labor market, particularly the automation of routine work and the resulting changes in employment. While focusing on the positive effects AI could bring to society, such as reduced working hours and higher productivity, he also considers how these changes will affect people.
🔍 The Future of AI-Driven Autonomous Driving and Regulatory Challenges
Kai-Fu Lee explains how autonomous vehicles could dramatically reduce traffic fatalities in the future. He discusses the benefits of AI-driven autonomous driving and the regulatory and ethical challenges it raises, including how regulators should respond during the transition period before the technology fully matures.
Keywords
💡AI
💡Machine Learning
💡Autonomous Weapons
Highlights
The idea of the book is to explain AI in plain language so anyone can understand it
A science-fiction writer agreed to constrain his imagination and write realistic stories
Autonomous weapons are highly susceptible to misuse by terrorists
AI can be biased, but with appropriate datasets it can be fairer than humans
AI is predicted to replace 50% of routine work
Countermeasures are needed for the unemployment AI will cause
Countermeasures are needed for the inequality AI will cause
Demand for translators will rise temporarily, but they will eventually be replaced by AI
AI assistants will gradually become the primary workers
Europe is trying to restrict AI development
In commercial AI, companies from different countries compete and cooperate
Autonomous vehicles will ultimately be safer than human drivers
Transcripts
okay well let us reward the people who
are prompt and the many
thousands of people who will view this
after we've done this program but i am
steve orlins president of the national
committee on u.s china relations and i'm
thrilled
absolutely thrilled to be joined by
kaifu lee my old friend
um and i'm even more thrilled to have
read this extraordinary book
a.i
10 visions for our future
2041
which kaifu and chen qiufan have
written and recently translated into
english one of the great things about my
job is i get to read books by friends
and colleagues
and this one was really a pleasure
more than a pleasure it was
absolutely enthralling
um
more
you know at the national committee what
we often try to do is find ways to
educate people and what i'm frequently
telling
my colleagues chinese officials american
officials is you educate through story
that you tell a story that delivers a
message and then you explain the message
that it's really a terrific political
technique on how to educate
and what kaifu has done with this book
is
and with
uh mr chen
who wrote the
fiction part of it and kaifu who wrote the
non-fiction part of it is tell these
absolutely compelling
science fiction stories related
to a.i
and then kaifu explains is this
realistic is this not realistic um how
does it work you know i loved ai
superpowers his last book which was on
the new york times
bestsellers list uh but this one is
really compelling if anyone on this
has not read it yet
read it it is truly a
wonderful experience one
which for me who doesn't know much about
ai it really educated me about ai and
what we should think about looking into
the future so i've always been an
admirer of kaifu now even more so even
more so we won't go over his bio
because if we talked about all the awards
and all the things
that he has done we wouldn't have time
uh for the program but he is the ceo of
sinovation ventures
um and as everybody knows his story from
ai superpowers started out at
carnegie mellon was at google microsoft
other companies and then started
sinovation ventures which invests in startup
companies so kaifu
um i've given a big pitch for the book
this is this is the cover
um
tell me it was so imaginative how did
you come up
with the idea
of kind of combining
fiction
with ai
analysis well thank you steve uh for for
having me on this uh uh webinar
uh the idea is that i believe it is
incredibly important for everyone to
understand what ai is and is capable of
and its potential dangers and how to fix
them and and yet many people find ai
intimidating because it seems like
rocket science but it really isn't
so i tried to explain it in plain
language in ai superpowers i think i had
some limited success and people told me
they and they thought they understood
some of ai actually with me explaining
it
in a non-fiction
but then still people were intimidated
so i thought well the only way to get
people to truly understand ai
is through
entertaining and engaging
storytelling which i personally cannot
do so i reached out to my friend um chen
qiufan also known as stanley chan
to ask if he would write the stories and
and he uh kindly agreed uh it is rather
uh unusual that a science fiction writer
who's used to
letting the imagination run wild
was willing to constrain his imagination
to what i paint as feasible
and unfeasible and only write about
what's feasible in the next 20 years but
he's done an amazing job and hopefully
the book delivers the engaging
aspect and draws people in who otherwise
might find ai to be intimidating and and
who might have been misinformed about ai
who can now get hopefully the right
picture and after each story
i give an explanation of the technology
what it can and cannot do the problems it
might introduce on society and how we
might deal with it so that's how it came
about and how did you arrive at the ten
stories and you know the different
kind of aspects of ai that must have
been difficult because you could think
about five thousand
yeah
it was um it was a puzzle we're putting
together a puzzle i wanted about 20
technologies covered from natural
language processing to quantum computing
um to
drug discovery i wanted about 20
technologies covered and
i wanted to see them covered from easy
to hard and i wanted to see them covered
applied to different industries like
entertainment
communications healthcare
and
work uh and uh etc so so that was my
puzzle and then stanley introduced
another puzzle he wanted to
have 10 stories take place in 10
different parts of the world
uh partly to make the stories more
interesting and partly to show that this
will impact all countries and all
industries so we mix these four puzzles
together and then we brainstorm possible
uh story lines and then um uh and then
he went off and and wrote the stories
and then i wrote the commentary that's
how the puzzle came together we didn't
quite cover every technology there were
a few
we wish we could get in but the puzzle
just didn't didn't fit
now it's because we're the national
committee on u.s china relations um and
our audience are basically people who
are looking at the u.s china
relationship what is the book
what is the message the book conveys
about u.s china relations
uh this book isn't in particular about
u.s china but it paints a world in which
we really need to work closely together
under more not less globalism with more
not less trust between countries because
our fates are very linked
for example
autonomous weapons can only be
regulated with uh cooperation from
countries
and
many of the governance ideas
can can become universal if people come
to a common understanding
and and also
technology advances were driven by china
and u.s
and
the scientists ought to work together
fortunately they still do so those are
some things one could read through the
stories
um and and also
uh in some of the background it is
clearly still portraying that
technologies coming from u.s and china
are the two most significant
technology superpowers that will
continue in 20 years so these are the
subtle aspects one could find in the
book but it's really not prominent
which areas
should the united states and china with
respect to ai be cooperating in which ones
should cooperation be limited and which
ones can we really not cooperate on i
mean i hear discussions in washington
that quantum computing is just we should
not really be cooperating in that
because it can be there's too many uh
there's too much military applicability
there so how do you kind of divide those
areas
right
um i i think
ai as a general omni-use technology deep
learning extensions uh
including uh
some of the more recent advances beyond
deep learning are pretty universal they
the the papers are published even with
source code and data
and
the chinese european american scientists
are already working together and then
the technologies are already applied to
industries so um so i think that is uh
the the horse has left the barn and and
it's becoming omni use applied to
industries uh more cooperation i think i
think would be very suitable
there are obviously
civilian and non-civilian applications
but the cooperation on civilian which is
a much larger part i think can and should
go on
specifically
the use of ai in climate in uh
healthcare ought to be less
controversial and potentially uh the use
for uh profit making for for for use in
financial industries and other
industries i think could also span
multiple countries
uh i think both countries will assert
that
on technologies that
relate to national
security or defense that's an area where
each country should develop its own and
probably uh you know europe and russia
will also want to develop their own
and um i think
autonomous weapons the development of
that um i think
will happen independently but they
should be regulated working together
and i think quantum computing i can see
the logic of why
having a quantum supremacy is important
to many countries and
because i think quantum computing uh
basically changes the paradigm of
computing and makes possible
things like breaking computer security
figuring out uh extremely fast
communications uh completely disrupting
every type of algorithm from ai
and so on so i think i can see
uh countries wanting to be
superior in quantum and and it's not a
technology that i think people are
inclined to work together with other
countries companies are doing it
and i think each company and probably
each country views it as an expensive
endeavor that would give it an advantage
in a disruptive future direction so i
think i i would understand understand
that one
and there are probably other
basically the question i think is if
it's really legitimately related to
national defense
uh and security i think it's
understandable that cooperation be
limited everything else i would hope for
more cooperation
aren't we seeing
both governments
if i agree with you i'm a hundred
percent in agreement with you we should
find ways to cooperate but aren't we
seeing both governments actually move in
the opposite direction more restrictions
on data
blocking of
chinese acquisitions of companies in the
united states that have access or
are a bank for health care data for
individuals data
dating apps um
you know obviously the the
you know didi
you know having certain data and that
actually the walls
rather than getting lower
are getting higher what i've always
argued for is defining national security
narrowly and building those walls very
high but for the other things don't have
walls at all
yeah i agree with you
uh i think if we trace back on how this
began
i think it was under president trump
that went after a number of these
aspects
i don't want i don't i'm not an expert
on which of the policies are legitimate
which are questionable but i think
uh china's i think the chinese
government's preference would have been
to continue globalism china clearly has
been a beneficiary and a contributor
but
i think seeing some of these
companies that have been put on entities
list with export controls cfius and the
frequency and the degree of the
application of these um
measures are making china feel that it
needs to be self-sufficient in
technologies otherwise
every company could follow the path of
huawei of being
limited in its access to necessary uh
infrastructural technologies so yes in
recently china has been
extending its regulation too it is kind
of a
symmetrical escalation which is
unfortunate and i hope there will be
some de-escalation otherwise this will
probably get worse not better
yeah
the national committee runs what are
called track two dialogues and one of
them is actually on the digital economy
and our hope is to be able to propose to
both governments some rules of the road
where we don't have this continuing
expansion of restrictions because
ultimately these expansions of
restrictions take the dream of ai and
the good things that it can do and it
impedes the realization of that dream
you know i know the book's not about u.s
china relations but the chapter it was
interesting the chapter on um a quantum
genocide which was you know riveting i
mean i i was late for an appointment
because i was reading
it was the fiction part not the
analytical part
but
it's about it's about rogue actors
and
is it fair to say the greatest threat
of the misuse of ai is actually not from
state actors but from rogue non-state
actors
uh
i it's hard to say which is
larger but uh yeah i would i would tend
to agree
uh because that's the difference with
let's say nuclear weapons
while that's incredibly
dangerous it is only a few countries
that have it and they can hopefully
negotiate treaties and regulations and
and control themselves and
due to deterrence and
some degree of trust etc
because states i think generally
speaking are much more trustworthy than
than non-state actors so the big danger
for autonomous weapons is that the cost
of building one can be very low like a
thousand dollars
equipping a
drone with facial recognition and gps
and uh and a little bit of dynamite then
it becomes an assassination machine that
flies at a very fast speed very small very
difficult to catch and shoot someone
point blank
and and the other danger is that the
terrorists do not have to sacrifice
their lives unlike the uh the the
suicide bombers who do so this lowers
the barrier the cost is lower and also
one could a terrorist group can send a
swarm of these
so i think that lowers the cost of
terrorism
and increases their
uh
lethality so i i do think it is much
more dangerous and in fact i'm
i think i think any day now we're going
to see some such activities and
people are in
countries actually generally not taking
this seriously enough and it's going to
take another autonomous weapon terrorist
group 9/11-like event that i think will
wake everyone up
yeah
how realistic by the way is the
quantum genocide
kind of
fiction part you know where this is
effectively a mad scientist you know
whose life has been ruined
you know kind of takes over
uh i i think it's higher than ever
before
because you know the character was built on
the unabomber but the unabomber was
deranged and smart but not a
deep scientist so nowadays many more
people have access to these ai
technologies and the drone technologies
and and can program them
uh so i think that is more realistic
than ever more dangerous than ever the
part about that that
character uh becoming the first person
to invent the quantum computer and uses
the quantum computer to do bad things
that's much more speculative
obviously it's really the national
laboratories and the ibms and googles
that really are likely to make the big
break in quantum
and and and those large companies and
large national laboratories are not very
likely to have such a deranged person at
the top
yeah
you know um
i think it was trump used to joke he
said we don't know if this is a state or
a fat guy sitting in the in the basement
who's hacking into stuff and trying to
do all this so the question you know
do you need states to be behind this or
is it possible for kind of
literally the mad scientist to you know
use ai to accomplish very nefarious
objectives
i i think the mad scientist can do it
and it depends on what bad things are being
done
if it's really to build a small number
of killer drones or slaughter bots i
think even uh even an advanced hobbyist
could build that you don't really need a
deep scientist
so
so that's why i think the danger is
becoming greater and greater because the
barrier to building ai
is lower and lower more and more people
every year
can program ai and do good things and
also do bad things
yeah
you know it was interesting as i was
reading the book um you know there were
two
major incidents involving ai
one was
the alleged assassination by israel
of the leading nuclear scientist the
person trying to create a nuclear weapon
in iran
where apparently a
machine gun was placed which was
operated fully by ai
and the other was the sad
action by the united states to have a
drone strike on a car
where the intelligence apparently
was faulty
with all of those are we very close to
kind of what you know you say 2041 but
this is 2021
sounds like you're getting pretty close
i think we're pretty close
there's also i think the attempted
assassination of the venezuelan
president and also the alleged strike
on the saudi oil fields by
iran
and all of those
had drones playing a role there are two
types of drones
in most of these cases
very sturdy
military-grade drones were used and
those are very expensive and still
out of reach by terrorists because
they're not acquirable commercially but
i think the venezuelan president
assassination i think that was using
more of a standard hobbyist drone i i'm
not certain but
but those can be equally lethal today so
i i do think in the next
three years we will see
these killer drones uh do something
terrible and
and then we'll wake up and start reading
all these papers that various people
wrote from my book the
autonomous weapons section was
excerpted and published in the
atlantic
and there are other people who have
written
several thousand ai
scientists uh along with the late
stephen hawking elon musk have written a
plea that
that government should look at the
regulation or perhaps banning of
autonomous weapons but it's all fallen
on deaf ears and
and i'm afraid it's going to take a a
terrible uh um
atrocity that will uh wake people up
how is the widespread use of 5g going
to affect this
oh
certainly communications is
an important element for any
misuse of ai technologies
um
right uh yeah of course positive and
negative uh that the drones wouldn't be
able to operate
if they couldn't use the gps element for
example and the 5g
but that's already i think a um
a reality it is the way it is yeah so
and i think going on to 6g
it will even enable other types of
things
um
i saw and when i was
watching a demonstration of 5g
um
i was in shanghai and they they showed
um
a
mechanical arm mining rare earth
i think in guizhou or something
where
years ago
um
miners would have died
now it's just a machine and
obviously you know it's a
thousand miles 1500 kilometers away and they
could sit there and the 5g was so
perfect
that they could mine and
operate accordingly i mean
and that's obviously all ai and a
combination of ai and 5g it was it was
one of the most
you know it really brought home to me
the lives that could be saved
by ai
uh yeah absolutely in dangerous
situations in mines or
accidents or fires
robotic technologies can be life-saving
and the way robotic technologies are
likely to develop is first in
extreme conditions where people are
willing to pay a very high price such as
these
and then moving into the factories and
then within factories there will be
smart forklifts smart
autonomous vehicles smart arms that can
grasp any object and then that will be
refined by use at the high price
paid by the manufacturing companies to
basically
replace routine work by people
and then the technology will become
cheaper and adopted in
commercial applications
like restaurants and malls and then it
will come to our homes and become great
household helpers so so that's something
we can look at a 20-year horizon and and
see pretty much all of the routine human
labor will be doable by um by robots and
i think that will be one of the big
advances and it will free up a lot of
our time so the the positive benefits
are are definitely much larger
yeah
now in the holy driver
um you know you talk about you know
autonomous vehicles and you know the use
of ai and i mean i don't want to
ruin the ending you know how lives
could be saved this way but you also
then in the analytical part you talk
about
uh are people
will the regulatory apparatus be willing
to deal with
the general
savings of life so you will have the
data that will show we save lives but there
will be an instance or two or five or
ten where someone dies as a result how
does that get resolved
and is that something where china is
able to look
at the broader data
whereas
democracies
can't
it's possible um yeah the specific
issue that steve you were talking about
is ai gets better with data so
if you allow an autonomous vehicle to
launch
and it will
certainly make mistakes but maybe we
don't allow it to launch unless it drives
roughly as well as people maybe a little
better i think that is feasible then it
will gather more data
and then in another month a new software
will be sent to all the vehicles and
then it will drive better much better
than people and and then in a year five
years ten years uh it will become so
much better
at driving than people because it's seen
you know billions of miles and no human
has ever seen that
uh and it's honed to a perfect
driving capability also the autonomous
vehicles can talk to each other and just
miss each other by an inch and humans
cannot have that precision
and also
an autonomous vehicle that might be
having trouble like a blown tire can
broadcast to nearby cars to stay away
from me and humans can't respond in that
kind of a split second uh
accident or issue
so it's very clear that let's
say 10 years from launch
autonomous vehicles ought to save
90 percent
of the lives lost on the road today this
is this is a scientifically estimated
projection by mckinsey so the question
is what if
we launch it
and yet many people die because
of it uh not more people than with human
drivers are we willing to say we'll
launch the product when it's as good as
human
uh
and the autonomous vehicle will make
mistakes uh there will be people who get
hit and who die uh not worse than people
but different people
and then over
time it gets better
is that the price we're willing to pay
so i think different people and
different governments will feel
differently
and and we'll see how that plays out but
uh i would bet many countries perhaps
including china would feel at a ten-year
horizon that's a good thing and at no
given moment in time is it worse than
people then it's something we could look
into one could also extrapolate on
robotic surgeries on doctors who
diagnose patients
similar issues with human lives will be
involved
so so i think we should go in with our
eyes open and have the intellectual
debate now
ai
is going to make fewer mistakes
an ai radiologist is going to make fewer
mistakes than the best radiologist in
the world
because
the best radiologist maybe he's seen a
hundred thousand but the ai has seen
a hundred million
right so they will simply make fewer
mistakes so and so no one should suffer
from that
transition
to
ai radiology
i i don't know i don't know because the
problem is when ai makes the mistakes
many of those mistakes look silly to
people and actually look ridiculous
and and it could be viewed as uh
irresponsible how could you launch a
product like that that is still pretty
immature for example when tesla
for the first tesla accident that killed
the driver using autopilot the tesla saw
a giant white truck
and read the reflections as sky and it drove
right into the truck i would say
probably no human driver would ever make
that mistake and people are shocked and
angry that how could tesla launch such a
ridiculous product but if you look at
the track record of autopilot it's
actually driven
safer than people in terms of total
fatalities but when it makes a mistake
it's a ridiculous unforgivable mistake
that i think is the dilemma we will be
facing wow that is
fascinating um
you've talked about kind of the the and
the book talks about this in various
places that you know and especially in
the chapter on plenitude so ai is able
to kind of reduce the amount of time we
need to do things things are able to be
produced less expensively so they're
more broadly distributed money becomes
less
less important
two questions on that one is china is
confronting
um a demographic challenge which we talk
about a lot of the at the national
committee its workforce has already
peaked it's reducing its population is
on the verge of peaking is ai going to
solve that problem for china
uh the problem of population not growing
or the population
yeah population not growing you know
generally population not growing would
lead to reductions in gdp that
population growth is generally one of
the ways that gdp will grow
right right
i'm probably somewhat contrarian on this
view so i'll answer it but i would say
many people would disagree with me
i feel with
ai and robotics taking over so much
routine work
countries that grow too much in
population may not see
the historical
correlation with
gdp growth anymore
and in in that case
countries like india may be facing more
of a challenge than countries like china
in terms of the population growth uh
there are obviously a lot of smart
people who disagree on that so we'll
have to see how it plays out in my
opinion
if we believe ai over the next 20 years
will displace 50 percent of
human work which is routine
and that means there will be a large job
rotation huge issue with
redistributing income to the people who
lost their jobs and a big problem in
terms of retraining people for skills
that
are not easily replaced by ai and the
individual is capable of being trained
and learning that new skill
i think that is a a set of challenges i
feel would be of a highest priority
we're seeing the very beginnings of that
not enough sign to worry any
governments or politicians yet but i
think it may get worse especially as
we get out of covid
companies may not be hiring people back
and may be using automation
we might see a a jump but but that's
something i'm i believe i believe will
happen one day and uh but we have to see
the data to to uh to validate that
yeah certainly in the healthcare sector
we're using much more robotics
uh much more you know
telemedicine that we've seen a shift
which i would have thought probably
would have taken 20 years has occurred
in 18 months
and and there's no question it will also
reduce employment and
in your chapter on plenitude you talk
about
the need to retrain
people right you know and government
taking over that responsibility the um
you have this window also in plenitude
you talk about moolah
which is
the new money which you you collect by
doing good deeds
um
so two questions one
does this kind of does this in your mind
stem from china's social credit system
that is beginning to
take hold in china you know that people
if they don't visit their parents they
they lose credit you know if they
jaywalk they lose credit but if they do
good things for society they
gain credit so is that related and then
the second kind of subsidiary question
is
is china's central bank digital currency
some step in that direction making all
currency you know getting rid of paper
money
uh
actually those were not my inspirations
uh the the social credit system and the
central bank system uh but clearly the
reason i decided to put that in the
story it's a very speculative direction
of course i can't
prove that is the direction we must go
it's more motivated by my belief that
work for economic gains
will
become more diminished that is if we
only had human jobs and professions
for work that will have an
economic benefit to the society we won't
have enough jobs for everyone because ai
will have taken over so much of it
uh and and yet if we think about what
humans can do that ai cannot do
uh obviously there's creativity there's
you know your job my job the ceo's job the
m&a expert's job the scientist's job yes there
are those but this is a small percentage
what is the large
number of
existing
lower middle class people going to do
when ai takes over all the routine job
so my thoughts that i began to express
in ai superpowers that went into ai
2041 is that
it is really service jobs that will not
be replaceable by ai because of the human
connection required because as
everything becomes cheaper people want
to pay a premium for services for a
wonderful masseuse for a great concierge
for
for a tour guide and for health care
services and some of the health care
services are not necessarily you know
economic requirements that is in elderly
care elderly companion
someone to take an elderly person to see
a doctor
or foster home volunteer hotline
volunteer
someone who decides to
to homeschool their children these are
all activities worth compensating people
for
worth calling jobs if you will yet they
don't really contribute much
economically to society but it gives
people something meaningful to do
these jobs create positive social
energy it gives people a sense of
satisfaction having helped seeing a
smile from the elderly person they uh
spend time with so there should be
encouragement of this type of
human to human connection type
of services
so it was with this thought in mind that
i thought um
instead of just encouraging people with
a pay they should be encouraged with
some kind of um
digital system that measures their
contributions socially so it was
inspired by this not not the other
factors
the um you know it's interesting i mean
some
you know you say tour guides
i would say tour guides have to some
degree already been replaced by a very
simplified ai so once upon a time you
needed somebody to walk you around a
museum
now you simply put on earphones
and when you get to a particular place
the
museum talks to you with what the tour
guide formerly said
very simple kind of ai
yes yes but i think there's also room
for a storyteller the tour
guides should compete against that by
being a brilliant storyteller
incorporating personal experience fun
anecdotes things that are very personal
and connect human to human
um there are still many of those
tour guides i hope that could
emerge to be uh sufficiently competitive
um
yeah
there's also you know like a chef and a
waiter you know we're investors uh in
china and there are a lot of companies
coming up with robotics chefs and
robotic waiters and waitresses and and
they're very effective very cost
effective i think they will populate uh
to middle or lower end restaurants you
know maybe like equivalent of denny's in
the u.s higher than mcdonald's but not a
not a high-end restaurant but then that
i think accentuates the value when you
go to a a top restaurant a michelin
restaurant or maybe something less
expensive but still a fancy restaurant
people will treasure even more the human
service that's provided so i think a lot
of things will become multi-tiered at
the top will be the human
service providers curators and people
who deliver an amazing experience and on
the bottom will be uh robots taking over
the jobs
I was in, I mean, it was a, whatever the brand is, a chain restaurant, but a mainstream restaurant. You pressed what you wanted and made your order, and then I expected a robot to bring the food out, but a person brought the food out. So their AI, I mean their robotics, was rather imperfect.

You know, this gets to a question someone has asked, which is from Leo at Beijing Language and Culture University.
She thinks AI has its limits. For example, and this is an interesting question: no matter how advanced AI technologies are, professional translators and interpreters are still needed. Are we exaggerating the potential of AI by thinking that those jobs will be eliminated?
Well, it goes back to the concierge, and to the chef and the waiter; the same will happen with translators. At the very high end, if you translate for the president of a country or the president of a large Fortune 500 company, that is unlikely to be taken over by AI in the next 20 years, because mistakes are extremely costly and there's a lot of subtlety. But business translation is rapidly being taken over by semi-autonomous methods. I'll describe what is happening today. We're investors in two companies, one called Transient and the other called Lane Boat, and the two of them are working together. One does domain-specific, high-quality text translation; the other is building a tool for translators. The tool still has people using it, and the people have the final say on what the translation looks like, but the AI does the first pass. We're seeing the AI improve very rapidly, by 12 percent just in the last year. That means 12 percent more of the translations don't have to be touched by a human anymore.
And we're seeing the overall productivity of the translator pool go up significantly, and costs come down significantly, because AI is doing more and more. Of course the translators feel very empowered, because AI is doing all the routine, basic translation and the translator gets to tweak a little here and a little there. But what they don't see is that the reliance on their human capacity is coming down over time. So we're actually in a very strange time in history right now. I know the person who asked the question is probably looking at the boom in the human translator space. I do believe that in the past year more people are getting paid for translator jobs, as a result of more people using machine translation, not being happy with the result, and hiring a human to fix it. But this boom is a transient thing, and as technology gets better with more data, the reliance on and requirement for humans will come down.
It's very similar to bank tellers and ATMs. When ATMs first came out, they drew people to the bank, and banks had to hire more tellers. But eventually the ATM became more and more powerful, and tellers had to move to other jobs. So I would be quite confident that on a 10-to-20-year horizon, the number of professional translators will come down significantly, even dramatically, and the ones who remain are going to be the ones who are so good at it: the instantaneous voice translators, or the extremely high-quality, no-mistake-tolerated kind of jobs.
Why is your former employer's translation function so mediocre? When I use it I'm just shocked, because they should have more data than they can possibly analyze to make their translations more accurate. What's going on?
I don't think they deploy the state of the art. The state of the art requires a lot more compute power, and given the number of users, and the fact that they make no money from the product, they deploy an older version. But even then, if you look at the quality of the product five or ten years ago, there have been big advances. And just in the last two years there's been a huge advancement, almost a breakthrough, called self-supervised learning. You probably know it by GPT-3, or Transformer, or BERT. These are technologies coming out of Google, Microsoft, and OpenAI that allow essentially trillions of data points to be used for training a super-smart natural language engine, on top of which you can build machine translation, targeted for specific industries like electronics or finance. And we are seeing big jumps in performance.
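The self-supervised idea Lee refers to (the training objective behind BERT-style models) can be sketched in miniature: hide a word and learn to predict it from surrounding context. The toy corpus and co-occurrence table below are my own illustrative sketch, standing in for the transformer networks and web-scale data real systems use:

```python
from collections import Counter, defaultdict

# Toy illustration of the self-supervised objective: mask a word and
# predict it from its neighbors. A co-occurrence table plays the role
# that a large neural network plays in real systems.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat sat on a mat",
]

# "Training": count which word appears between each (left, right) pair.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def predict_masked(left, right):
    """Predict the most likely word between `left` and `right`."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

# "sat [MASK] the" -> the filler seen most often in that slot.
print(predict_masked("sat", "the"))  # -> "on"
```

No labels are needed: the supervision signal comes from the text itself, which is why such models can consume essentially unlimited data.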
I would be very comfortable predicting that in five years we will have speech recognition dramatically better than it is today, even though it's pretty good already. We will have machine translation, both text-to-text and simultaneous voice-to-voice, so that you can go to a foreign country with your headset and have a decent conversation with someone, with some mistakes, but a decent, fluid conversation. It will jump in the next five years.

That will be, I mean, certainly voice recognition is already, you know, 99.99 percent. When I give speeches in China, I see on the side that they have a transcription going, in case people don't understand my Chinese or my English. It's pretty good.

It's pretty good. It's very distracting for the speaker.
Um, yes.

I mean, are jobs, and I ask partly because I come from Wall Street and started out doing credit analysis, is that kind of job just going to disappear? Is it all going to be done by AI, because AI is going to be better at judging the borrower, with all the data it has and all the data it's able to sweep in through Alibaba and Tencent, and in the future through digital currency? Are those jobs going to basically disappear?
Uh, yes. I think all routine jobs are going to be gone, and some jobs that you would not think are routine are going to be gone too. For example, a radiologist's job: one would not think that's routine. A translator's job: one would not think that's routine. But it's all about data, huge amounts of data, fed through a mathematical, quantitative algorithm, and the improvements are just very dramatic. The way jobs will be displaced is that first AI will come out as an assistant: assistant radiologist, assistant translator, assistant doctor for diagnosis. Then it will become quite good, making more decisions autonomously. Then one day the professional will feel, wow, the AI is better than me, I don't dare override its decision anymore, and then it's going to flip and take over more of the job. I think people really have to be prepared for that; we are seeing the writing on the wall. As you were describing speech recognition: I worked on speech recognition in the '80s, and it barely worked back then. If you draw a curve of improvement, it rises steeply, and especially in the last 10 years there's been a big jump due to deep learning and its descendants. So we really should be prepared for domains in which AI emerges as an assistant and then evolves into the main worker.
The chapter that you call "The Golden Elephant" highlights potential inequalities in the use of AI. It's so smart, so interesting, because it talks about insurance, and how, given that you've consented to be followed and have everything you do monitored, if you do one thing your insurance premium will jump up or drop down, which I think was just wonderfully interesting. I was going to ask my friends who run insurance companies whether they're moving in that direction. But my question is: how do we deal with the potential systemic inequality in AI?
Yeah, today I think inequality in AI is quite a serious matter, and people have to work on it. Fortunately, I think we can make a big improvement in the short term. We've probably all read that a large American company trained its HR AI using a lot more men than women, and it ended up being very biased against letting women pass the screen. That kind of error, the imbalance of the data you expose an AI system to, so much of one gender or race and so little of others, will cause AI systems to be biased. And those problems can be caught by automatic tools that alert the AI programmer, saying you should not launch this because it will have this kind of impact. So I think we can catch 80 to 90 percent of the most obvious problems, because they're pretty obvious mistakes. We should also train AI engineers to be conscientious: they're not just trying to build a tool and make money; they're going to impact people's lives. So I think we can keep that under control, but there are a lot of subtleties that are very hard.
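As a rough illustration of the automatic tools described above, here is a minimal demographic-parity check. The decisions, group labels, and the 0.2 alert threshold are all hypothetical, a sketch of the idea rather than any production system:

```python
# Minimal demographic-parity check: compare a model's approval rate
# across groups and flag large gaps before launch.

def approval_rate(decisions, groups, group):
    """Fraction of applicants in `group` that the model approved (1)."""
    hits = [d for d, g in zip(decisions, groups) if g == group]
    return sum(hits) / len(hits)

def parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical screening decisions (1 = pass) for men (M) and women (F).
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

gap = parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}")      # 0.80 for M vs 0.20 for F -> 0.60
if gap > 0.2:                        # illustrative alert threshold
    print("ALERT: approval rates differ sharply by group; do not launch")
```

Real toolkits add many more metrics (equalized odds, calibration, and so on), but the principle is the same: measure outcomes per group and alert the developer before deployment.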
"The Golden Elephant" was deliberately written that way. There you have a well-intended, benevolent insurance company meaning to help people reduce their insurance premiums, which ought to be correlated with them not getting sick as often. It seems like everybody wins, but yet still terrible things happen. So it is pointing out that there are extreme cases, and a lot more research needs to be done. We can capture the obvious cases and make it work much better, but the extreme cases require a lot more work. I would close on this question by saying that if we do a really good job on the big mistakes, I think we will reach a point where AI is already less biased than people. We don't recognize how biased we are.
Think about a loan officer at a bank. If you ask them why they turned down a person's loan, they'll usually give you a legitimate reason: insufficient income, too new at the job, or something. But buried in that person's subconscious is a lot of bias and prejudice, things like "the person doesn't look trustworthy," or "I don't trust men," or women, or whatever. That kind of thing does come through. And with AI, by honing the right data set and eliminating biases in the data as much as we can, we can do such a good job that AI can and will do better than people. Another example with people: there was a study in Israel showing that judges gave harsher sentences just before lunch, just because they were hungry. So it's not even caused by prejudice; it's just "I'm hungry, I'm going to be mean." So I think AI can and will do better. And this doesn't mean we shouldn't work on it; we should work very hard to make AI as fair and unbiased as possible. But we should not look at it and say it's so much worse than people, because we're all people. If we ask ourselves whether we're really unbiased, I think we're actually quite poor at it, and AI should be a blessing if we do a good job.
Of course, the data will tell us whether there's a correlation between potential bias and outcomes, and then hopefully the person creating the programs can get the bias out of the AI.
We can get a lot of it out. You can actually go further: if, let's say, racial bias is your biggest concern, then just remove race from the data. Then the model won't be pivoting on that column, saying, okay, I'll treat the Chinese worse and treat the Filipino better, or something like that. But I would also caution that even if you take that one out, there are probably other ways to infer race: the surname, or the place they live. If they live in Chinatown, they're probably Chinese. So you probably have to remove a fair amount of data to remove most of the inferable racial elements. But if that's really important to the AI you want to build, then remove all of that.
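The proxy problem Lee describes, where dropping the protected column still leaves other columns that reveal it, can be sketched as follows. The records, column names, and scoring rule are illustrative assumptions, not from the source:

```python
from collections import Counter, defaultdict

# Illustrative applicant records: "race" will be dropped from the model
# inputs, but "neighborhood" may still act as a proxy for it.
records = [
    {"race": "A", "neighborhood": "chinatown", "income": 40},
    {"race": "A", "neighborhood": "chinatown", "income": 55},
    {"race": "B", "neighborhood": "riverside", "income": 42},
    {"race": "B", "neighborhood": "riverside", "income": 58},
    {"race": "A", "neighborhood": "riverside", "income": 61},
]

def proxy_strength(records, feature, protected="race"):
    """How accurately the feature's value alone predicts the protected
    attribute, via a majority vote per feature value. 1.0 = perfect proxy."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature]][r[protected]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

# Even with "race" removed from the inputs, "neighborhood" recovers it
# 80% of the time here, so it may also need removing or coarsening.
print(proxy_strength(records, "neighborhood"))  # -> 0.8
```

Auditing every remaining feature this way shows which columns leak the protected attribute and therefore, as Lee suggests, how much data you may have to strip out.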
I want to make sure I get to some audience questions before we close. Morgan Pierce from CSIS asks: experts are predicting that AI will displace many blue-collar workers, as you said. What is the Chinese government doing to provide a safety net for those who are going to be impacted?
Yeah, we are not seeing much action by any government right now on this AI displacement issue, because I think most of the displacements are absorbed in the employment process; that is, people lose their jobs and then go on and find something else. So we haven't reached a point where large numbers of displacements are forcing the government to step in. On the US side, I think COVID has made unemployment numbers not so dependable. Despite efforts, you know, Andrew Yang speaks up about the need for universal basic income due to AI displacement, but one percent of the US listens to him. So he's getting some voice, but the government hasn't really seen it necessary to come up with any policies, I would say. China is not looking at a terrible unemployment number. And I would say, generally speaking, when governments are not seeing bad unemployment numbers, they're not likely to deal with this proactively.
Humo, from his law firm, asks: since China is the leading country in the development and implementation of AI, what are the implications for the US and Europe, in light of the open competition and potential conflict with China in the years ahead?
Well, commercial AI is not really a conflict between countries. Tencent, Alibaba, Google, and Amazon can all be successful; they'll have different products, different geographies. So on the commercial aspect I would anticipate American companies being really successful, probably more so in the enterprise software space, like C3.ai and Palantir and the like. Chinese companies will probably see more robots and automation, because of the manufacturing prowess. And I think both countries will do well. Europe, I think, will not be able to emerge as a giant in AI, partly because the EU wants to limit AI out of its concerns about personal data and about internet companies having too much power, and also because the EU is not really a one-language, one-culture entity. So an AI company will have a harder time penetrating all of the EU, whereas Chinese or American companies don't have that problem: each has a cohesive, single-language, single-culture, large market. So the US and China will continue to be AI superpowers, as I predicted in my last book.
And that's why Europe developed the GDPR, which is replete with problems and makes the development of AI, and the collection of data that would enhance AI, extremely difficult.

Yeah, I think GDPR is very well intentioned. It tries to protect things that we should try to protect, but it does so in ways that I think will impede the growth of AI. To some extent it will influence other countries, including the US and China, to use GDPR as a reference and develop their own laws. But I think Europe will enforce it with the strictest requirements for compliance, thereby making it more difficult to start an AI company in Europe compared with the US or China.
Yeah, there's no question that the amount of venture capital in Europe and the number of startups in Europe are a tiny, tiny fraction of those in the United States or China.

Right.

And I guess that's how it's going to be; the Europeans are willing to live with that result. It's interesting.

Yes. I spoke once to a European regulator, and I said, all these policies will cause AI to slow down in Europe. And his answer was: Dr. Lee, that's not a side effect; that is what we intend. So that is a very different mentality from the American or the Chinese one.
And, before we let you get to your paying job in a couple of minutes: because you've spent your life in China and the United States, are the views of individuals just different with respect to data and privacy? That is, a Chinese person is willing to give up their data in exchange for potentially more personal safety or more advances in science, Americans are less willing, and Europeans are the least willing. Is that a fair characterization?
It is a reasonable high-level characterization, but the answer is much more nuanced. I think it's a universal value that everybody wants to keep their personal data as private as possible, but when two priorities need to be weighed against each other, I think Chinese, Europeans, and Americans may prioritize them differently. An example is COVID. Because of COVID, everyone in China has an extremely accurate contact-tracing app that knows exactly where I've been and whether I've been in contact with anyone who may have contracted coronavirus. And there's also the use of cameras and facial recognition along with temperature checks: when I go into a building, at least the building I work in, it knows who I am and what my temperature is. So if I have a fever, I will be invited, I will be asked, to go to a hospital, no matter where I go. That's the Chinese way. I think most Americans, and certainly Europeans, would find that unpleasant and maybe unacceptable.
But today, if you surveyed Chinese people, saying, given that this is how China controls coronavirus, these are the things you give up, and these are the things you gain, safety, lower deaths, and so on, would you rather go with the Chinese approach or a European or American approach, I would say almost 100 percent of Chinese people would say it's a good trade-off, that our government and our companies do what they do. And conversely, I also think most Americans and Europeans would prefer to keep the systems they have rather than adopt the Chinese system. So while everyone wants their personal data kept private, the answer is nuanced, and this is a case in point.
Wow. Fascinating. Kai-Fu, thank you so much. To our listeners and viewers: this book will give you hours of pleasure and lots of education. Thank you so much for being a great friend of the National Committee, and thank you for writing another great book.