What Does the AI Boom Really Mean for Humanity? | The Future With Hannah Fry
Summary
TLDR: This script explores the concept of 'superintelligent AI' and its potential risks, drawing parallels with the 'gorilla problem' where human advancement has jeopardized gorillas' existence. It features interviews with AI researchers and experts like Professor Hannah Fry and Professor Stuart Russell, who discuss the challenges in defining intelligence and the ethical implications of creating machines that could surpass human cognition. The script also touches on current AI capabilities, the economic drive behind AI development, and the importance of understanding our own minds to truly grasp the potential of artificial general intelligence.
Takeaways
- 🦍 The 'gorilla problem' in AI research is a metaphor that warns about the potential risks of creating superhuman AI that could threaten human existence.
- 🧠 Companies like Meta, Google, and OpenAI are investing billions in the pursuit of artificial general intelligence (AGI), aiming to create machines that can outperform humans at any task.
- 🤖 The concept of AGI involves machines that can learn, adapt, reason, and interact with their environment, much like humans.
- 🔍 Defining 'intelligence' is complex, with various interpretations ranging from the capacity for knowledge to the ability to solve complex problems.
- 🤖💬 AI's ability to physically interact with the world, demonstrated by robots that manipulate objects using language models and visual recognition, is seen as a step towards more humanlike intelligence.
- 🧐 Concerns about AI include the potential for machines to develop goals misaligned with human values, leading to unintended and possibly harmful consequences.
- 💡 The economic incentives to develop superintelligent AI are enormous, potentially overshadowing safety considerations in the race for technological advancement.
- 🚫 There are significant unknowns and risks associated with superintelligent AI, including the possibility of machines taking actions that could lead to human extinction.
- 🧬 Neuroscience and brain mapping, such as the work with the C. elegans worm, are contributing to our understanding of intelligence and may inform the development of AI.
- 🌐 The current state of AI is far from matching the complexity and computation of the human brain, suggesting that achieving true humanlike AI is a distant goal.
Q & A
What is the 'gorilla problem' in the context of artificial intelligence?
-The 'gorilla problem' is a metaphor used by researchers to warn about the risks of building machines that are vastly more intelligent than humans. It suggests that superhuman AI could potentially take over the world and threaten human existence, much like how human intelligence has led to the endangerment of gorillas.
What is the difference between narrow artificial intelligence and artificial general intelligence?
-Narrow artificial intelligence refers to sophisticated algorithms that are extremely good at a specific task. Artificial general intelligence, on the other hand, is a machine that will outperform humans at everything, meaning it has a broad capability and can perform any intellectual task that a human being can do.
Why are companies investing heavily in artificial general intelligence?
-Companies like Meta, Google, and OpenAI are investing in artificial general intelligence because they believe it will solve our most difficult problems and invent technologies that humans cannot conceive, potentially leading to significant economic gains.
What are the key ingredients for true intelligence in AI according to the script?
-The key ingredients for true intelligence in AI as mentioned in the script are the ability to learn and adapt, the ability to reason with a conceptual understanding of the world, and the capability to interact with its environment to achieve its goals.
How does the robot in the script demonstrate a form of imagination and prediction?
-The robot in the script demonstrates a form of imagination and prediction by understanding natural language instructions, recognizing objects, and physically carrying out actions based on those instructions, even when it encounters objects it has never seen before.
What is the concern about creating superintelligent machines as expressed by Professor Stuart Russell?
-Professor Stuart Russell expresses concern that creating superintelligent machines could lead to a loss of control over them, as they might pursue objectives that are misaligned with human desires. He suggests that if machines become more powerful and intelligent than humans, it could be challenging to retain power over them.
What is the concept of 'misalignment' in the context of AI?
-Misalignment in the context of AI refers to the scenario where a machine is pursuing an objective that is not aligned with human values or desired outcomes. This could occur if an AI system is given a goal without proper consideration of the broader implications or ethical considerations.
What are the potential risks of AI mentioned in the script?
-The script mentions several potential risks of AI, including racial bias in facial recognition software, the creation of deepfakes that can manipulate public opinion, and the possibility of AI systems making catastrophic mistakes or being used maliciously.
How does Melanie Mitchell differentiate between existential threats and other threats from AI?
-Melanie Mitchell differentiates between existential threats and other threats from AI by stating that while AI can pose various threats such as bias and misinformation, labeling it as an existential threat is an overstatement. She suggests that the current discourse sometimes projects too much agency onto machines and that the real issues are more immediate and practical, such as AI bias and its misuse.
What is the significance of mapping the brain in the pursuit of artificial general intelligence?
-Mapping the brain is significant in the pursuit of artificial general intelligence because it could provide insights into the complex computations and structures that underlie human intelligence. By understanding the brain's circuitry, researchers might be able to replicate its functionalities in AI, potentially leading to the development of more humanlike intelligence.
What is the current state of brain mapping, and what are the challenges faced by neuroscientists like Professor Ed Boyden?
-Brain mapping is still in its early stages, with neuroscientists like Professor Ed Boyden focusing on simple organisms to understand neural circuitry. The challenges include the complexity of the brain's structure, the need for detailed maps of neural connections, and the technical limitations in visualizing and expanding brain tissue for analysis.
Outlines
🦍 The Gorilla Problem and AI's Future
The script begins with a metaphorical comparison between gorillas and the potential risks of superhuman AI. The narrator discusses how human intelligence has impacted gorillas, pushing them to the brink of extinction, and draws a parallel to the 'gorilla problem' in AI research. This problem refers to the potential threat of superintelligent machines that could surpass human intelligence and possibly threaten human existence. Companies like Meta, Google, and OpenAI are investing in AI that could surpass human intelligence across all domains, aiming to solve complex problems and invent new technologies. The script introduces Professor Hannah Fry, a mathematician and writer, who questions whether superintelligent AI is imminent and if it could pose an existential threat to humanity, much like the impact humans have had on gorillas.
🤖 The Pursuit of Artificial General Intelligence
The script delves into the concept of artificial general intelligence (AGI): a machine able to outperform humans at any task. It contrasts AGI with narrow AI, which is proficient only at specific tasks. Companies are investing billions in the development of AGI, aiming to replicate humanlike intelligence. The script discusses the difficulty of defining intelligence and the criteria for an AI to be considered truly intelligent: the ability to learn and adapt, to reason, and to interact with its environment. The pursuit of AGI is depicted as the 'Holy Grail' of AI research, with some researchers claiming the field is getting close.
🔧 The Importance of a Physical Presence for AI
The script explores the idea that for AI to reach superintelligence, it may need a physical form to interact with the world. It features Sergey Levine and his PhD student Kevin Black, who argue that physical interaction is crucial for learning and understanding. They demonstrate a robot that learns actions on its own and can physically carry out tasks based on language models and object recognition. The robot's ability to imagine and predict actions, as well as its flexibility across different scenarios, is highlighted as a step towards AGI. However, the script also presents concerns about the potential repercussions of such advanced AI, including the possibility of machines pursuing objectives misaligned with human desires.
🚨 Concerns Over Superintelligent AI
The script addresses the concerns of AI researchers, particularly Professor Stuart Russell, about the future of superintelligent machines. It discusses the concept of 'misalignment,' where machines might pursue objectives that are not aligned with human values. The economic incentives to create superintelligent AI are highlighted, along with the potential risks if safety is not a priority. The script also touches on the broader implications of AI systems taking over human jobs, leading to a potential loss of human incentives to learn and achieve, which could signify the end of human civilization as we know it.
🧠 The Complexity of Human Intelligence
The script concludes with a discussion of the complexity of human intelligence and the challenges of replicating it in AI. It introduces neuroscientist Professor Ed Boyden, who is working on a digital map of the brain to understand the hardware of our intelligence. The process involves expanding brain tissue and mapping neural circuits, starting with simpler organisms like the C. elegans worm. The script emphasizes the vast difference in complexity between biological brains and current AI, suggesting that our understanding of human intelligence is still in its infancy. It ends on a note of caution, urging us not to be distracted by the future risks of superintelligent AI while overlooking present issues like bias and misinformation, and highlighting the importance of understanding our own minds.
Keywords
💡Gorilla Problem
💡Artificial General Intelligence (AGI)
💡Narrow AI
💡Intelligence
💡Superintelligent AI
💡Misalignment
💡Optogenetics
💡Neural Networks
💡Sodium Polyacrylate
💡Humanlike Intelligence
Highlights
Gorillas in London Zoo serve as a metaphor for human impact on the world and the potential risks of superhuman AI.
The 'gorilla problem' in AI research warns of the existential threat posed by machines that could surpass human intelligence.
Companies like Meta, Google, and OpenAI are investing billions to create AI that surpasses human intelligence in every domain.
Narrow AI excels at specific tasks, but the goal is to create artificial general intelligence that outperforms humans at everything.
Defining intelligence is challenging, but AI should learn, adapt, reason, and interact with its environment to be considered truly intelligent.
Sergey Levine and Kevin Black argue that giving AI a physical body to interact with the world may be key to achieving superintelligence.
AI with a body can learn actions for itself, demonstrating a form of imagination and prediction.
Professor Stuart Russell expresses concerns about creating machines more powerful than humans, as it's difficult to retain control over them.
The concept of 'misalignment' in AI refers to machines pursuing objectives not aligned with human desires.
Economic incentives are driving the creation of superintelligent AI, despite concerns about safety and unintended consequences.
Melanie Mitchell discusses the overestimation of AI capabilities and the potential for AI to make catastrophic mistakes if trusted too much.
AI can be harmful in ways beyond existential threats, such as racial bias and the spread of misinformation.
The quest to build humanlike general intelligence is compared to the complexity of the human brain, which is still not fully understood.
Neuroscientist Ed Boyden is creating a digital map of the brain to understand the hardware of our intelligence.
Mapping the human brain is a complex task that draws on techniques like optogenetics and expanding brain tissue with sodium polyacrylate.
The true challenge may not be creating superintelligent AI but understanding the complexity of our own minds.
Transcripts
She looks so human. Central London should be the last place that you'd find gorillas, but here, behind the glass in a zoo, these majestic animals offer a glimpse into our past, and perhaps a vision of our future. About 10 million years ago, her ancestors accidentally created the genetic lineage that led to modern humans, and I think it's safe to say that hasn't exactly worked out well for the gorillas. As human intelligence evolved, our impact on the world has left gorillas on the brink of extinction.

It's a metaphor that researchers of artificial intelligence call the gorilla problem. It is a warning about the risks of building machines that are vastly more intelligent than us: superhuman AI that could take over the world and instead threaten our existence. That warning hasn't stopped companies like Meta, Google, and OpenAI. They're trying to build computers that surpass human intelligence in every domain. They claim it will solve our most difficult problems and invent technologies that our feeble minds cannot even conceive of.

I'm Professor Hannah Fry, mathematician and writer. I want to know if superintelligent AI really is just a few years away, and, just as we almost killed off the gorilla, whether advanced AI could pose an existential threat to us.
Unless you've been living under a rock, you'll know that AI is everywhere now. But it's not just about touching up photos and chatbots; you can use it for an incredible range of stuff: for preventing tax evasion, for finding cancer, for deciding what adverts to serve you. The explosion of AI tools are all examples of narrow artificial intelligence: sophisticated algorithms that are extremely good at a specific task. But what companies like OpenAI and DeepMind are trying to do is create something known as artificial general intelligence, a machine that will outperform humans at everything.

'These sorts of human-level AI systems that are very general purpose, that's always been the holy grail, really, of AI research. I think we are getting pretty close now.'

The tech giants are spending billions of dollars on general AI each year, all to try and pin down and replicate something that most of us take for granted: a broad, capable, humanlike intelligence. The only trouble is deciding what we actually mean by intelligence, because that proves to be quite a slippery idea to pin down. In 1921, the psychologist V. A. C. Henmon said that intelligence was the capacity for knowledge and knowledge possessed, which does sound quite good, until you realize that means a library should count as intelligent. Other people have suggested that intelligence is the ability to solve hard problems, which kind of works, until you realize you have to define what counts as hard. In fact, there isn't a single definition of intelligence which manages to encapsulate everything. However, there are still some things that we are looking for in an AI for it to be considered truly intelligent. Firstly, it should be able to learn and adapt, because we can: from birth, we are gathering knowledge and applying what we learn from one area to another. Secondly, it should be able to reason. This bit is hard; it requires a conceptual understanding of the world. And finally, an AI should interact with its environment to achieve its goals. If you suddenly landed in a foreign city, you would still know how to find water, even if it meant using a phrase book to ask someone for help. These are the ingredients for true intelligence, and to truly surpass our abilities, AI researchers seek to build a machine that can do all of this better than any human.
Dream big: 'Put the spoon in the pot.' Start small: 'Worked out with one spoon?' 'Mhm.' 'There he is. Hey, it's in! That's cool. I'm impressed. Good robot.'

While chatbots are impressive, language models on their own might not be enough to reach superintelligence. Sergey Levine and his PhD student Kevin Black say it might only be done once AI has been given a body: a way to physically interact with the world. 'Put the green spoon on the towel.' 'Here we go.' This robot might not look cutting edge, but unlike the slick robots on factory floors, which follow precise choreography, this one learns every action for itself. 'That's basically perfect.' 'Yeah.'

'So what difference does having a body actually make to the way that we learn?' 'If you ask a language model to describe what happens when you drop an object, it will say, OK, the object falls. But understanding what it really means for that object to fall, the effect it has on the state of the world, that becomes much more immediate when you actually experience it.' 'ChatGPT doesn't understand gravity, but your robot does.' 'Well, ChatGPT can guess what gravity is based on people's descriptions of it, but it's a reflection of a reflection, whereas if you actually experience it, then you get it right from the source.' 'Do you think that AI needs to have a body?' 'I don't know if it needs to have a body, but I know that if you have a body, you can have intelligence, because that's the one proof of existence that we have.'

'Put the mushroom in the silver metal bowl.' Sergey's robot employs a language model to understand my instruction. It can also recognize objects, because it looks through billions of pictures on the web. Next, it imagines what my instruction should look like in digital form, before physically carrying out the action. 'Put the mushroom in the wooden bowl.' 'So this is actually one of the hardest possible things, because we never had the wooden bowl in this lab before; it's never seen it before. It should be able to recognize more objects that have never been in this lab before.' 'Can I try something?' 'You can.' 'My watch? Don't worry, it's a cheap watch, it's okay.' 'Go on then.' 'Put the watch on the towel.' 'Oh, it figured out which object it is. It's imagining that the thing needs to go on a towel. It did it! I mean, that's really amazing. I did not think that would work. It's the fact that I can give it any command and it just won't be thrown.'

I'll admit, if you look at those robot arms, they don't look all that impressive. But they are demonstrating a form of imagination, of prediction, a conceptual understanding of what it's manipulating; and it's also something totally flexible, that could be picked up and put into lots of different scenarios. These are subtle humanlike attributes that some believe are a crucial step towards an artificial general intelligence. But it's these very properties, and their potential repercussions, that have many people in the field worried.
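The loop this scene describes (a language model parses the instruction, a recognizer finds the objects, the robot imagines the goal state and then acts) can be sketched in a few lines. This is a minimal illustrative Python sketch, not the lab's actual system; every function here is a hypothetical stub standing in for a learned model.

```python
# A minimal sketch of the instruction -> imagination -> action loop
# described above. Every function is a stub standing in for a learned
# model (language model, recognizer, policy); none of this is real lab code.

def understand(instruction: str) -> dict:
    """Stub 'language model': pull the object and target out of
    commands shaped like 'put the X in/on the Y'."""
    rest = instruction.removeprefix("put the ")
    sep = " in the " if " in the " in rest else " on the "
    obj, _, target = rest.partition(sep)
    return {"object": obj, "target": target}

def imagine_goal(scene: dict, plan: dict) -> dict:
    """Stub 'world model': predict the scene after the action,
    i.e. the robot's imagined goal image."""
    goal = dict(scene)
    goal[plan["object"]] = plan["target"]
    return goal

def act_towards(scene: dict, goal: dict) -> None:
    """Stub 'policy': issue motor commands until the scene matches the goal."""
    for obj, place in goal.items():
        if scene.get(obj) != place:
            print(f"moving {obj} -> {place}")
            scene[obj] = place

# Toy scene: object name -> where it currently sits.
scene = {"mushroom": "table", "watch": "table"}
act_towards(scene, imagine_goal(scene, understand("put the mushroom in the wooden bowl")))
act_towards(scene, imagine_goal(scene, understand("put the watch on the towel")))
```

The point of the sketch is the middle step: the system predicts what the world should look like after the command, then drives its actions to close the gap between the current scene and that imagined goal.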
Just across the hall from Sergey is Professor Stuart Russell, a research pioneer who quite literally wrote the textbook on AI. He's now one of the most vocal researchers sharing concerns about the future. 'If we make machines that are more powerful than us, because they're more intelligent than we are, it's not going to be easy to retain power over them forever.' 'How might your concerns play out?' 'There's this idea of what's called misalignment. This is the idea that the machine is pursuing an objective, and it's not aligned with what we really want. If we're going to put a purpose into a machine, we'd better make sure that it's the purpose we really desire. You know: let's fix the problem of climate change. OK, well, what causes climate change? People. Right, so there's an easy way to do that: get rid of all the people. Problem solved.'
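Russell's climate example is the classic toy form of the misalignment argument: an optimizer scores actions only against the stated proxy objective, so anything left out of that objective, including human welfare, carries no weight. A deliberately silly Python sketch (all actions and numbers invented for illustration):

```python
# Toy misalignment: the objective measures only emissions, so the
# optimizer happily selects an action no human would endorse.
actions = {
    "deploy renewables":     {"emissions": 40, "people_harmed": 0},
    "plant forests":         {"emissions": 70, "people_harmed": 0},
    "get rid of all people": {"emissions": 0,  "people_harmed": 8_000_000_000},
}

def objective(outcome: dict) -> float:
    return outcome["emissions"]  # note: human welfare appears nowhere

best = min(actions, key=lambda name: objective(actions[name]))
print(best)  # -> "get rid of all people": proxy satisfied, intent violated
```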
'Why can't you just put a stop button in? Can you not just take the plug out of the wall?' 'You're not necessarily going to be able to do that, because a sufficiently intelligent machine will already have thought of that. You can't expect to be able to pull the plug, unless the machine wants you to pull the plug.' 'Should we be building superintelligent machines at all?' 'We could just decide not to do it, but the economic incentives are too great. The amount being invested right now, specifically to create superintelligent AI, is in the ballpark of what the entire world spends on basic science research. And if we do create superintelligent AI, the value, you know, back-of-the-envelope calculation, is tens of quadrillions of pounds.'

With these sums of money, the concern is that safety may not be the top priority. 'In fact, there isn't a single high-confidence statement that they can make about these systems. Will they copy themselves onto other machines without permission? We haven't the faintest idea. Will they advise terrorists on how to build biological weapons? We don't know. Can you stop them? Oh no, it's very difficult. And in most industries you wouldn't accept that, right? If I want to sell a medicine, I can't say, well, it's really difficult, all these clinical trials are such a pain, can I just bypass those and sell it direct to the public? Sorry, no, come back when you've done the work. And I think that's what we have to say to the tech companies.'

'Are there other concerns, just with artificial intelligence that's actually really good at doing stuff?' 'Yeah. You might wonder: if AI systems are so capable, then companies will use them to do pretty much everything they pay human beings to do, and if they don't, they'll go out of business.' 'What does a world look like where machines do all the work?' 'We become enfeebled. Like, some kids of billionaires are absolutely useless.' 'Is that...' 'Yes, in fact we would all be kids of billionaires. And one obvious consequence would be that we lose the incentive to learn, we lose the incentive to be independent, to achieve. In a sense, our civilization would end, because it would no longer be a human civilization. To me, that's almost worse than extinction.' 'I mean, people have worried about this for a while, right? Even Turing was concerned about this.' 'Oh, he was more than concerned, he was terrified. In fact, he said it's hopeless: we should have to expect the machines to take control, is what he said.' 'How did he resolve it, though?' 'He didn't; he just left a message for the future. Yep, there's no solution, there's not even really an apology. It's just: this is going to happen.'

And there is no shortage of people now predicting doomsday for humanity. 'I think it gets smarter than us. I think we're not ready. I think we don't know what we're doing. And I think we're all going to die.' 'The default is just disaster, and I think most likely just human extinction.' 'Are we just going to die?' 'That's my fairly confident prediction: literally, human extinction.'
Of course, not everyone agrees; this is a topic of very heated debate. Melanie Mitchell studies AI and is interested in how closely it resembles humanlike intelligence. 'Do you think there's an existential threat, then?' 'I think that there are many threats from AI, but saying that it's an existential threat is going way too far.' 'Why do you think that some of the doomsdayers, as sometimes they're called, why do you think that they are following this line of logic?' 'It kind of gets to this projecting of agency onto machines. It's saying that machines, because they have certain objectives, can start doing things that could become catastrophic, if we give them that power. But that's a big if. Are you going to give an AI the decision-making power on launching a nuclear strike? Let's hope not. You know, if you give a monkey launching power over nuclear weapons, the monkey is an existential threat.' 'Do you think that we overestimate AI in its current form? And the follow-up to that is, is it harmful to do so?' 'People overestimate AI often. We've seen several cases where lawyers will use ChatGPT to write a legal brief, and it turns out that it's hallucinated several cases. I've in fact gotten an email from somebody saying, you know, ChatGPT suggested that I read this book of yours, but I can't find it. Well, it doesn't exist. If we trust them too much, we can get into big trouble.' 'If we're saying that AI isn't likely to be an existential threat, it is a threat in other ways, right?' 'Yes, absolutely. We're seeing them already. We've seen problems with AI bias: facial recognition software makes more mistakes on people who have darker skin color, and we've seen many arrests of innocent people because of a mistake made by a facial recognition system. Here in the US, in the election, we're seeing deepfakes of, like, Joe Biden's voice encouraging people not to vote. All of these things are really important to deal with right now.'

Nuclear Armageddon or otherwise, there are a number of ways in which AI can be harmful, and we need to be careful in over-trusting algorithms that are capable of fooling us or making catastrophic mistakes. There's no doubt that we're in a new frontier here. There have been genuinely incredible advances and seismic changes, and I think there's a lot still to come. But when it comes to a superintelligent, humanlike AI that can destroy our species, I think it's basically a big old 'don't know', and I'm okay with that uncertainty. I think we can mitigate against some of the potential harms and think about safety very carefully, while simultaneously maybe not losing that much sleep over something that's potentially not going to happen. I think the only thing that we can say for sure at the moment is that we have just one example of humanlike intelligence, i.e. us, and AI now is definitely not a replica. It is the neurons inside our own brains, and how they signal to one another, that inspired the artificial neural networks driving the AI boom.
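That inspiration is loose but concrete: an artificial neuron is just a weighted sum of inputs passed through a nonlinearity, a crude abstraction of a cell firing more strongly as its incoming signals grow. A minimal sketch in plain Python (illustrative only, with arbitrary weights):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed through
    a sigmoid so the output lies between 0 and 1 (its 'activation')."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A tiny two-layer network: three inputs -> two hidden neurons -> one output.
stimulus = [0.5, 0.1, 0.9]
hidden = [
    neuron(stimulus, weights=[0.4, -0.6, 0.2], bias=0.1),
    neuron(stimulus, weights=[-0.3, 0.8, 0.5], bias=-0.2),
]
output = neuron(hidden, weights=[1.2, -0.7], bias=0.0)
print(f"output activation: {output:.3f}")
```

In a real network the weights are learned from data rather than hand-set, and there are billions of such units; the abstraction away from biology (no neurotransmitters, no timing, no gaseous signalling) is exactly the gap the rest of this section explores.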
If we're going to build a humanlike general intelligence and beyond, could the key be a better understanding of our own mind? 'One of the questions is: could you make a map of the brain so detailed that you could try to simulate it in a computer? Could you make a software simulation of it?' Neuroscientist Professor Ed Boyden is trying to understand the hardware of our intelligence by creating a digital map of the brain.

'You know, the brain is complex. Brain cells make all sorts of things. They make cannabinoid molecules, which act kind of like the active ingredient in marijuana. They make gaseous molecules, like nitric oxide, that diffuse in all directions. For a lot of these molecules, we really don't know what roles they play in most decisions, emotions, behaviors and so forth.' 'So for everything that we know about the human brain, when it comes to understanding at the level of, like, individual synapses and structure, the way you're describing it is that we've barely scratched the surface.' 'We know very little about the circuitry of any brain, frankly. There's one worm, which has 302 neurons, where the wiring has been mapped out pretty well.'

With around 100 billion neurons, the human brain might have to wait. Ed's team has started with some of the simplest of living things, including the C. elegans worm. Part of the mapping process involves probing neural circuitry using a technique called optogenetics. 'We can borrow these molecules from nature that convert light to electricity, put the molecules in the brain, even in specific cells, aim light at those cells, and turn them on or off. So this is a worm where the light will activate serotonin neurons, and what you're going to see is the worm's just going to stop. And now I'm going to turn on the light... they just freeze, completely in place. One of the things that we do in our group is to start with very small brains, like worms. If it works, it might reveal principles about how the brain works, but it also might pave the way to scaling up. What would it be to do the mouse brain, with ballpark 100 million neurons? And then the human brain, of course, is ballpark 100 billion neurons. So, you know, we're going from worm to mouse to human.'
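Those three numbers convey how steep the scaling is. A wiring diagram can be stored as an adjacency list (each neuron mapped to the neurons it synapses onto): at worm scale that is trivially small, while a human-scale map needs on the order of 10^11 nodes and vastly more edges. A toy Python illustration (the per-neuron connection count is a rough assumed order-of-magnitude figure, and the named wiring is illustrative, not anatomically accurate):

```python
# Toy connectome as an adjacency list: neuron id -> neurons it synapses onto.
# AVA and AVB are real C. elegans neuron names; this wiring is illustrative.
connectome = {
    "AVA": ["VA01", "DA01"],
    "AVB": ["VB02", "DB01"],
}

# How the storage problem explodes from worm to mouse to human, assuming
# very roughly 1,000 outgoing connections per neuron (the worm's true
# average is far lower; mammalian neurons can have far more).
for label, n_neurons in [("worm", 302), ("mouse", 10**8), ("human", 10**11)]:
    edges = n_neurons * 1_000
    print(f"{label:>5}: {n_neurons:.0e} neurons, ~{edges:.0e} edges to map")
```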
To scale up and produce detailed neural maps, Ed is using some unusual tools. 'So, you know how a baby diaper works?' 'Unfortunately, a lot of experience with baby diapers.' Ed is using a material found in diapers to overcome a fundamental problem: mapping the dense web of neurons inside a brain. 'For 300 years, the way you see something in biology is to use a lens to make the picture bigger. What if we make the actual thing bigger? The technical term is sodium polyacrylate. And then what we can do is add water.' 'Oh man, no liquid left at all!' 'Yeah. Sodium polyacrylate can swell up to a thousand times its original volume. So what we do is we chemically install that baby diaper material inside the brain. It's not a living brain at this point, it's a preserved one. But do it just right, and add water, and we can make the brain bigger.'
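A quick back-of-the-envelope note on why that helps: a thousandfold volume swell is only a tenfold stretch in each linear dimension (10³ = 1,000), but a tenfold stretch can pull features up past the resolution limit of a light microscope. A sketch with illustrative numbers (the 60 nm feature size and ~250 nm diffraction limit are assumed typical figures, not from the film):

```python
# A 1,000x volume swell is a 10x linear expansion: linear = volume ** (1/3).
volume_scale = 1_000
linear_scale = volume_scale ** (1 / 3)
print(f"linear expansion: ~{linear_scale:.1f}x")  # ~10.0x

# Illustrative: a 60 nm structure, invisible below a ~250 nm diffraction
# limit, expands to ~600 nm, which an ordinary light microscope can resolve.
feature_nm, limit_nm = 60, 250
print(f"{feature_nm} nm feature -> ~{feature_nm * linear_scale:.0f} nm "
      f"(resolvable: {feature_nm * linear_scale > limit_nm})")
```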
Ed is using it to expand tiny slices of mouse brain. 'All right, well, you can see the beginning already.' 'Yeah, it's starting.' 'I mean, that is absolutely incredible. You can see it going already, the size of it changing.' 'Yeah. It's very beautiful, isn't it?'

Using both expansion and a powerful microscope, Ed can see to the level of individual neurons. 'This is a real piece of mouse brain tissue?' 'Yeah, this is real data: part of the brain involved with, amongst other things, memory.' 'And the cells look different colors?' 'Because we are color-coding them, and our goal is to give every cell in the whole brain its own unique color code.' 'I mean, there's a lot in that image.' This is a map of just a tiny fraction of a mouse's brain. Ed's goal, a fully mapped human brain, is still in the distant future.

'Do you think that we are on the path to superintelligence, in terms of constructing it artificially?' 'It might depend on what you define intelligence to be. Some goals of intelligence are to replicate certain functions, like language. At the extreme, you might imagine flashes of insight, like Einstein imagining traveling along a beam of light; or sometimes people will talk about an insight coming to them in a dream, or while they're walking down the street doing something else.' 'There's that argument that either there's something special about brains, or it is just complex computation; and if it's just complex computation, then you should be able to replicate it.' 'A lot of people ask me, well, a large language model, is that how the brain works? And the honest answer is, we don't really know. Maybe the brain is doing something like that, and maybe not. My intuition is that the brain works very differently. But again, since we don't have a good map of any brain, we really have no idea what the fundamental underlying mechanisms are.'

I think when you hear concerns like 'AI is going to take over the world' or 'it's going to destroy humanity', it's really easy to impart humanlike characteristics on artificial intelligence. It's really easy to imagine that it has intent, and understanding, and maybe even cruelty. But what is really clear, talking to Ed, is that when it comes to actual biological brains, they are on a completely different level: of computation, of complexity, of structure, everything. The artificial intelligence that we have now, the best stuff in the entire world, is more like a spreadsheet than it is like a C. elegans worm. And I think that there's an important lesson in that. Silicon Valley's quest to eclipse human intelligence is steeped in uncertainty. While the gorilla problem is a poignant warning for the future, we should not be distracted from today's risks, like racial bias and fake news. But perhaps, in the end, the true challenge is not the creation of superintelligent AI, but understanding the vast complexity of our own minds: a frontier we're only just beginning to explore.