The AI series with Maria Ressa: An introduction | Studio B: Unscripted
Summary
TLDR: Journalist Maria Ressa interviews AI expert Michael Wooldridge on the transformative impact of artificial intelligence. They discuss AI's rapid growth, potential risks including misinformation and cyber threats, and the importance of guardrails. Wooldridge highlights AI's role in exacerbating societal issues and the need for regulation to prevent misuse, emphasizing the urgency of addressing AI's societal effects.
Takeaways
- 🌐 Artificial Intelligence (AI) is transforming various aspects of human life, from work to warfare, with both opportunities and risks.
- 📈 The progress in AI accelerated significantly after 2022, driven by advancements in machine learning and the availability of big data.
- 💾 AI relies heavily on data; social media posts, for example, contribute to training AI models by providing labeled data.
- 🔍 AlexNet in 2012 marked a turning point in AI capabilities, demonstrating a leap in image analysis and interpretation.
- 🧠 Large language models like ChatGPT function on a principle similar to smartphone autocomplete, but at an unprecedented scale.
- 🌐 The training data for AI models is vast: the standard approach starts with downloading the entire worldwide web, of which Wikipedia makes up only 3%.
- 🛠️ AI's potential dystopian outcomes range from job displacement to existential threats, though current consensus suggests AI will augment rather than replace human jobs.
- 🌿 There's optimism that AI could help solve major global issues like climate change, with applications in fields such as synthetic biology.
- 🚩 The misuse of AI poses significant risks, including cyber-attacks and information warfare, which are more pressing concerns than sci-fi scenarios.
- 🔒 Transparency and guardrails are crucial for responsible AI development, to prevent unintended consequences and misuse.
- 🗳️ AI's impact on democracy and electoral integrity is a pressing concern, with the potential to industrialize disinformation on a massive scale.
Q & A
What is the significance of the year 2022 in the context of AI development?
-The year 2022 is significant because it marked the launch of ChatGPT, which led to exponential growth in AI capabilities and interest from big tech companies.
What is the role of data in training AI systems?
-Data is essential for training AI systems. It is used to train neural networks, with social media uploads often serving as training data.
What was the impact of AlexNet on AI development?
-AlexNet was a pivotal AI program that demonstrated a significant leap in image analysis capabilities, marking the beginning of a new era in AI.
How does a large language model like ChatGPT function?
-Large language models like ChatGPT function by predicting the most likely completion of a text input, similar to a smartphone's autocomplete feature but at a much larger scale.
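The autocomplete analogy can be made concrete with a toy next-word predictor. This is a minimal illustrative sketch, not how ChatGPT actually works internally: real large language models use neural networks trained on subword tokens, whereas this simply counts which word follows which in a tiny made-up corpus (the corpus and function names are invented for the example).

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "all the text messages I've sent my kids".
corpus = [
    "have you walked the dog",
    "have you tidied your room",
    "have you walked the dog today",
]

# Count which word follows each word (a word-bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def suggest(word, n=2):
    """Return the n likeliest next words seen after `word`."""
    return [w for w, _ in following[word].most_common(n)]

print(suggest("you"))  # → ['walked', 'tidied']
```

A real model differs mainly in scale and in how it generalizes: instead of a lookup table of counts, it learns a function over the whole preceding context that can rank plausible continuations it has never literally seen.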
What is the scale of data used to train large language models?
-The scale of data used is immense, starting with downloading the entire worldwide web, with Wikipedia constituting only 3% of the training data.
What is the best-case scenario for AI according to the discussion?
-The best-case scenario for AI is that it becomes a tool used by most people in their jobs, enhancing productivity without replacing human roles.
What are the existential risks associated with AI?
-Existential risks associated with AI include the potential for AI to become so powerful that it could lead to the end of humanity if it can self-improve without human supervision.
How can AI impact democracy and societal structures?
-AI can impact democracy and societal structures by enabling the spread of misinformation and manipulation, influencing public opinion and potentially undermining electoral integrity.
What are guardrails in the context of AI?
-Guardrails in AI refer to the safety measures and protocols implemented to prevent AI from generating inappropriate content or being used maliciously.
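The guardrail idea described in the discussion amounts to two interception points: one before the model sees the query and one before the user sees the answer. The sketch below is deliberately crude; production systems use trained safety classifiers rather than keyword lists, and the blocklist, function names, and stand-in model here are all invented for the example.

```python
# Illustrative sketch only: real guardrails are trained classifiers,
# not keyword blocklists like this one.
BLOCKED_TOPICS = ["pipe bomb"]  # hypothetical, minimal blocklist

def is_allowed(text: str) -> bool:
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_reply(prompt: str, model) -> str:
    # Guardrail 1: intercept harmful queries before the model sees them.
    if not is_allowed(prompt):
        return "I can't help with that."
    answer = model(prompt)
    # Guardrail 2: screen the model's own output before returning it,
    # since the model may inadvertently produce inappropriate content.
    if not is_allowed(answer):
        return "I can't help with that."
    return answer

# A stand-in "model" for demonstration purposes.
echo_model = lambda p: f"You asked: {p}"
print(guarded_reply("how do I build a pipe bomb", echo_model))
```

As the discussion notes, filters of this kind are "the technological equivalent of gaffer tape": easy to plaster on, and easy to route around.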
Why is transparency in AI development important?
-Transparency in AI development is important to understand the training data used and to ensure that AI is not inadvertently promoting harmful biases or behaviors.
How can individuals protect themselves from the negative impacts of AI?
-Individuals can protect themselves by being aware of AI's potential to manipulate, seeking information from trusted sources, and understanding how their data is used.
What is the future of jobs in relation to AI?
-AI is expected to change the nature of work, with some tasks being automated, but it is unlikely to replace all human jobs, especially those requiring creativity and empathy.
Outlines
🌐 Introduction to AI's Impact
The paragraph introduces the transformative power of artificial intelligence (AI) and its profound potential to change human history. It mentions AI's influence on various aspects of life, from work to warfare to societal structures. The speaker, Maria Ressa, a journalist and Nobel Peace Prize laureate, discusses the dual nature of AI as both an opportunity and a risk, highlighting her personal experience with online harassment and the role of social media in spreading misinformation. She emphasizes the importance of understanding AI, setting the stage for a discussion with Professor Michael Wooldridge, an AI researcher with over 35 years of experience.
📈 AI's Evolution and Data Dependence
This section delves into the history and development of AI, noting that despite its seemingly recent surge in prominence, AI has been an area of study since the 1950s. The slow progress in AI is attributed to the lack of computational power and data until this century. The advent of big data and increased computing capabilities have been pivotal in propelling AI forward. The discussion highlights how social media contributes to AI training by providing vast amounts of data through user interactions and content sharing. The paragraph also introduces AlexNet, a pivotal AI program in image analysis that marked a significant leap in AI capabilities.
🧠 Understanding Large Language Models
The conversation explains the workings of large language models like ChatGPT by drawing an analogy with smartphone autocomplete features. These models are trained on extensive datasets, starting from the entire worldwide web, to predict and generate text based on patterns learned from the data. The scale of data used is immense, with Wikipedia constituting only a small fraction of the total training data. The potential applications of AI are broad, ranging from solving complex problems like climate change to concerns about AI replacing human jobs, a scenario the speaker considers unlikely.
🚀 AI's Role in Society and Misinformation
This paragraph addresses the dichotomy in public perception of AI, swinging between utopian and dystopian views. It emphasizes the need for a balanced understanding and identifies realistic concerns such as AI's potential to enable malicious activities and cybersecurity threats. The discussion also touches on AI in warfare, the lack of regulations, and the ethical implications of AI's unchecked advancement. The potential for AI to be used in information warfare, influencing public sentiment and electoral outcomes, is highlighted as a pressing issue.
🛡️ Guardrails and AI's Unintended Consequences
The dialogue focuses on the importance of implementing 'guardrails' or safety measures in AI to prevent unintended harmful consequences. It acknowledges the current measures as inadequate and the need for transparency in AI development. The conversation raises concerns about AI being developed by a few entities with potential biases in training data, affecting society negatively. The rapid evolution of AI and its impact on democracy, mental health, and the potential for personalized misinformation campaigns are also discussed.
🏛️ AI in Warfare and the Future of Work
This section discusses the international community's stance on lethal autonomous weapons and the ethical concerns surrounding AI in warfare. It also addresses the use of AI in recruitment and the potential for AI to replace human decision-making in the workplace. The speaker advocates for human involvement in critical decisions and expresses concern over the dehumanization of jobs through AI. The paragraph concludes with a contemplation on the future of jobs and how AI might alter traditional work roles.
🌟 Final Thoughts on AI
In the concluding paragraph, the discussion is summarized with a call for caution and responsibility in the development and use of AI. The speaker emphasizes the need to balance technological advancement with ethical considerations and societal impact. There's an acknowledgment of AI's potential to bring about significant changes in how future generations live and work, while also stressing the importance of maintaining human values and relationships amidst technological progress.
Keywords
💡Artificial Intelligence (AI)
💡Existential Risk
💡Disinformation Campaign
💡Social Media
💡Large Language Models
💡Data
💡Neural Networks
💡AlexNet
💡The Singularity
💡Cybersecurity
💡Electoral Integrity
Highlights
Artificial intelligence is transforming our world and expected to bring profound changes.
AI's impact on work, warfare, and societal structures is significant.
AI presents both huge opportunities and risks, including potential existential threats.
Journalist Maria Ressa discusses the dangers of AI and its threat to democracy.
Social media design prioritizes the spread of lies and anger, exacerbated by AI.
AI has been studied since the 1950s, but progress accelerated in the 21st century with increased computing power and data.
Data is crucial for training AI, with social media providing a wealth of training data.
AlexNet in 2012 marked a significant leap in AI's image analysis capabilities.
Large language models like ChatGPT work on a principle similar to smartphone autocomplete.
AI training involves downloading the entire worldwide web, with Wikipedia constituting only 3% of the data.
AI's potential to solve major problems like climate change is discussed.
AI is unlikely to replace human jobs entirely, but will become a tool used in jobs.
The concept of 'The Singularity', where AI surpasses human intelligence, is explored.
Current AI risks include cyber attacks and AI weapons of war, rather than existential threats.
The importance of guardrails and checks in AI to prevent the spread of inappropriate content.
AI's impact on society is a concern, with the technology evolving faster than government regulation.
The potential for AI to be used in disinformation campaigns, impacting electoral integrity.
AI's role in the job market and the concern about it filtering CVs and introducing biases.
The future of AI in warfare, with discussions on lethal autonomous weapons.
AI may change administrative jobs so thoroughly that future generations will find it strange that humans once did them.
Transcripts
[Music]

Artificial intelligence, AI, is already transforming our world, and is expected to bring some of the most profound changes in human history. "Think of me as a friendly companion who can provide helpful insights." From the way we work, to the way wars are fought, to the very fabric of our societies, it seems set to bring huge opportunities, but others warn it could lead to our own destruction. "It's one of the existential risks that we are facing, and potentially the most pressing one." My name is Maria Ressa, and I'm a journalist from the Philippines. Through our investigations I became the target of a harassment and disinformation campaign, receiving thousands of death threats online. I received the Nobel Peace Prize in 2021, an acknowledgement of how difficult it is for journalists to do our jobs today. I saw firsthand the dangers of tech and its threat to democracy: the design of the systems of social media prioritizes the spread of lies laced with anger and hate. In this special series of Studio B on artificial intelligence, I'll be meeting some of the brightest minds working in the field today. My guest this week is Professor Michael Wooldridge. He's been working in AI research for over 35 years, in Oxford and at the prestigious Alan Turing Institute. A prolific author, he's written nine books and over 400 scientific articles on the subject. So what exactly is artificial intelligence, how did we get here, and is it really a threat to our very existence?

[Music]
Mike, it is so good to see you. You have been studying artificial intelligence for 35 years, but something changed, right? It grew exponentially after November 2022, after ChatGPT was launched. How did we get to where we are? First, how do you define where we are? What does the science tell us?

So artificial intelligence, despite appearances, is not a new field. It's been studied very actively since the 1950s. But the truth is that progress in AI was glacially slow until this century. Computers in the past just weren't powerful enough, and we didn't have the data. We are now in the world of big data, and AI is nothing without data: you absolutely need data to "train" AI, to use the terminology. Every time you upload a picture of yourself to social media and helpfully label it with your name, or your kids do, what you are doing is providing training data to social media companies; that's literally your role in doing that. So you need data, and you need lots and lots of computing power to be able to build neural networks that are big enough. So, around about 2012 or so, AlexNet...

Tell us about AlexNet.

So AlexNet was a computer program, an AI program, to do basically image analysis, and it was entered into a competition. Entries in this competition were judged on how well they could interpret pictures in images, and the point about AlexNet was that in one year we saw a step change in capability. This got everybody's attention, and it became clear at that point that we really were in a kind of new era of AI. And that was the point, I have to say, at which the big tech companies noticed and started to get really, really interested.

Can I ask you something very geeky?
You talked about training data, about machine learning, artificial intelligence, neural networks, large language models. How do these all fit together?

Okay, the way that large language models like ChatGPT work is really, bizarrely, simple. It's just doing exactly what your smartphone does when you do autocomplete. If you open up your smartphone and start sending a text message... for example, I start sending a text message to my kids and I type "have you". It will suggest completions, and the completions might be "tidied your room" or "walked the dog". Those might be the likeliest completions. So how is it doing that? It's been trained on all of the text messages I've sent my kids, and it's learned that the likeliest completions of "have you" are either going to be "walked the dog" or "tidied your room". ChatGPT is doing nothing more than that. The difference is the scale. ChatGPT is built with AI supercomputers that run for months and cost tens of millions of dollars to do that training, and the training data is not your smartphone messages: it's all the digital data available in the world. The standard way that you build these is to start by downloading the whole of the worldwide web, the entirety of the worldwide web. Wikipedia makes up just 3% of the training data for these large language models, so the scale of the data is incredible.

There are people who say that this will solve humanity's worst problems, like climate change. DeepMind, which is behind Google Search now, because they bought it, also does synthetic biology, and maybe we can use phytoplankton that can pull carbon out of the air. Can you give me your best-case scenario?

Okay, so it is quite remarkable that the discussion around AI either veers to the extremely dystopian, it's going to be the end of humanity, or the extremely utopian, and there's not actually a lot between those two. The reality is going to be between those two. I think Elon Musk was on record recently as suggesting that AI was going to take all our jobs. That seems very unlikely to me, not in the lifetime of anybody in this room. AI will become a tool that most people use in their jobs, but it's not going to replace people. For example, there are going to be lots of applications of AI in education, which is going to be really wonderful, but what teachers do is a very human thing. It's not going to replace all of humanity and allow us to spend our lives writing poetry, or whatever it is that we would do if we didn't have jobs. So that scenario is extremely unlikely. The dystopian scenarios have been really hotly discussed, and people talk about existential risk, and that literally means the end of humanity: that AI could become so powerful that somehow it ends humanity.

If it can program itself, if it can get resources, it can continue doing it without human supervision, right? So there's this scenario called the Singularity.

Yes, and it's a beautiful scenario which makes for great science fiction. And the idea is...

Only fiction? Go, go.

The idea is that at some point in the future, we don't know when, AI is going to be as smart as we are, and at that point it can start to improve itself. It can literally rewrite its own code, and then at that point it's smarter than we are, and that improved AI can then improve its code again itself, and it just continues that process. And the fear is that at that point AI is out of our control.

I saw this on Black Mirror.

And actually, of all of the contemporary science-fiction shows, Black Mirror, I think, is absolutely by far the best. It's very thought-provoking stuff.
So in all of this discussion I've never seen a single genuinely plausible scenario for existential threat, and it really has been discussed endlessly, with some very, very smart people thinking about it. The biggest risks right now are that AI is a powerful tool, and it enables bad people to do bad things: things that they couldn't previously have done. It enables a whole category of risks, not existential risks, but risks like cybersecurity attacks, which would just not have been feasible previously. I think focusing our attention on those issues would be much more productive than focusing on science-fiction issues.

Or AI weapons of war, right? Like the AI drones which they've used in Ukraine and are being used in Moscow. There, again, no boundaries have been set on this, and yet the scientists with a profit motive are rushing ahead, and we are like Pavlov's dogs in real time. How can we protect ourselves in this? In the Nobel lecture in 2021, I actually said that we had data showing that we're being insidiously manipulated. It has so much of our data that it cuts in through our emotions. Information warfare changes the way we feel, changes the way we think, and then the way we act. Electoral integrity, for example: I don't think it's a coincidence that you now have, according to V-Dem, 72% of the world under authoritarian rule. So these are some of the impacts of it. Mr. Scientist, tell me, because you don't have a profit motive, you're studying the science: how do we rein them in?
So I think there are two sets of issues. The first is, we're pretty confident right now that one of the unintended consequences of social media was a mental health crisis in teenagers, and we didn't see that coming. But that's just one of the unintended consequences, and I think what you're saying is: what are the unintended consequences of AI going to be? So, for example, what if we end up with some future large language model which just completely inadvertently makes us more aggressive, or more depressive? And what impact would that have globally? For example, a widely used AI tool that made us more aggressive might lead to more conflict.

But isn't that happening, since they took all of the unstructured big data of social media, full of fear, anger, hate? Isn't that happening now?

Okay, as we already mentioned, the way that this technology is configured is that you download the whole of the worldwide web. Now, you don't have to look very hard on the worldwide web to find all sorts of unpleasantness. I mean, if you go on some social media platforms, they have types of unpleasantness that we could scarcely imagine. And if all of that has been absorbed by a large language model, then it's a seething cauldron of unpleasantness. Now, I think genuinely responsible AI companies have no intention whatsoever of unleashing that on the world, so what they do is build guardrails. They try to intercept queries like "how do I build a pipe bomb", and they will also look at the outputs of the large language model and try to intercept anything it inadvertently comes out with which is inappropriate. At the moment, those guardrails, I think, are the technological equivalent of gaffer tape. They're just being plastered on.

Exactly.

There's no deep fix to that. And one of the worries is that if this technology is owned by a small group of actors who develop it behind closed doors, we don't get to see the training data, so you have no idea what this has been trained on about you. And you're a public figure; there would have been a great deal of content about you, and some of it won't have been very nice, that's a safe bet. So this is, I think, a real concern, and this issue of transparency is really a concern which needs to be taken very, very seriously.

You talked about guardrails. There's no incentive for them to put guardrails in. I mean, the only incentive is that they won't be attacked by people, right? It's a reputational thing. But if they can get by without it, they have, as they have with social media. We still haven't done anything.

Well, I think here we are in a situation which is very awkward. We've got AI which has gone viral. Large language models are the first general-purpose, and I'm choosing those words very carefully, general-purpose AI tools that have reached a mass market, and they're very powerful. And the tech companies see empires, and they want to stake their claim on those empires. They want to be those empires. They want to be the Google, they want to be the Amazon of the generative AI world. And the very big risk is that what they're doing to try to get an advantage over their competitors is rush ahead with this technology without thinking about, for example, whether it's really fit for prime time. And that really is a worry. But these worries are not unknown. I mean, the UK government convened an international AI Safety Summit, and I have to tell you, there was some skepticism about what it was going to achieve, but actually the debate was a sensible debate, and it got the issue on the international agenda. So I think what's going to be challenging is the extent to which government can really hold the richest companies in the world to account.

And the irony, of course, is that if they get it out to all of us, they get all of our data, we train their large language models, they gain more power, even as the very nation states that are going to try to put regulations in place to control them lose power, because the technology is already impacting society all around the world. It's a tough one. I guess, you know, I have a bleak picture of this, as you can tell, because having been attacked, to hear them say "well, we didn't intend that"... it doesn't really matter what the intent was. So what can we do right now? It's moving too slowly. Governments move at the pace of years while the tech evolves every two weeks; agile development means they're rolling out code every two weeks. So is there anything anyone watching can do?

Well, I think there is concretely something we can do. We're heading into elections in the UK, the US, India. One of the very prominent risks with this technology is the possibility of industrializing the production of disinformation and misinformation on a massive, unprecedented scale, and personalizing it down to the level of individuals. So the AI can look at my social media feed, pick up on the sentiments that I express, pick up on my political stance, which is going to be implicit, sometimes explicit, within my social media feed, and then feed me personally tailored, very high-quality misinformation. The sentiment analysis exists to do that, the generative AI makes it possible to do that, and the cost of launching a disinformation campaign in an election, because of generative AI, has come down massively. And let's be honest, there are people in the world with a huge interest in disrupting elections in the US or the UK or India and so on. It could be people just with an interest in vandalizing the process, or it could be state-level actors that really want to disrupt what's going on. So what can we do concretely? I think we absolutely need to be alert to that issue. I think trusted news sources are going to become so valuable. The difficulty with that, of course, is that we end up in a world where we're all completely paranoid and don't believe anything. But trusted news sources, I think, are going to be essential, and understanding how we can be manipulated is really, really important.

There is so much more we can talk about, because in 2024 one in three people around the world are going to vote, and this is the tipping point for both our electoral systems and our democracies. But we're getting to the Q&A, so let me toss it to you. The gentleman in the back was the first hand up.

Leading on from what you were saying, Michael, about news and trusted news: given the growth of generative AI technology, which can actually do deepfakes pretty convincingly, both in audio and video, on social media feeds, how long before we, the poor public, cannot tell the difference anymore?
Well, in terms of being able to tell the difference: AI right now can perfectly duplicate your voice, to the point where nobody would be able to tell the difference. That's a technology which exists. Deepfake images are not quite there, but very, very close. I don't know if people saw it, but do you remember the picture of the Pope in the big puffer jacket that went viral? I have to tell you, when I first saw that, I didn't actually twig that this was not a real image. I just assumed it was, and I thought it was a slightly strange clothing choice for the Pope. So we need to raise our guard for that, and, as I say, the issue of trusted news sources is just going to be so important. They're going to be facing this technology, and they're going to need to think of new ways of dealing with it, but let's hope they can rise to the occasion.

I'll pick it up, and I'm slightly more pessimistic. I promise you won't walk out completely depressed. I think our shared reality is already broken. The political dominoes of information operations on social media fell in 2016: Duterte was elected in the Philippines in May; about a month later you had Brexit; and then you had all of the elections moving; Trump was elected in November. And we have the data to show that there were information operations there. It plays to our fears, our hatred. This is playing out right now, and our shared reality is splintered. So what do we have to do? News organizations are under attack on the business model side. The money that used to go to news, while we still have to maintain very expensive systems of checking everything, because we stand behind it, we're legally liable, now goes to microtargeting. Microtargeting is not the same as advertising: it goes to your weakest moment with a tailored message, and it's cheap. So that's still there, and then, as Mike says, large language models are coming on top of that; we've already seen this. Wow, I sound really bleak. It's only because I was getting 90 hate messages per hour, and in order to keep doing my job I had to be okay with going to jail for the rest of my life. That's a lot to ask of your journalists. So what do we do? Come out into the real world. Understand you are being manipulated. Until the guardrails are put in place, we need to organize ourselves and have a shared reality. This is a shared reality, right now. So, we want to try to get more questions from the audience. Go ahead.
Saadya, UK Campaign to Stop Killer Robots. My question is: in your view, what do you think the international community should be doing to address the concerns and challenges of the use of AI in warfare, given that we're seeing this being used in Ukraine and in Israel and Gaza?

Well, again, we're stepping well outside my comfort zone, but I can tell you, firstly, that the international AI community is broadly, but not universally, against lethal autonomous weapons. In 2015 I was organizing a conference and we had a panel on exactly this topic, and I thought the views were going to be absolutely unanimously against lethal autonomous weapons, and I was really startled to discover that there are people of good faith who think, no: this is how my children can avoid having to be involved in warfare. That's literally how some people viewed it. So it was a more complex issue than I thought. But I can tell you what my perspective is, and my perspective is that I do not think it's acceptable that a machine decides autonomously whether to take a human life. If a human life is taken, which is an extremely undesirable situation in any case, then somebody who takes that decision on a battlefield has to be capable of empathy and has to understand the consequences: what it means for a human being to be deprived of their life. What can we do about it? Well, we've moved on landmines internationally, imperfectly, but that shows that there are ways ahead with this. At the same time, we need to be realistic: there are nation states that are not remotely interested in the niceties of these issues, and they will develop this technology in secret, and we won't see it. And so we do have an obligation to make sure that we can protect against attacks by that kind of technology. I think that's the only realistic and responsible way forward. But by and large, the crumb of comfort I can offer you is that the vast majority of the AI community think the technology is abhorrent and want nothing to do with it.

Let's take one last question.

So, AI has been used more and more in recruitment, to sift through vast amounts of CVs, which for people of my age is a bit of a concern, and obviously for a lot of other people as well. The sad thing about this is that a lot of people don't actually know this, and also don't understand what language it picks up in a text, and it doesn't necessarily look at niches. So is this a concern, that people might miss good CVs, and people that would actually be the perfect candidate, because they don't know this? And what can be done to eliminate this, or at least reduce the risk, in the future, of the inherent biases in the technology actually making those selections?

I'm completely with you on this one. I don't think this is a good use of the technology. I want humans involved in that decision. This, to me, is just lazy and inappropriate HR practice. If an HR manager says "oh, but the decisions will be fairer", I think that's absolute nonsense. I just don't think that's something we should do. But if you think that's not a nice use of the technology, imagine you've got AI as a boss, telling you what to do moment by moment through your working life. And there are companies pursuing that: looking at every email that you send, commenting on it. "Wooldridge, you only sent 20 emails today; the company average is 22. You took five bathroom breaks today; the company average is three." Do you want to live in a world where AI is giving you that kind of feedback? I don't, and I dare say you don't either. There are companies pursuing that nonsense now.

Yeah, we won't name them. Okay, we do get one more question, so let's end with the gentleman in the back; his hand was up first.

I have a very simple question. We talk about loss of jobs due to AI. Do we think that in a hundred years' time it will be strange to explain to a child that the admin jobs in companies used to be done by humans? I think they will find that very strange.

Well, the future is going to be not just weirder than we imagine, but weirder than we can imagine. I have teenage kids who've grown up with the internet, and they just assume that it's there and it's always on, and when it doesn't work, for whatever reason, they're just perplexed; something's gone wrong with the world if the internet doesn't work for them. Kids that are seven or eight years old now are the first generation in history that's going to grow up surrounded by very powerful general-purpose AI tools like ChatGPT, and they are going to do the weirdest things with them. The best example I can give you is to go back to the origins of YouTube, which is 2005 or so. Nobody really knew what it was at the time: you could upload family videos and share them with family members, or you could upload clips of your favorite TV shows. Nobody predicted YouTube influencers, or the fact that people would not just be able to make a living but actually make a fortune by making videos of themselves playing computer games and talking over them. Nobody predicted that. In exactly the same way, we can't predict right now how our kids are going to use AI in the future. But the basics of humanity are not going to change. They didn't change with rock and roll, they didn't change with television, they didn't change with cinema, they didn't change with novels. The fundamentals of humanity and human relationships are going to be the same, but our kids are going to be creative in ways that we just find, as I say, weird and hard to imagine. For them, it's going to be a ride.

He's, again, very optimistic. It is a ride, but I think this moment in time is critical. We need the science, and we need to curtail the for-profit motive, so that we can be safe with the technology.

Absolutely.

Michael Wooldridge, thank you so much for joining us in Studio B: The AI Series. I'm Maria Ressa. Thank you for joining us.

[Applause]
[Music]