AI: What is the future of artificial intelligence? - BBC News
Summary
TL;DR: In this discussion, experts Evan Burfield and Gary Marcus debate the need for global governance of AI to prevent job loss, misinformation, and potential threats posed by AI. They argue for a coordinated approach to AI regulation and research, warning against waiting to act until problems arise. The conversation highlights the urgent need for policymakers to understand and prepare for AI's societal impacts, particularly in the upcoming 2024 U.S. election cycle.
Takeaways
- 🤖 There is a growing concern about the rapid advancement of AI and its potential societal impacts.
- 🚫 Elon Musk and other influential figures are calling for a temporary halt in AI development to better understand its implications.
- 🌐 Professor Gary Marcus advocates for global governance of AI, similar to the International Atomic Energy Authority, to prevent fragmented and ineffective policies.
- 🧠 Evan Burfield highlights the difficulty of enforcing a moratorium on AI development due to competitive pressures and varying levels of compliance.
- 📈 The potential of AI to significantly disrupt society and democracy is a major concern, with the 2024 political cycle being a critical timeframe for impact.
- 🛑 The idea of a moratorium is not only about regulation but also about preparing society for AI's wide-ranging effects.
- 📚 Lessons from the internet and social media's impact suggest that proactive measures are needed to avoid repeating past mistakes with AI.
- 🔒 There is a pressing need for education of policymakers to understand AI technologies and their implications for governance and society.
- 🌐 The creation of a global AI governance body is proposed to foster international cooperation and to manage the rapid pace of AI advancements.
- 🔮 The future of AI is not just about technological advancement but also about the ethical and societal frameworks that guide its development and use.
Q & A
What is the main concern regarding the advancement of AI as discussed in the transcript?
-The main concern is the potential loss of control over AI, the spread of propaganda and fake news, and the possibility of machines outsmarting humans, which could lead to significant societal and ethical challenges.
Who are some of the key figures calling for a moratorium on AI development?
-Key figures calling for a moratorium include Elon Musk, along with thousands of entrepreneurs, academics, and scientists.
What does Professor Gary Marcus suggest as a solution for global AI governance?
-Professor Gary Marcus suggests a global governance model similar to the International Atomic Energy Authority, where the world collaborates to set rules and develop tools to mitigate AI threats.
What are the potential negative impacts of AI on society according to the discussion?
-Potential negative impacts include job displacement, the spread of misinformation, cybercrime, and the risk of AI models being used to create propaganda.
What does Evan Burfield think about the feasibility of a moratorium on AI development?
-Evan Burfield believes that a moratorium is hard to enforce and might give a false sense of security. He suggests focusing on preparing society for AI implications rather than trying to halt progress.
What does the discussion indicate about the current state of AI understanding among policymakers?
-The discussion indicates that many policymakers lack a deep understanding of AI and its implications, which is a significant concern as AI continues to advance rapidly.
What historical lesson from the internet and social media does Professor Marcus highlight?
-Professor Marcus highlights the lesson of not waiting too long to act on new technologies, referencing the issues of privacy, polarization, and misinformation that arose from the internet and social media.
What does Evan Burfield foresee as the impact of AI on the 2024 political cycle in the U.S.?
-Evan Burfield foresees AI having a profound impact on the 2024 political cycle in the U.S., including the potential for AI to influence elections through misinformation and propaganda.
What is the 'tsunami' metaphor mentioned by Evan Burfield referring to?
-The 'tsunami' metaphor refers to the overwhelming and transformative impact that AI advancements are expected to have on society, economy, and various aspects of life.
What does Professor Marcus recommend for dealing with the rapid advancements in AI?
-Professor Marcus recommends immediate action and the establishment of a central oversight or global organization to coordinate responses to AI advancements and mitigate potential threats.
What is the Auto-GPT mentioned in the transcript, and why is it concerning?
-Auto-GPT refers to AI systems training other AI systems, which is concerning because it represents a rapid and potentially uncontrolled advancement in AI capabilities, raising questions about the speed at which AI is evolving and the potential lack of oversight.
Outlines
🤖 AI Ethics and Global Governance
The paragraph discusses the ethical implications of AI advancements and the necessity for global governance. It mentions concerns about job automation, AI-generated propaganda, and the potential for AI to surpass human intelligence. The conversation highlights the call for a moratorium on AI development until further understanding is achieved. Elon Musk and other influential figures are noted as supporters of this pause. The discussion introduces two experts: Evan Burfield, a tech investor, and Gary Marcus, a professor at New York University, who advocate for coordinated global governance of AI, drawing parallels to international organizations like the International Atomic Energy Authority.
🌪 Preparing for the AI Tsunami
This section of the script focuses on the inevitability of AI's impact on society and the economy. Evan Burfield, based in Austin, Texas, observes that startups are actively integrating AI into their operations. He discusses both the dystopian risks and the potential for AI to enhance various sectors, including medicine and government services. The conversation touches on the challenges of regulating AI and the importance of adapting social and market policies to harness AI's benefits while mitigating its risks. There's a consensus that policymakers must be better educated about AI to prepare for its transformative effects, which could significantly impact the 2024 political cycle in the U.S.
🏛 Policymakers and AI Understanding
The paragraph emphasizes the gap in understanding AI among policymakers. It points out that while there's a growing awareness of AI's potential, there's a lack of proactive measures to prepare for its implications. The discussion suggests that policymakers are not considering the broader implications of AI, including its potential to be supercharged by other technologies like quantum computing. There's a call for the establishment of institutions that focus on AI's long-term impact and the need for a global organization to coordinate AI governance, similar to existing international entities.
🗳️ AI and Democracy: Challenges Ahead
This final paragraph of the script delves into the potential for AI to disrupt democracy through misinformation and propaganda. It raises concerns about AI's role in influencing election outcomes, drawing parallels to past instances of foreign interference. The conversation underscores the urgency for policymakers to understand AI to effectively address these challenges. It also highlights the need for education and proactive measures to harness AI's potential for positive societal impact while safeguarding against its risks.
Keywords
💡Artificial Intelligence
💡Moratorium
💡Global governance
💡Misinformation
💡Cyber warfare
💡Competitive Advantage
💡Quantum Computing
💡Policymakers
💡Election Security
💡Terminator Scenario
💡Generative Models
Highlights
Discussion on the potential risks of AI and automation, including job displacement and the spread of propaganda.
Call for a moratorium on AI development until we better understand its implications.
Elon Musk and thousands of others signed an open letter advocating for a pause in AI training.
The need for global governance of AI to prevent a patchwork of regulations.
Proposal for an international AI authority similar to the International Atomic Energy Authority.
Concerns about the difficulty of enforcing a moratorium on AI development.
The importance of preparing society for the implications of AI.
Discussion on the potential for AI to cause a 'tsunami' of upheaval in the next five years.
The current state of AI is already profound and will change how we live, work, and engage with each other.
The impact of AI on democracies and the 2024 political cycle.
The challenge of educating policymakers about AI and its implications.
The potential for AI to be supercharged by other technologies like Quantum Computing.
The UK government's decision not to have a dedicated AI regulator and the implications of this decision.
The need for a central oversight body for AI in the United States and potentially a G7 meeting of AI ministers.
The rapid pace of AI development and the challenges it poses to regulation and governance.
The potential for AI to interfere in democracies and elections, and the need for education and preparation.
Transcripts
should we automate away all the jobs
including the fulfilling ones should we
allow AI machines to flood the internet
with propaganda and fake news should we
develop non-human Minds smarter than our
own machines that might one day
outnumber us or outsmart us do we risk
losing control now you might think that
sounds like some futuristic script from
a Terminator movie but last month some
of the most well-known figures who are
involved in the development and training
of artificial intelligence called for a
moratorium until we better understand
where we're going an open letter was
signed by thousands of entrepreneurs
academics and scientists including Elon
Musk who wants the training of AI
halted for at least six months
months we're going to dig deep into this
over the next 20 minutes or so in the
company of two people who know a thing
or two about it joining me is the tech
investor Evan Burfield the author of
Regulatory Hacking A Playbook for
Startups he's in Texas and Professor
Gary Marcus is in Vancouver he's
professor emeritus at New York
University and author of Rebooting AI
Professor let me start with you
um clearly with such advances that we're
seeing we have to set some guard rails
who do you think should be in charge of
that
well I think that
we need Global governance for AI I think
that we have a lot of patchworks right
now almost balkanized
um the worst case from the company's
perspective and the world's perspective
is if there's 193 jurisdictions each
deciding their own rules requiring their
own training of these models
um each run by governments that don't
have much specific expertise in AI so
what I called for in an Economist editorial
this weekend and in a TED talk
earlier this week was to have a global
system modeled on something like the
international atomic energy Authority
where the world comes together and says
we have a new threat here but it's
really a new set of threats and we need
to work together on this so I think the
number one thing is it should be Global
and the number two thing is it can't be
just
um policy but it also has to be a
research side because we need to invent
new tools like we had to invent for
fighting spam and cyber warfare and so
forth there's so many different threats
as you mentioned around misinformation
cyber crime and so forth so we need to
have a kind of standing organization
that's global and well-financed to try to
build tools to mitigate those threats so
Evan burfield there are many many people
who who just want to press the pause
button until we work out some of these
things but I can already see and I've
heard uh the reasons why that probably
isn't possible and and that is because
not everybody will stop and people are
worried about losing competitive
Advantage so how do we best go about
this
yeah I think that's exactly right
um you know the the challenge with a
moratorium is that it's incredibly hard
to enforce
um the responsible actors would be more
likely to follow it the irresponsible
actors wouldn't but that's actually not
so much my concern with the moratorium
there's absolutely questions we need to
be asking about the governance of AI
about what industry can do what
government can do uh I think the letter
did spark a conversation Schumer is
working on a new AI bill here in the U.S
rumor is McCarthy's working on a
republican version
but what I think is actually much more
important is to start to have the
conversations about how we prepare our
society our economy uh our political
system democracy itself for all of the
implications of AI that are coming one
way or another and I suspect we'll see a
year from now we'll go this was less
impactful than we thought five years
from now it will be an absolute tsunami
of upheaval and we have this window
right now where we can have this
conversation and we can get creative and
I think we've got to use it and a
moratorium gives us this false sense of
security that we have control and can
stop it versus figuring out how we ride
this tsunami and try to direct it in a
much better Direction Professor Marcus
did we
did we learn anything from the the last
technological Advance the advance of the
internet of social media are there
lessons from that which we let's face it
we didn't do very well that are
applicable here
I think the number one lesson is you
don't want to close the Barn Door after
the horse has left I think you know
we're very late in in figuring out what
to do about social media I think we
probably handled privacy alone in the
wrong way
um we wound up with so much polarization
and hostility we wound up with
misinformation
um I think we waited too long to act I
think the number one lesson is we should
get on it right now and I agree with the
other panelists that the moratorium You
could argue about the merits whether it
was the right thing or the wrong thing
it was absolutely the right thing to
raise this and get it on everybody's
agenda this is not something we want six
months from now something we need now so
Evan burfield when you talk about a
tsunami in five years time what does
what does that look like
uh look I'm down here in Austin Texas uh
at Capital Factory a startup accelerator
I've spent the whole day uh you know
meeting with startups and there's not a
startup right now out there that is not
applying these AI generative models
these large language models to every
interesting problem under the sun and
there's all of the the scary dystopian
possibilities that you uh led into this
segment with but there's also uh
incredible advances in how to make work
more fulfilling and more impactful how
to apply uh tremendous personalization
to Medicine based on our genetics our
environment the particular issues we're
having how do you make government more
responsive and feel more like a
concierge to Citizens all of that is
also being worked on
um and I think figuring out how we put
the guard rails in place around some of
the scarier things which isn't just
about regulating AI it's about changing
our social policy changing our Market
policies themselves
so that we can mitigate some of that and
and direct this into the the much more
hopeful and optimistic Direction why why
on that point though Evan is it is it
imaginable in the current scenario you
are around it all the time that a
research lab would cross a critical line
here without even noticing
I'm personally skeptical you know uh
Gary's written some wonderful points
about the fact that we are we are very
very far I believe from artificial
general intelligence and the Terminator
scenarios what I think we've got to be
very aware of right now is simply that this
technology is already right now today at
a state if it did not Advance any
further where its application is going
to profoundly change how we live our
lives how we work how we engage with
each other in communities how our
democracies function uh the impacts on
our democracies are going to be felt
right in the 2024 political cycle here
in the U.S that's what I think we need
to be talking about and preparing for
the scenarios of AI is like nuclear
weapons we have to ban it immediately I
think are much less applicable to the
the much more realistic uh changes that
are already happening around us right
now that are going to accelerate Miles
um you've just come back from Washington
and I know that you've been talking to
policymakers about the specific issues
in fact the reason we're talking about
it tonight is because you tweeted no one
has a clue I I mean is that is it as
blunt as that that nobody really
understands it there is practically no
work being done on it
hi Christian you're you're spot on the
three biggest challenges right now with
policy makers are one this was
completely foreseeable there were some
of us in Washington talking about this
10 or 15 years ago policymakers weren't
paying attention and most of the think
tanks in Washington really failed to
start a conversation about the Practical
things that needed to be done to prepare
for the age of AI so we're behind the
ball from a policy making standpoint the
second thing I would emphasize what Evan
burfield just said there is a wave
coming and you can do two things when a
wave is coming you can get crushed by it
or you can ride the wave and to use
another analogy right now the discussion
in Washington is about whether to put
the genie back in the bottle or not that
shouldn't be the discussion it should be
what three wishes should we ask the
genie and that's the discussion that
should be had about how to handle Ai and
use it for good purposes and finally the
other problem is policy makers are not
thinking two steps forward on the
chessboard it's AI right now but in in
this decade AI is going to be
supercharged by other Technologies like
Quantum Computing that are going to give
machines genuine human-like emotion what
are we doing to prepare for that we
should be having that conversation now
there needs to be institutions in
Washington that focus on that so Jack
we've had a discussion about that in
this country and the UK government has
decided that it doesn't need a dedicated
UK regulator for AI so who's overseeing
it
that's a very good question I mean uh I
I was on stage last night with the
chancellor Jeremy Hunt and I asked him
about this you know he's the guy in
charge of the UK economy
um and he was really quite dismissive
he's he in in the sense that he said you
know this is something that is going to
happen and we have always embraced new
technologies in this country and we
should do so again uh it's full steam
ahead was the phrase that he used
um you know he was very very positive he
did he did not want to talk about the
possibility that people would lose their
jobs because of this technology he only
saw it as a purely positive thing and he
was not keen to talk about the way it
should be regulated now you know I'm no
expert on this stuff I'm a politics
guy but what I do know is how
Westminster works and how um political
systems work and I can tell you now and
you'll know this Christian there is no
way our political system is set up to
deal with this challenge absolutely no
chance the speed at which decisions are
made in Westminster and I suspect in
other major political centers is far too
slow to cope with the pace at which this
technology is coming the policy makers
do not understand it at all this is just
something that is going to wash over us
and we're going to have to cross our
fingers you know the UK government put
out a white paper which is what they
call their draft strategy on AI the
other day I mean just the very name of
it white paper tells you how old school
this is yeah you know it's out of date
already and that thing took you know
years for them to put together
um we just don't have the sort of nimble
small system smart thinking people set
up to deal with this and I'd be very
surprised if that's different in the US
or indeed in many of the other big Power
centers well they clearly don't
understand it Professor Marcus do they
call you in to try and get you to
explain it to them
yeah I was talking to people in the U.S
and Canadian government yesterday I've
been called a lot lately
um I think there is an awareness that
people don't quite know what to do and
they are increasingly turning to me
um and also turning to all of my you
know academic colleagues and so forth so
I think that there's at least a
recognition and people know what they
don't know I do think that the UK white
paper saying that you won't have a
central office of AI is certainly for
all the reasons that were kind of
implicitly just said which is
um the government is going to be
ill-equipped to deal with the speed of
this and if you just leave it to 20
different Regulatory Agencies Each of
which don't have expertise you're asking
for trouble you're asking for a lack of
coordination and it's just not realistic
that all of those agencies are going to
be up on things so there needs to be I
think at least some Central oversight I
think the United States should consider
a cabinet level AI officer and you
should consider something comparable
um you need some people maybe like a G7
then we have a G7 meeting of foreign
ministers we need a G7 meeting of
AI ministers is that is that
effectively what you're saying well I
mean I'm calling for something similar
which is a global organization uh kind
of like the IMF or an international
atomic energy agency
um where you have a lot of experts you
have a lot of people in government you
have a lot of people in uh the companies
and yeah you have regular meetings
you're like well this week the new thing
is this thing called this is a real
example called Auto-GPT where you have
AIs training other AIs what do we do
about that how big a threat is it is it
a small threat big threat like if you
have a research arm then you can say
let's do some experiments here and try
to figure out what the limits are right
now instead you have like 193 countries
maybe some of them have read the news
about this major new discovery some of
them aren't even aware of
it and there's like no coordination here
that just can't be the right way you're
nodding Evan because this is the key
issue it's miles as miles discussed it's
not human competitive intelligence it's
it's what happens after AI gets smarter
than human intelligence right amazing
but you know I I can't go to a
conference I I actually live in
Washington DC most the time I can't go
to a dinner or a conference a meeting
without the word AI being discussed and
they're all talking about chat GPT and
Gary's right about Auto-GPT
there was an experiment run last week
called ChaosGPT where they took a
neutered version of Auto GPT and told it
to go out and figure out the most
efficient way to destroy Humanity it was
a it was sort of a test and it's set to
work doing it there's there's a lot of
this stuff is moving incredibly fast and
figuring out how you can educate policy
makers about how to mitigate regulate
bring transparency to some of those
threats while not preventing what can be
breathtaking advances in
um how we live our lives in much more
fulfilling and purposeful ways and
Society I think that's that's a lot of
the trick here to Echo Jack's point
though about white papers and the way
you know government moves I I tend to
agree you know miles may be more
optimistic than I am but I tend to agree
I think a lot of the big changes that
are going to need to happen probably
won't happen until uh there's some sort
of provoking event some sort of Crisis I
don't think that though prevents us from
starting to have the conversations at
least the Way Washington tends to work
at least you want to have the the policy
container the framework the ideas ready
some sort of consensus being built so
that when the opportunity presents
itself kind of like a a VC who sees a
great startup right when the opportunity
presents itself you're ready to jump on
it you're ready to move forward and I
think that that has to be happening
right now
go ahead
the opportunity that I see right now is
to build some Global governance I think
you have the governments are afraid of
the technology companies the technology
companies are afraid the governments are
going to shut them down as they did in
Italy and this means everybody has some
incentive to go to the table that's rare
and I think we should be seizing that
opportunity right now to try to do
something coherent that is dynamic
enough to cope with the speed of the
change to take advantage of the good
things and and to avoid the bad things
but we need that coordination now and we
can't just leave this to the usual
mechanisms it's just too slow Miles one
of the more worrying things that you
said was that Speaker McCarthy was
looking at an AI bill for Republicans
you know one of the experiences we have
of recent years is that the Russians
were able to interfere in a democracy
and who knows arguably it's been debated
whether they were able to change some of
the results through what they were
putting onto the internet I mean we're
into a whole new ball game for democracy
if AI can put out misinformation and
propaganda
it's not if it's how much and when it's
going to happen probably in the 2024
election wow yeah there's no question I
mean this this coming election cycle in
the United States it's a big concern for
election security authorities it should
be but I yeah I got to go back to what
the other panelists said in order to
respond to it effectively we've got to
start with education and right now I
mean I've tried to brief policy makers
on this it's like explaining particle
physics to a chocolate chip cookie I
mean there's just not recognition about
what's happening if I was President Joe
Biden right now I would put
the entire cabinet on Air Force One I
would fly them to Silicon Valley and we
would spend the week educating
them about what's happening because
there aren't just these security
implications for the elections as Evan
notes there's also really positive
implications I mean there's the ability
to address major health care problems
hunger homelessness and to do it in real
time and we are missing some
opportunities by policy makers not being
educated on the subject but of course
security has to come first and in order
to protect elections or anything else
it's got to start with you know policy
makers becoming technologists being
educated
fascinating conversation we're gonna
have to leave it there Evan burfield
Gary Marcus thank you very much indeed
for joining us