The AI Dilemma: Navigating the road ahead with Tristan Harris
Summary
TL;DR: The speaker discusses the challenges and risks posed by AI, emphasizing the need for governance to keep pace with technological advancements. The talk highlights the complexity AI introduces, comparing it to social media's impact, and warns of the dangers of AI-driven misinformation, fraud, and societal issues. The speaker advocates for a balance between AI's benefits and risks, urging better governance, safety measures, and responsible deployment. They propose using AI to improve governance itself, ensuring that society can effectively manage the technology's rapid evolution.
Takeaways
- 🧠 The script discusses the profound impact of AI, likening it to giving humans 'superpowers' and amplifying our capabilities exponentially.
- 🤖 It highlights the work of the Center for Humane Technology, focusing on designing technology that strengthens social fabric rather than undermining it.
- 🌐 The speaker emphasizes the importance of understanding AI risks to steer towards a positive future, acknowledging the complexity of modern issues like social media's impact on society.
- 📈 The script points out the 'race to the bottom of the brain stem' for attention, illustrating the incentive-driven design of social media platforms that can lead to negative societal outcomes.
- 🏁 The film 'The Social Dilemma' is mentioned, which explores the unintended consequences of social media, serving as a cautionary example of AI's potential dangers.
- 🔑 The incentives behind social media are identified as a key driver of its negative impacts, with a focus on engagement over societal well-being.
- 🌐 The script raises concerns about the rapid development of AI and its alignment with 20th-century governance structures, calling for an upgrade in governance to match technological advancements.
- 🚀 It discusses the 'race to roll out' AI, where market dominance drives the release of AI models, potentially overlooking safety and inclusivity.
- 🔮 The dangers of generative AI are exemplified, such as the creation of deepfakes and the potential for misuse in various sectors, including politics and journalism.
- 🛡 The speaker calls for a reevaluation of incentives and governance related to AI deployment, suggesting measures like safety requirements and developer liability for AI models.
- 🌟 Finally, the script suggests leveraging 21st-century technology to upgrade governance processes, aiming to create a future where AI benefits are realized without compromising societal values.
Q & A
What is the main focus of the speaker's presentation?
-The speaker's presentation focuses on the dilemma of AI, discussing how AI amplifies human capabilities and the challenges it poses to society, governance, and the ethical considerations of its development and deployment.
What is the 'AI Dilemma' as mentioned in the script?
-The 'AI Dilemma' refers to the paradox where AI, while offering significant benefits, also introduces complex challenges and risks that society must navigate carefully to ensure a positive future.
What is the Center for Humane Technology?
-The Center for Humane Technology is an organization that the speaker represents, which is dedicated to considering how technology can be designed to be humane and beneficial to the systems that humans depend on.
Why is the speaker concerned about the current trajectory of AI development?
-The speaker is concerned because the rapid development of AI is outpacing our ability to govern and understand its implications, leading to a complexity gap that could result in negative consequences if not addressed properly.
What role does social media play in the speaker's discussion?
-Social media is presented as the first contact between humanity and a form of runaway AI, causing various societal issues such as addiction, misinformation, and mental health problems, which serve as a warning for the potential risks of AI.
What does the speaker mean by 'race to the bottom of the brain stem'?
-This phrase describes the competition among social media platforms to capture users' attention by any means necessary, even if it involves exploiting the most primitive parts of the human brain.
What is the 'Social Dilemma' documentary, and why is it relevant to the speaker's discussion?
-The 'Social Dilemma' is a documentary that explores the negative impacts of social media on society, which is relevant to the speaker's discussion as it exemplifies the unintended consequences of AI-driven platforms.
What is the 'race to roll out' and how does it relate to AI development?
-The 'race to roll out' refers to the competition among AI developers to release new models and achieve market dominance, often at the expense of safety and ethical considerations.
What is the concern with generative AI and its potential misuse?
-Generative AI can be misused to create deepfakes, spread misinformation, and manipulate public opinion, which poses significant risks to society if not properly regulated and controlled.
What solutions does the speaker propose to address the challenges posed by AI?
-The speaker suggests investing in safety research, aligning incentives with responsible AI deployment, and using technology to upgrade governance processes to match the pace of technological advancement.
What is the 'upgrade governance plan' mentioned by the speaker?
-The 'upgrade governance plan' is a proposal to invest in governance mechanisms that keep pace with technological advancements, ensuring that regulations and safety measures evolve alongside AI capabilities.
Outlines
🧠 The AI Dilemma and Humane Technology
The speaker from the Center for Humane Technology introduces the concept of AI as a double-edged sword, amplifying human capabilities but also introducing complex challenges. The talk emphasizes the need to understand AI risks to steer towards a positive future. It discusses the rapid increase in world complexity due to technology, the importance of governance keeping pace with technological advancement, and the metaphor of humanity having Paleolithic brains with godlike technology. The Social Dilemma film is mentioned as a reference point, highlighting early issues with social media AI, such as engagement-driven design leading to negative societal impacts.
🌐 The Unintended Consequences of Social Media AI
This paragraph delves into the darker side of social media's impact, driven by attention-grabbing incentives that led to a variety of societal issues like addiction, misinformation, and mental health problems. The speaker criticizes the beautification filters on platforms like TikTok for promoting unrealistic beauty standards. The paragraph also touches on the influence of AI on media, elections, and children's development, suggesting that the race for engagement has ensnared society in a complex web of issues.
🏁 The Race to Rollout: Generative AI's Risks
The speaker warns of the impending challenges with generative AI, driven by a race for market dominance rather than safety or ethical considerations. The focus is on the potential for misuse, such as creating deepfakes, fraud, and the exacerbation of existing societal issues. An example is given of how AI can generate damaging content about an individual, illustrating the ease with which AI can be manipulated to create convincing but false narratives that could have real-world consequences.
🚀 AI's Double Exponential Growth and Safety Concerns
This section highlights the pace at which AI is advancing, noting that it's not just exponential but double exponential, with AI being used to improve itself and other technologies. The speaker points out the significant gap between resources allocated to enhancing AI capabilities versus ensuring AI safety. The paragraph calls for a reevaluation of incentives and a stronger focus on safety to prevent AI from undermining the very foundations of society.
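The distinction between exponential and double-exponential growth can be made concrete with a toy numeric sketch (an illustration, not from the talk; the functions and numbers are hypothetical): ordinary exponential growth multiplies capability by a fixed factor each step, while double-exponential growth compounds the exponent itself, as when AI is used to improve the chips and code that train the next AI.

```python
# Toy comparison of exponential vs double-exponential growth.
# Hypothetical illustration only: base 2 and the step counts are
# arbitrary, chosen just to show how quickly the two curves diverge.

def exponential(base: float, t: int) -> float:
    """f(t) = base**t — capability grows by a fixed factor per step."""
    return base ** t

def double_exponential(base: float, t: int) -> float:
    """g(t) = base**(2**t) — the exponent itself doubles each step,
    a crude stand-in for 'AI improving the process that improves AI'."""
    return base ** (2 ** t)

if __name__ == "__main__":
    for t in range(1, 6):
        print(f"step {t}: exponential={exponential(2, t):.0f}  "
              f"double_exponential={double_exponential(2, t):.0f}")
```

By step 5 the exponential curve has reached 32 while the double-exponential curve has reached 2^32, which is the speaker's point: the gap between capability growth and safety work widens faster than linear intuition suggests.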
🛡️ Upgrading Governance for the AI Era
The final paragraph proposes a rethinking of governance systems to match the pace of technological change. It suggests that for every investment in AI capabilities, a portion should be dedicated to safety and governance. The speaker proposes ideas like provably safe AI models, whistleblower protection, and liability for AI developers. The paragraph concludes with a vision of using AI to enhance governance processes, emphasizing the collective desire for a future where AI is used responsibly for the greater good.
🎵 Closing Thoughts
The closing paragraph is marked by the presence of music, indicating the end of the speaker's presentation. It serves as a reflective moment, leaving the audience with the weight of the discussed topics and the importance of their role in shaping the future of AI.
Keywords
💡AI Dilemma
💡Humane Technology
💡Social Dilemma
💡Incentives
💡Race to the Bottom
💡Complexity Gap
💡Governance
💡Misaligned AI
💡Generative AI
💡Exponential Growth
💡Safety Researchers
💡AI Ethics
Highlights
The AI dilemma: AI amplifies human capabilities exponentially but also introduces risks.
The Center for Humane Technology's focus on designing technology that strengthens social fabric.
The necessity to understand AI risks to achieve a positive future.
The meta challenge of increasing world complexity and the need for governance to evolve at the same pace.
E.O. Wilson's quote on humanity's Paleolithic brains and Godlike technology.
AI as 24th-century technology impacting 20th-century governance.
The Social Dilemma documentary's popularity and its focus on social media's impact.
Social media as humanity's first contact with a runaway AI and its consequences.
The incentive behind social media's race to the bottom for attention.
The negative societal consequences of misaligned AI in social media.
The evolution of AI from curation to generative AI and its potential risks.
The race to roll out AI and its potential to exacerbate misinformation and fraud.
The challenge of aligning AI capabilities with safety and governance.
The potential for AI to be used in harmful ways, such as creating deep fakes.
The importance of considering incentives when deploying AI to prevent negative outcomes.
The need for a governance upgrade to match the pace of technological advancement.
Proposing a governance upgrade plan that includes safety investments and liability for AI developers.
The potential of using AI to upgrade governance processes and laws.
Transcripts
[Music]
good morning everyone um it's a pleasure
and honor to be with you here today
we're going to be talking about the AI
dilemma as aim said AI gives us
kind of superpowers whatever our power
is as a
species AI amplifies it to an
exponential
degree and uh I'm here from an
organization called the center for
Humane technology where we think about
how can technology be designed in a way
that is Humane to the systems that we
depend on how do you design social media
that depends on the functioning of a
social Fabric in a way that strengthens
the social fabric
and you know just to say we're all here
you're going to hear some maybe some
more critical or negative things about
the risks of AI in this presentation but
the premise of this is we all we're all
in this room because we care about which
direction the future goes and one of the
things that we think is if we don't
understand the risks appropriately then
we won't get to that positive future so
we have to understand what we're
steering
towards and one of the meta challenges
is that the complexity of the world is
going up
right we've got more issues social media
introduced 20 new issues that every schoolteacher and parent had to deal with
that they didn't have to deal with
before AI introduces many new issues for
banks to have to deal with voice cloning
cyber attacks so as the complexity of
the world is going up the question is
our ability to respond and govern
technology has to go up at the same rate
right it's like you're going faster and
faster in a car but your steering wheel
and your brakes have to get more and
more precise as the complexity is going
up and the challenge that we have with
technology is that uh it expands the
verticality of that curve of complexity
right it increases the total complexity
that we have to deal with E.O. Wilson the
Harvard sociobiologist said that the
fundamental problem of humanity is we
have Paleolithic brains medieval
institutions and Godlike technology we
have the power to transform the
biosphere of the planet with our entire
economy how do we have the power of gods
with the wisdom love and Prudence of
gods and as AI adds to this
equation uh our friend Ajeya Cotra says
that AI is like 24th century technology
crashing down on 20th century governance
so the question we're going to be
investigating in this presentation is
how do we upgrade the governance that
matches the complexity of the technology
that we're
building so and the key to this is going
to be closing the complexity Gap right
governance that moves at the speed of
Technology now the way that we got into
this set of questions most people
know our work from the film The Social
dilemma how many people here have seen
uh the social dilemma okay quite a few
of you uh we just found out recently
that it was actually the most
popular uh documentary on Netflix of all
time which is a great accomplishment thank you
um and it was really about you might say
why are we talking about social media in
a conference that's about
AI but if you think about it social
media was kind of like first Contact
between humanity and a runaway AI what
do I mean when your 13-year-old child or
you flick your finger up like this on
TikTok or on Twitter you just activated
a supercomputer behind that sheet of
glass pointed at your kid's brain
that's calculating from the behavior of
3 billion Human Social
primates the perfect video or photo or
tweet to show that next person and that
little baby AI That's just a curation AI
was enough to cause a ton of problems so
how did first Contact go well I would
say we
lost how did we lose how did we lose we
had really good people that were
actually friends of mine in college who
built some of the social media platforms
I saw the people building it I was in
San Francisco so how did we lose
and uh Charlie Munger who is Warren
Buffett's business partner said if you
want to predict what's going to happen
if you show me the incentive and I will
show you the outcome so what was the
incentive behind social media well first
of all let's talk about how do we tend
to relate to technology well we relate
through stories here are these social
media apps what were the stories we told
ourselves about social media we said
we're going to give everybody a voice
we're going to connect with your friends
join like-minded communities we're going
to enable small and medium-sized businesses
to reach customers and these stories are
true these are totally things that
social media has done but underneath
those stories we started to see beneath
the iceberg there's some problems but
these are symptoms and they feel like
they're separate problems we have
addiction over here we have viral
misinformation over here we have mental
health issues for teenagers but beneath
those in those symptoms were incentives
the incentives that in 2013 allowed us
to predict exactly where social media
was going to go which is that social
media is competing for your
attention there's only so much attention
so it becomes the race to the bottom of
the brain stem for who's willing to go
lower to create that engagement and
let's take a look at what that actually
created in
society so information overload
addiction Doom scrolling influencer
culture the sexualization of young girls
online harassment shortening attention
spans
polarization right this is a lot of
really negative consequences from a
very simple misaligned AI called social
media that we already released into the
world and so what matters is we think
about AI multiplied by social media this
is a recent example from TikTok there's
a new beautification filter with
generative
AI oops can someone turn up the
audio of
this I grew up with the dog filter here
I'll do one more time here we go
I can't believe this is a filter the
fact that this is what filters have
evolved into is actually crazy to me I
grew up with the dog filter on Snapchat
and now this filter gave me lip fillers this is what I look like in real
life are you are you kidding me so why
are we shipping these filters to young
kids do we think this is good for
children the answer is because it's good
for engagement because beautification
apps that make me look better are going
to be used more than beautification apps that don't have those
filters um and so this race for
engagement actually didn't just get
deployed into society it kind of ensnared society into the spiderweb it
took over media and journalism media and
journalism run through the click economy
of Twitter it took over the way that
elections are run President Biden
simultaneously said he wants to ban TikTok at the same time that he just joined TikTok because he knows that to win
elections you have to be on the latest
platforms it's taking over GDP
children's development social media is
now the digital parent for an entire
generation and so have we fixed the
incentives with first contact with AI
have we fixed
them no so we have to get clear before
we deploy second contact with AI which
is not curation AI but creation AI
generative
AI what are the incentives that are
driving this next AI Revolution okay
well let's do it again what are the
stories we're telling ourselves about ai
ai is going to make us more efficient
all the things that aim just said which
are all true it's going to help us code
faster it's going to help us find
solutions to climate change it can
increase
GDP and these stories are all
true but beneath those
stories we also know that there's these
problems everyone in the room is aware
of these problems but beneath those
problems what's driving those problems
what's the incentive that will allow us
to predict the outcome of where AI is
going and that incentive is what we call
the race to roll
out the number one thing that is driving
OpenAI's or Google's
behavior is the race to actually achieve
market dominance to to train the next
big AI model and release it faster and
get users before their competitor
does and the logic is if we don't build
it or deploy it we're just going to lose
to the company or the country that will
and so what is the race to roll out
going to cause in terms of second
contact with
AI and I think you all are very aware of
many of the sort of issues here
exponential
misinformation many much more fraud and
crime that's possible neglected
languages when they race to release AI
systems to achieve market dominance
going to focus on the top 10 languages
and not focus on the bottom 200 so this
thing that's talked about in this room
we were just at the the event yesterday
inclusion how do we make sure we're
including the whole world where when
you're racing to win market dominance
you're not racing to support the bottom
200 languages uh in the
world when you race to release models
you also race to release models that can
be jailbroken the AI companies will talk
about security but all of the models
that are publicly online right now
there's clever techniques to jailbreak
them basically get access to the
unfiltered model that doesn't have the
safety
controls uh you can use it to create
deep fake child porn we were just with
the uh UK home office um a few months
ago and they said that they are now
having trouble tracking down real child
sexual abuse uh problems because
there's so much deep fake child sexual
pornography and so as we sort of get a
grip on the shadow side the risk side of
AI we we have to get clear on how these
incentives are going to drive these
kinds of problems and these capabilities
can be combined into dangerous ways many
people here already know about deep
fakes but this is an example we took a
friend of ours who's a technology
journalist uh named Laurie Segall and uh we did a demonstration
saying could we create a whole universe
of damaging tweets news articles and
media so I want to sort of show you how
can these capabilities be combined and
basically we said create a bunch of
tweets that would sow doubt about her
I'll just read the third one I've always
wondered why Laurie Segall was so soft on
Zuckerberg in those interviews so she's
a tech journalist who's interviewed Mark
Zuckerberg in the past uh until I heard
about their quote secret dinners #
Zuckerberg Affair this is all generated
by GPT-4 okay then what we did is we took
for each of these tweets to sow
suspicion about her and we said what if
you wrote an entire news article oops
entire news article and basically we
were able to say create an entire New
York Post style news article about it
this is the Huffington Post uh and
you'll see in the in the text in the
intricate tapestry of tech journalism
Laurie Segall has long stood as a beacon
of clarity guiding readers through the
Labyrinth of Silicon Valley however
recent murmurs suggest perhaps her
connection to this world is more
personal than professional so it's
written in a certain style then you can
say generate a New York Daily News uh
article and it starts with hold on to
your keyboard folks so you can write these articles in different
styles and then generate tweets um with
emojis that sort of give you a whole
sense that this is real and trending and
of course generate fake
audio oops can you play the uh audio
track
please shoot one
second they can turn on the audio this
next one should
work no okay well it's a uh example of
her voice basically saying to Mark
Zuckerberg we have to not let people
know about our about us n would be over
I just can't have that constantly
hanging over my
head so uh and you know obviously
generating fake images and then uh you
can actually the same AI that can tell
you why a meme is funny and do joke
explanations can actually generate memes
so this is a real meme generated by uh
AI uh that people know and it says
interview real people or make up stories
so you can generate a whole universe of
stuff that will then show up on Google
petitions and so you're probably
thinking when you see this example of a
way to kind of alpha cancel people we
know about AlphaGo and AlphaChess but
this is like Alpha cancel a Target
person um so you're probably thinking
that I'm here to tell you AI is going to
be used to cancel people and that's the
main thing we should be concerned about
and the answer is no this is just one
example of thousands of things that you
can do when you combine these different
capabilities
together uh and we often talk about we
want the promise of AI without the Peril
of AI we want the benefits without the
harms and the challenge is that can this
can the technology that knows how to
make cool AI art about humans be
separated from the same technology that
can create deep fake uh child porn
they're part of the same image model can
the technology that can give every kid
in Africa a one-on-one biology tutor be
separated from the AI model that can
give every Isis terrorist a biological
weapon tutor they're Inseparable they're
all part of the same
model and this example from a couple of
years ago in which an AI that was used
to discover less toxic drug compounds the researchers then just
flipped it and said I wonder if we could
just literally flip the variable and
search for more toxic drug compounds and
in 6 hours it generated 40,000 toxic
molecules including VX nerve
gas and of course AI is not moving at
just an exponential but a double
exponential Pace because nukes don't
make stronger nukes
but AI can actually be used to make
stronger AI so AI can be used for
example by Nvidia to look at the chip
design that trained Ai and say make
those chips more efficient which it then
does AI can be used to look at the code
that makes Ai and say take that code and
make it 50% more efficient and it can do
that and so it's moving at such a fast pace you might think well
at least there's lots of safety
researchers that are that are working on
this problem and there's actually
currently a 30-to-1 gap between people uh who are publishing papers on capabilities versus
safety uh and per what Stuart Russell said yesterday there's a thousand-to-one gap between the
collective resources going into
increasing AI capabilities versus those
that are increasing
safety so this is a lot and actually at
this point in the presentation I would
actually just encourage you if you want
to just take a breath
together we're all here because we care
about which future we
get everyone in this room wants the AI
for
good and we can still choose the future
that we
want but we have to actually see the
risk clearly so we know the kinds of
choices that we need to make to get to
that future because no matter how high the skyscraper of benefits that AI
assembles if it can also be used to
undermine the foundation of society upon
which that skyscraper
depends it won't matter how many
benefits there
are and to repeat if this is the problem
statement that AI is like 24th century
technology crashing down on 20th century
governance if you imagine 20th century
technology crashing down on 16th century
governance and the king is sitting there
and suddenly smartphones and social
media and Wi-Fi and radio and television
all dumped on his Society at the same
time and he assembles his advisers he
doesn't have the governance tools to
deal with those problems so the meta
issue is not to focus on what's the one
solution that's going to fix all of AI
it's if we're spending trillions of dollars on increasing AI capabilities shouldn't we be spending 5% of that like $50 billion on upgrading the governance itself you know
democracy uh was invented with 17th
Century Technologies Communications
Technologies we had law we had the
printing press uh and we used those
institutions and those systems to invent
the kind of governance that we had but
now we have new 21st century tools
you're probably thinking that sounds
weird coming from him sounds like a
techno Optimist but I think we need to
be thinking about how do we use
technology to upgrade the process of
governance itself so it moves at the
speed of technology and we could call
this you know the upgrade governance
plan what if for every $1 million that
were spent on increasing AI
capabilities AGI Labs had to spend a
corresponding $1 million on actually
going into safety and I'm sure many of
you are tracking that the super
alignment team at OpenAI actually left
recently out of I think many safety-oriented concerns so we need to get the
safety right that means the Investments
need to be right I think Stuart Russell
said yesterday that for every 1 kilogram
of weight of a nuclear power plant
there's 7 kg of paperwork to sort of
ensure that the nuclear power plant is
safe and we could call that the AI
safety plan and at CHT we're trying to
map what are the other kinds of things
that can change the incentives for AI
deployment Stuart Russell yesterday
talked about provably safe requirements
that when model developers can prove
that their AI model will not tell you
how to create a biological weapon then
they can release the model because we
lack governance and good regulation
right now that's adequate what if we
protected whistleblowers so that the
companies knew that the people who are
closest to building it when they see the
early warning signs what if they were
protected in being able to share certain
information to high level institutions
to make sure that we could get that safe
future what if developers of AI models
were liable for the kinds of Downstream
harms that occurred
uh that would move the pace of release
of AI models to a slow enough Pace that
everyone would know I'm not going to
release it I'm not going to be forced to
release it as fast as everybody else
because I know everyone has to go at the
pace of being responsible for the things
that you
create and then of course we could
actually think in very inspiring ways
about how would we use AI to upgrade
governance upgrade the green line
and we can imagine laws that actually
are aware you could use AI to optimize
uh laws to be saying how how do we look
at all the laws that are getting
outdated because the assumptions upon
which the law was written have actually
changed and AI could be used to
accelerate those kinds of processes we
could have ai systems that help uh
negotiate treaties with zero knowledge
proofs we can use 21st century
technology to help upgrade our
governance and so this is just a small
sample this is not the solution to all
the problems that I've laid out but I
hope what I've provoked for you is that
in this map are the kinds of things we need
be thinking about to get to the Future
that I know that we all care about so
thank you very much
[Music]