How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED
Summary
TLDR: The speaker addresses the widespread confusion surrounding artificial intelligence, noting that even experts lack a complete understanding of its inner workings. They emphasize the importance of understanding AI for its governance and future development. The talk explores the challenges of defining intelligence and the limitations in predicting AI's trajectory. The speaker suggests focusing on AI interpretability research and adaptability in policy-making, advocating for transparency, measurement, and incident reporting to navigate AI's impact effectively.
Takeaways
- 🤖 There's a widespread lack of understanding of AI, even among experts, which complicates predicting its future capabilities and governance.
- 🧠 The definition of intelligence is not agreed upon, leading to varied expectations and challenges in AI development and governance.
- 🚀 AI's rapid advancement has outpaced our ability to fully comprehend its internal workings, often referred to as 'black boxes'.
- 🔍 'AI interpretability' is an emerging research area aiming to demystify AI's complex processes and enhance understanding.
- 🌐 The lack of consensus on AI's goals and roadmaps makes it difficult to govern and predict its trajectory.
- 👥 Empowering non-experts to participate in AI governance is crucial, as those affected by technology should have a say in its application.
- 🛠️ Policymakers should focus on adaptability in AI governance, acknowledging the uncertainty and fostering flexibility to respond to AI's evolution.
- 📊 Investment in measuring AI capabilities is essential for understanding and governing AI effectively.
- 🔒 Transparency from AI companies, including mandatory disclosure and third-party auditing, is necessary for proper oversight.
- 📈 Incident reporting mechanisms can provide valuable data, similar to how plane crashes and cyberattacks are documented, to learn and improve AI safety.
Q & A
Why do both non-experts and experts often express a lack of understanding of AI?
-Both non-experts and experts express a lack of understanding of AI because there are serious limits to how much we know about how AI systems work internally. This is unusual, as normally the people building a new technology understand it inside and out.
How does the lack of understanding of AI affect our ability to govern it?
-Without a deep understanding of AI, it's difficult to predict what AI will be able to do next or even what it can do now, which is one of the biggest hurdles we face in figuring out how to govern AI.
What is the significance of the speaker's experience working on AI policy and governance?
-The speaker's experience working on AI policy and governance for about eight years, first in San Francisco and now in Washington, DC, provides an inside look at how governments are managing AI technology and offers insights into the industry's approach to AI.
Why is it challenging to define intelligence in the context of AI?
-Defining intelligence in the context of AI is challenging because different experts have completely different intuitions about what lies at the heart of intelligence, such as problem-solving, learning and adaptation, emotions, or having a physical body.
What is the confusion surrounding the terms 'narrow AI' and 'general AI'?
-The confusion arises because the traditional distinction between narrow AI, trained for one specific task, and general AI, capable of doing everything a human could do, does not accurately represent the capabilities of AI systems like ChatGPT, which are general purpose but not as capable as humans in all tasks.
How do deep neural networks contribute to the difficulty in understanding AI?
-Deep neural networks, the main kind of AI being built today, are described as a black box because when we look inside, we find millions to trillions of numbers that are difficult to interpret, making it hard for experts to understand what's going on.
What is the speaker's first piece of advice for governing AI that we struggle to understand?
-The speaker's first piece of advice is not to be intimidated by the technology or the people building it. AI systems can be confusing but are not magical, and progress in 'AI interpretability' is helping to make sense of the complex numbers within AI systems.
Why is adaptability important in policymaking for AI?
-Adaptability is important in policymaking for AI because it allows for a clear view of where the technology is and where it's going, and having plans in place for different scenarios helps navigate the twists and turns of AI progress.
What are some concrete steps that can be taken to improve governance of AI?
-Concrete steps include investing in the ability to measure AI systems' capabilities, requiring AI companies to share information and allow external audits, and setting up incident reporting mechanisms to collect data on real-world AI issues.
How can the public contribute to the future of AI despite the uncertainty in the field?
-The public can contribute to the future of AI by advocating for policies that provide a clear picture of how the technology is changing and then pushing for the futures they want, as they are not just data sources but users, workers, and citizens.
Outlines
🤖 The Complexity and Uncertainty of AI Understanding
The speaker begins by highlighting the widespread confusion about artificial intelligence (AI), noting that even experts admit to not fully understanding it. This is unusual, as typically those developing a technology have a deep understanding of its inner workings. The speaker emphasizes the importance of understanding AI, as it is a technology that is significantly reshaping our world. The lack of understanding poses challenges for predicting AI's future capabilities and its current applications. The speaker also discusses the difficulty in defining intelligence, which leads to varied expectations about AI's trajectory. The talk mentions the evolving terminology around 'narrow AI' and 'general AI,' using ChatGPT as an example that doesn't fit neatly into either category. The complexity of deep neural networks, described as 'black boxes,' further complicates the understanding of AI, as they involve vast numbers that are challenging to interpret.
🔍 Strategies for Navigating AI's Uncertainties
The speaker offers two key ideas for addressing the challenges of understanding and governing AI. The first is a call to not be intimidated by the technology or its creators. While AI can be complex, it is not beyond comprehension, and progress is being made in the field of 'AI interpretability' to demystify its operations. The speaker encourages broader participation in AI governance, arguing that those affected by technology should have a say in its application. The second idea is to focus on adaptability rather than certainty in policy-making. The speaker suggests that instead of rigid regulations, there should be flexible policies that allow for clear visibility and responsive measures as AI evolves. This includes investing in measurement capabilities, requiring transparency from AI companies, and establishing incident reporting mechanisms to learn from real-world applications. The speaker notes that some of these ideas are already being implemented in various locations and emphasizes the importance of having a clear view of AI's progress and the ability to respond effectively.
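The interpretability result the talk points to, researchers finding parts of a neural network they can dial up or down to make answers happier or angrier, can be sketched as steering a model's hidden activations along a learned direction. This is a toy NumPy illustration only: the vector sizes, the `anger_direction` vector, and the `steer` helper are assumptions for the sketch, not any lab's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real quantities: one layer's activations for a prompt,
# and a direction that interpretability work found to track a trait.
hidden = rng.normal(size=16)
anger_direction = rng.normal(size=16)
anger_direction /= np.linalg.norm(anger_direction)  # unit length

def steer(activations, direction, strength):
    """Dial a trait up (positive strength) or down (negative strength)
    by shifting the activations along the trait's direction."""
    return activations + strength * direction

angrier = steer(hidden, anger_direction, +3.0)
calmer = steer(hidden, anger_direction, -3.0)

# The activations' projection onto the direction moves by exactly the
# chosen strength, which is the "dial" the talk describes.
proj = lambda a: float(a @ anger_direction)
print(round(proj(angrier) - proj(hidden), 6))  # 3.0
print(round(proj(calmer) - proj(hidden), 6))   # -3.0
```

In real systems the direction is found by comparing activations across many contrasting prompts and the shift is applied inside the running model, but the arithmetic of "dialing" a trait is this simple.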
🌟 The Potential and Responsibility in AI's Future
In the final paragraph, the speaker discusses the vast potential of AI, which extends beyond current applications like language translation and protein structure prediction. The speaker envisions future AI systems that could revolutionize energy production, agriculture, and many other sectors. The speaker emphasizes that everyone has a stake in AI's development, as users, workers, and citizens, and that we should not wait for complete clarity or consensus to shape AI's future. Instead, the speaker advocates for the implementation of policies that provide a clear understanding of AI's evolution and enable us to actively participate in steering its direction. The speaker concludes by acknowledging the uncertainty and disagreement in the AI field but also the reality that AI is already impacting our lives, and it is imperative to engage in shaping its future.
Keywords
💡Artificial Intelligence (AI)
💡Expertise
💡Governance
💡Interpretability
💡Black Box
💡Narrow AI
💡General AI
💡Adaptability
💡Measurement
💡Incident Reporting
💡Citizen Engagement
Highlights
Experts and non-experts alike often express a lack of understanding of AI.
AI is reshaping the world, yet we have limited understanding of its inner workings.
The difficulty in understanding AI hinders our ability to predict its future capabilities.
Governing AI is challenging due to the lack of consensus on what constitutes intelligence.
AI's definitional ambiguity leads to varied expectations about its development.
The distinction between narrow and general AI is becoming blurred with advancements like ChatGPT.
Deep neural networks are often described as 'black boxes' due to the complexity of their inner workings.
AI interpretability research is making progress in demystifying the 'black box' of neural networks.
Technologists should not be the sole deciders of AI's direction; affected parties should have a say.
Policymaking for AI should focus on adaptability rather than striving for certainty.
Investing in AI measurement capabilities is crucial for understanding its potential impacts.
AI companies should be required to share information about their systems and allow external audits.
Incident reporting mechanisms for AI can help collect data and improve future outcomes.
Policies like measurement, disclosure, and incident reporting can provide clarity on AI's trajectory.
AI's potential is vast, extending beyond current applications to transformative technologies.
The public has a significant role in shaping AI's future through policies and advocacy.
Transcripts
When I talk to people about artificial intelligence,
something I hear a lot from non-experts is “I don’t understand AI.”
But when I talk to experts, a funny thing happens.
They say, “I don’t understand AI, and neither does anyone else.”
This is a pretty strange state of affairs.
Normally, the people building a new technology
understand how it works inside and out.
But for AI, a technology that's radically reshaping the world around us,
that's not so.
Experts do know plenty about how to build and run AI systems, of course.
But when it comes to how they work on the inside,
there are serious limits to how much we know.
And this matters because without deeply understanding AI,
it's really difficult for us to know what it will be able to do next,
or even what it can do now.
And the fact that we have such a hard time understanding
what's going on with the technology and predicting where it will go next,
is one of the biggest hurdles we face in figuring out how to govern AI.
But AI is already all around us,
so we can't just sit around and wait for things to become clearer.
We have to forge some kind of path forward anyway.
I've been working on these AI policy and governance issues
for about eight years,
first in San Francisco, now in Washington, DC.
Along the way, I've gotten an inside look
at how governments are working to manage this technology.
And inside the industry, I've seen a thing or two as well.
So I'm going to share a couple of ideas
for what our path to governing AI could look like.
But first, let's talk about what actually makes AI so hard to understand
and predict.
One huge challenge in building artificial "intelligence"
is that no one can agree on what it actually means
to be intelligent.
This is a strange place to be in when building a new tech.
When the Wright brothers started experimenting with planes,
they didn't know how to build one,
but everyone knew what it meant to fly.
With AI on the other hand,
different experts have completely different intuitions
about what lies at the heart of intelligence.
Is it problem solving?
Is it learning and adaptation?
Are emotions,
or having a physical body, somehow involved?
We genuinely don't know.
But different answers lead to radically different expectations
about where the technology is going and how fast it'll get there.
An example of how we're confused is how we used to talk
about narrow versus general AI.
For a long time, we talked in terms of two buckets.
A lot of people thought we should just be dividing between narrow AI,
trained for one specific task,
like recommending the next YouTube video,
versus artificial general intelligence, or AGI,
that could do everything a human could do.
We thought of this distinction, narrow versus general,
as a core divide between what we could build in practice
and what would actually be intelligent.
But then a year or two ago, along came ChatGPT.
If you think about it,
you know, is it narrow AI, trained for one specific task?
Or is it AGI and can do everything a human can do?
Clearly the answer is neither.
It's certainly general purpose.
It can code, write poetry,
analyze business problems, help you fix your car.
But it's a far cry from being able to do everything
as well as you or I could do it.
So it turns out this idea of generality
doesn't actually seem to be the right dividing line
between intelligent and not.
And this kind of thing
is a huge challenge for the whole field of AI right now.
We don't have any agreement on what we're trying to build
or on what the road map looks like from here.
We don't even clearly understand the AI systems that we have today.
Why is that?
Researchers sometimes describe deep neural networks,
the main kind of AI being built today,
as a black box.
But what they mean by that is not that it's inherently mysterious
and we have no way of looking inside the box.
The problem is that when we do look inside,
what we find are millions,
billions or even trillions of numbers
that get added and multiplied together in a particular way.
What makes it hard for experts to know what's going on
is basically just, there are too many numbers,
and we don't yet have good ways of teasing apart what they're all doing.
There's a little bit more to it than that, but not a lot.
So how do we govern this technology
that we struggle to understand and predict?
I'm going to share two ideas.
One for all of us and one for policymakers.
First, don't be intimidated.
Either by the technology itself
or by the people and companies building it.
On the technology,
AI can be confusing, but it's not magical.
There are some parts of AI systems we do already understand well,
and even the parts we don't understand won't be opaque forever.
An area of research known as “AI interpretability”
has made quite a lot of progress in the last few years
in making sense of what all those billions of numbers are doing.
One team of researchers, for example,
found a way to identify different parts of a neural network
that they could dial up or dial down
to make the AI's answers happier or angrier,
more honest,
more Machiavellian, and so on.
If we can push forward this kind of research further,
then five or 10 years from now,
we might have a much clearer understanding of what's going on
inside the so-called black box.
And when it comes to those building the technology,
technologists sometimes act as though
if you're not elbows deep in the technical details,
then you're not entitled to an opinion on what we should do with it.
Expertise has its place, of course,
but history shows us how important it is
that the people affected by a new technology
get to play a role in shaping how we use it.
Like the factory workers in the 20th century who fought for factory safety,
or the disability advocates
who made sure the world wide web was accessible.
You don't have to be a scientist or engineer to have a voice.
(Applause)
Second, we need to focus on adaptability, not certainty.
A lot of conversations about how to make policy for AI
get bogged down in fights between, on the one side,
people saying, "We have to regulate AI really hard right now
because it's so risky."
And on the other side, people saying,
“But regulation will kill innovation, and those risks are made up anyway.”
But the way I see it,
it’s not just a choice between slamming on the brakes
or hitting the gas.
If you're driving down a road with unexpected twists and turns,
then two things that will help you a lot
are having a clear view out the windshield
and an excellent steering system.
In AI, this means having a clear picture of where the technology is
and where it's going,
and having plans in place for what to do in different scenarios.
Concretely, this means things like investing in our ability to measure
what AI systems can do.
This sounds nerdy, but it really matters.
Right now, if we want to figure out
whether an AI can do something concerning,
like hack critical infrastructure
or persuade someone to change their political beliefs,
our methods of measuring that are rudimentary.
We need better.
We should also be requiring AI companies,
especially the companies building the most advanced AI systems,
to share information about what they're building,
what their systems can do
and how they're managing risks.
And they should have to let in external AI auditors to scrutinize their work
so that the companies aren't just grading their own homework.
(Applause)
A final example of what this can look like
is setting up incident reporting mechanisms,
so that when things do go wrong in the real world,
we have a way to collect data on what happened
and how we can fix it next time.
Just like the data we collect on plane crashes and cyber attacks.
None of these ideas are mine,
and some of them are already starting to be implemented in places like Brussels,
London, even Washington.
But the reason I'm highlighting these ideas,
measurement, disclosure, incident reporting,
is that they help us navigate progress in AI
by giving us a clearer view out the windshield.
If AI is progressing fast in dangerous directions,
these policies will help us see that.
And if everything is going smoothly, they'll show us that too,
and we can respond accordingly.
What I want to leave you with
is that it's both true that there's a ton of uncertainty
and disagreement in the field of AI.
And that companies are already building and deploying AI
all over the place anyway in ways that affect all of us.
Left to their own devices,
it looks like AI companies might go in a similar direction
to social media companies,
spending most of their resources on building web apps
and competing for users' attention.
And by default, it looks like the enormous power of more advanced AI systems
might stay concentrated in the hands of a small number of companies,
or even a small number of individuals.
But AI's potential goes so far beyond that.
AI already lets us leap over language barriers
and predict protein structures.
More advanced systems could unlock clean, limitless fusion energy
or revolutionize how we grow food
or 1,000 other things.
And we each have a voice in what happens.
We're not just data sources,
we are users,
we're workers,
we're citizens.
So as tempting as it might be,
we can't wait for clarity or expert consensus
to figure out what we want to happen with AI.
AI is already happening to us.
What we can do is put policies in place
to give us as clear a picture as we can get
of how the technology is changing,
and then we can get in the arena and push for futures we actually want.
Thank you.
(Applause)