How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED

TED
1 May 2024 · 11:17

Summary

TL;DR: The speaker addresses the widespread confusion surrounding artificial intelligence, noting that even experts lack a complete understanding of its inner workings. She emphasizes that understanding AI matters for its governance and future development. The talk explores the challenges of defining intelligence and the limits of predicting AI's trajectory, and suggests focusing on AI interpretability research and adaptability in policymaking, advocating for transparency, measurement, and incident reporting to navigate AI's impact effectively.

Takeaways

  • 🤖 There's a widespread lack of understanding of AI, even among experts, which complicates predicting its future capabilities and governance.
  • 🧠 The definition of intelligence is not agreed upon, leading to varied expectations and challenges in AI development and governance.
  • 🚀 AI's rapid advancement has outpaced our ability to comprehend the internal workings of modern systems, which are often described as 'black boxes'.
  • 🔍 'AI interpretability' is an emerging research area aiming to demystify AI's complex processes and enhance understanding.
  • 🌐 The lack of consensus on AI's goals and roadmaps makes it difficult to govern and predict its trajectory.
  • 👥 Empowering non-experts to participate in AI governance is crucial, as those affected by technology should have a say in its application.
  • 🛠️ Policymakers should focus on adaptability in AI governance, acknowledging the uncertainty and fostering flexibility to respond to AI's evolution.
  • 📊 Investment in measuring AI capabilities is essential for understanding and governing AI effectively.
  • 🔒 Transparency from AI companies, including mandatory disclosure and third-party auditing, is necessary for proper oversight.
  • 📈 Incident reporting mechanisms can provide valuable data, similar to how plane crashes and cyberattacks are documented, to learn and improve AI safety.

Q & A

  • Why do both non-experts and experts often express a lack of understanding of AI?

    -Both non-experts and experts express a lack of understanding of AI because there are serious limits to how much we know about how AI systems work internally. This is unusual: normally, the people building a new technology understand it inside and out.

  • How does the lack of understanding of AI affect our ability to govern it?

    -Without a deep understanding of AI, it's difficult to predict what AI will be able to do next or even what it can do now, which is one of the biggest hurdles we face in figuring out how to govern AI.

  • What is the significance of the speaker's experience working on AI policy and governance?

    -The speaker's experience working on AI policy and governance for about eight years, first in San Francisco and now in Washington, DC, provides an inside look at how governments are managing AI technology and offers insights into the industry's approach to AI.

  • Why is it challenging to define intelligence in the context of AI?

    -Defining intelligence in the context of AI is challenging because different experts have completely different intuitions about what lies at the heart of intelligence, such as problem-solving, learning and adaptation, emotions, or having a physical body.

  • What is the confusion surrounding the terms 'narrow AI' and 'general AI'?

    -The confusion arises because the traditional distinction between narrow AI, trained for one specific task, and general AI, capable of doing everything a human could do, does not accurately represent the capabilities of AI systems like ChatGPT, which are general purpose but not as capable as humans in all tasks.

  • How do deep neural networks contribute to the difficulty in understanding AI?

    -Deep neural networks, the main kind of AI being built today, are described as a black box because when we look inside, we find millions to trillions of numbers that are difficult to interpret, making it hard for experts to understand what's going on.
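
To make the "too many numbers" point concrete, here is a minimal sketch (not from the talk) of a toy neural network in Python. The entire model is nothing but arrays of numbers being multiplied and added together; scale the toy's few dozen parameters up to billions or trillions and you have the interpretability problem the speaker describes.

```python
import numpy as np

# A toy two-layer network: everything the model "knows" is just
# arrays of numbers that get multiplied and added together.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 40 parameters
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # 18 parameters

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # multiply, add, apply ReLU
    return hidden @ W2 + b2              # multiply and add again

print(forward(np.ones(4)))  # 58 numbers here; frontier models have trillions
```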

  • What is the speaker's first piece of advice for governing AI that we struggle to understand?

    -The speaker's first piece of advice is not to be intimidated by the technology or the people building it. AI systems can be confusing but are not magical, and progress in 'AI interpretability' is helping to make sense of the complex numbers within AI systems.

  • Why is adaptability important in policymaking for AI?

    -Adaptability is important in policymaking for AI because it allows for a clear view of where the technology is and where it's going, and having plans in place for different scenarios helps navigate the twists and turns of AI progress.

  • What are some concrete steps that can be taken to improve governance of AI?

    -Concrete steps include investing in the ability to measure AI systems' capabilities, requiring AI companies to share information and allow external audits, and setting up incident reporting mechanisms to collect data on real-world AI issues.

  • How can the public contribute to the future of AI despite the uncertainty in the field?

    -The public can contribute to the future of AI by advocating for policies that provide a clear picture of how the technology is changing and then pushing for the futures they want, as they are not just data sources but users, workers, and citizens.

Outlines

00:00

🤖 The Complexity and Uncertainty of AI Understanding

The speaker begins by highlighting the widespread confusion about artificial intelligence (AI), noting that even experts admit to not fully understanding it. This is unusual as typically those developing a technology have a deep understanding of its inner workings. The speaker emphasizes the importance of understanding AI, as it is a technology that is significantly reshaping our world. The lack of understanding poses challenges for predicting AI's future capabilities and its current applications. The speaker also discusses the difficulty in defining intelligence, which leads to varied expectations about AI's trajectory. The script mentions the evolving terminology around 'narrow AI' and 'general AI,' using ChatGPT as an example that doesn't fit neatly into either category. The complexity of deep neural networks, described as 'black boxes,' further complicates the understanding of AI, as they involve vast numbers that are challenging to interpret.

05:02

🔍 Strategies for Navigating AI's Uncertainties

The speaker offers two key ideas for addressing the challenges of understanding and governing AI. The first is a call to not be intimidated by the technology or its creators. While AI can be complex, it is not beyond comprehension, and progress is being made in the field of 'AI interpretability' to demystify its operations. The speaker encourages a broader participation in AI governance, arguing that those affected by technology should have a say in its application. The second idea is to focus on adaptability rather than certainty in policy-making. The speaker suggests that instead of rigid regulations, there should be flexible policies that allow for clear visibility and responsive measures as AI evolves. This includes investing in measurement capabilities, requiring transparency from AI companies, and establishing incident reporting mechanisms to learn from real-world applications. The speaker notes that some of these ideas are already being implemented in various locations and emphasizes the importance of having a clear view of AI's progress and the ability to respond effectively.

10:04

🌟 The Potential and Responsibility in AI's Future

In the final paragraph, the speaker discusses the vast potential of AI, which extends beyond current applications like language translation and protein structure prediction. The speaker envisions future AI systems that could revolutionize energy production, agriculture, and many other sectors. The speaker emphasizes that everyone has a stake in AI's development, as users, workers, and citizens, and that we should not wait for complete clarity or consensus to shape AI's future. Instead, the speaker advocates for the implementation of policies that provide a clear understanding of AI's evolution and enable us to actively participate in steering its direction. The speaker concludes by acknowledging the uncertainty and disagreement in the AI field but also the reality that AI is already impacting our lives, and it is imperative to engage in shaping its future.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is the central theme, with discussions on its current capabilities and the challenges of understanding its inner workings. The speaker highlights the paradox that even experts in AI often claim not to fully understand it, which underscores the complexity and the current state of AI development.

💡Expertise

Expertise, in the context of the video, refers to the specialized knowledge and skill that experts in the field of AI possess. It is mentioned that while experts understand how to build and run AI systems, there are limits to their understanding of the internal mechanisms, which is crucial for predicting AI's capabilities and future developments.

💡Governance

Governance in the video pertains to the frameworks and policies that guide the development and use of AI. The speaker emphasizes the difficulty in governing AI due to the lack of deep understanding and the rapid pace of its evolution. It suggests that creating effective governance is a pressing issue as AI becomes more integrated into society.

💡Interpretability

AI interpretability is the ability to explain the workings of AI systems in understandable terms. The video mentions this as a field of research that has made progress in demystifying the 'black box' nature of AI, which is crucial for enhancing trust and control over AI systems. It's highlighted as a path to better understanding AI's decision-making processes.
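
As a hedged illustration of what "dialing a feature up or down" can mean mechanically: interpretability research (for example, work on activation steering) finds a direction in a model's hidden activations associated with a trait, then adds a scaled copy of that direction during the forward pass. The sketch below uses random placeholder weights and a made-up "angrier" direction; it shows the mechanism, not the actual research code.

```python
import numpy as np

# Toy sketch of activation steering. The weights and the `angrier`
# direction are random placeholders; real interpretability work finds
# such directions empirically inside trained models.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
angrier = rng.normal(size=8)  # hypothetical direction tied to a trait

def forward(x, strength=0.0):
    hidden = np.maximum(0, x @ W1 + b1)
    hidden = hidden + strength * angrier  # nudge the internal state
    return hidden @ W2 + b2

print(forward(np.ones(4), strength=0.0))  # baseline output
print(forward(np.ones(4), strength=2.0))  # same input, trait dialed up
```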

💡Black Box

The term 'black box' in the video is used metaphorically to describe AI systems whose internal processes are not easily understood or observable. It refers to the complexity and opacity of deep neural networks, which are composed of vast numbers that are processed in ways that are currently difficult to interpret.

💡Narrow AI

Narrow AI, also known as weak AI, is AI designed to perform a narrow task without human-like cognition. The video discusses how this concept is being challenged by systems like ChatGPT, which blur the line between narrow and general AI by performing a wide variety of tasks rather than a single function.

💡General AI

General AI, also known as strong AI or artificial general intelligence (AGI), refers to AI systems with the ability to understand, learn, and apply knowledge across a broad range of tasks at a human level. The video points out the confusion around this concept, as current AI systems do not fully meet the criteria of generality.

💡Adaptability

Adaptability in the video is discussed as a key approach to policy-making for AI. It suggests that instead of seeking rigid regulations or laissez-faire attitudes, policies should be flexible and responsive to the changing landscape of AI development, allowing for adjustments as new information and capabilities emerge.

💡Measurement

Measurement in the context of the video refers to the need for robust methods to assess the capabilities and impacts of AI systems. It is highlighted as a foundational element for policy-making, as it provides the data necessary to understand AI's potential risks and benefits, and to guide regulatory responses.
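
The talk does not prescribe how measurement should work, but a capability evaluation can be pictured as running a model over a battery of graded tasks and reporting a pass rate per capability area. The sketch below is purely illustrative; the model, tasks, and graders are placeholder stand-ins, not a real benchmark.

```python
from collections import defaultdict

def measure_capabilities(model, tasks):
    """tasks: iterable of (capability, prompt, grader) triples, where
    grader(answer) -> bool decides whether a response counts as a pass."""
    results = defaultdict(list)
    for capability, prompt, grader in tasks:
        results[capability].append(grader(model(prompt)))
    return {cap: sum(r) / len(r) for cap, r in results.items()}

# Example with a stand-in "model" that only knows one answer:
tasks = [
    ("arithmetic", "2+2", lambda a: a == "4"),
    ("arithmetic", "3*3", lambda a: a == "9"),
    ("translation", "hola", lambda a: a == "hello"),
]
print(measure_capabilities(lambda prompt: "4" if prompt == "2+2" else "?", tasks))
# -> {'arithmetic': 0.5, 'translation': 0.0}
```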

💡Incident Reporting

Incident reporting in the video is proposed as a mechanism to collect data when AI systems cause harm or malfunction. It is likened to how data is collected after plane crashes or cyber attacks, suggesting that such reporting would help in understanding and preventing future AI-related incidents.
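
The talk does not specify what an incident report would contain. By analogy with aviation and cyber incident databases, a minimal hypothetical schema might look like the following sketch, in which every field name is an assumption rather than an existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncidentReport:
    """Hypothetical fields an AI incident report might capture,
    by analogy with aviation and cyber incident databases.
    Every field name here is an assumption, not a real standard."""
    system_name: str           # which model or product was involved
    deployer: str              # organization operating the system
    occurred_at: datetime      # when the failure happened
    description: str           # what went wrong, in plain language
    harm_category: str         # e.g. "misinformation", "bias", "safety"
    severity: int              # 1 (minor) through 5 (critical)
    mitigations: list[str] = field(default_factory=list)  # fixes applied
```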

💡Citizen Engagement

Citizen engagement is emphasized in the video as a critical component in shaping the future of AI. It suggests that the public, as users and citizens, should have a voice in AI policy and governance, drawing parallels to historical examples of workers and advocates influencing technological developments for the betterment of society.

Highlights

Experts and non-experts alike often express a lack of understanding of AI.

AI is reshaping the world, yet we have limited understanding of its inner workings.

The difficulty in understanding AI hinders our ability to predict its future capabilities.

Governing AI is challenging due to the lack of consensus on what constitutes intelligence.

AI's definitional ambiguity leads to varied expectations about its development.

The distinction between narrow and general AI is becoming blurred with advancements like ChatGPT.

Deep neural networks are often described as 'black boxes' due to the complexity of their inner workings.

AI interpretability research is making progress in demystifying the 'black box' of neural networks.

Technologists should not be the sole deciders of AI's direction; affected parties should have a say.

Policymaking for AI should focus on adaptability rather than striving for certainty.

Investing in AI measurement capabilities is crucial for understanding its potential impacts.

AI companies should be required to share information about their systems and allow external audits.

Incident reporting mechanisms for AI can help collect data and improve future outcomes.

Policies like measurement, disclosure, and incident reporting can provide clarity on AI's trajectory.

AI's potential is vast, extending beyond current applications to transformative technologies.

The public has a significant role in shaping AI's future through policies and advocacy.

Transcripts

play00:03

When I talk to people about artificial intelligence,

play00:07

something I hear a lot from non-experts is “I don’t understand AI.”

play00:13

But when I talk to experts, a funny thing happens.

play00:16

They say, “I don’t understand AI, and neither does anyone else.”

play00:21

This is a pretty strange state of affairs.

play00:24

Normally, the people building a new technology

play00:28

understand how it works inside and out.

play00:31

But for AI, a technology that's radically reshaping the world around us,

play00:36

that's not so.

play00:37

Experts do know plenty about how to build and run AI systems, of course.

play00:42

But when it comes to how they work on the inside,

play00:45

there are serious limits to how much we know.

play00:48

And this matters because without deeply understanding AI,

play00:52

it's really difficult for us to know what it will be able to do next,

play00:56

or even what it can do now.

play00:59

And the fact that we have such a hard time understanding

play01:02

what's going on with the technology and predicting where it will go next,

play01:06

is one of the biggest hurdles we face in figuring out how to govern AI.

play01:12

But AI is already all around us,

play01:15

so we can't just sit around and wait for things to become clearer.

play01:19

We have to forge some kind of path forward anyway.

play01:24

I've been working on these AI policy and governance issues

play01:27

for about eight years,

play01:28

first in San Francisco, now in Washington, DC.

play01:32

Along the way, I've gotten an inside look

play01:35

at how governments are working to manage this technology.

play01:39

And inside the industry, I've seen a thing or two as well.

play01:45

So I'm going to share a couple of ideas

play01:49

for what our path to governing AI could look like.

play01:53

But first, let's talk about what actually makes AI so hard to understand

play01:57

and predict.

play01:59

One huge challenge in building artificial "intelligence"

play02:03

is that no one can agree on what it actually means

play02:06

to be intelligent.

play02:09

This is a strange place to be in when building a new tech.

play02:12

When the Wright brothers started experimenting with planes,

play02:15

they didn't know how to build one,

play02:17

but everyone knew what it meant to fly.

play02:21

With AI on the other hand,

play02:23

different experts have completely different intuitions

play02:26

about what lies at the heart of intelligence.

play02:29

Is it problem solving?

play02:31

Is it learning and adaptation?

play02:34

Are emotions,

play02:36

or having a physical body somehow involved?

play02:39

We genuinely don't know.

play02:41

But different answers lead to radically different expectations

play02:45

about where the technology is going and how fast it'll get there.

play02:50

An example of how we're confused is how we used to talk

play02:53

about narrow versus general AI.

play02:55

For a long time, we talked in terms of two buckets.

play02:59

A lot of people thought we should just be dividing between narrow AI,

play03:03

trained for one specific task,

play03:05

like recommending the next YouTube video,

play03:08

versus artificial general intelligence, or AGI,

play03:12

that could do everything a human could do.

play03:15

We thought of this distinction, narrow versus general,

play03:18

as a core divide between what we could build in practice

play03:22

and what would actually be intelligent.

play03:25

But then a year or two ago, along came ChatGPT.

play03:31

If you think about it,

play03:33

you know, is it narrow AI, trained for one specific task?

play03:36

Or is it AGI and can do everything a human can do?

play03:41

Clearly the answer is neither.

play03:42

It's certainly general purpose.

play03:44

It can code, write poetry,

play03:47

analyze business problems, help you fix your car.

play03:51

But it's a far cry from being able to do everything

play03:54

as well as you or I could do it.

play03:56

So it turns out this idea of generality

play03:58

doesn't actually seem to be the right dividing line

play04:01

between intelligent and not.

play04:04

And this kind of thing

play04:05

is a huge challenge for the whole field of AI right now.

play04:08

We don't have any agreement on what we're trying to build

play04:11

or on what the road map looks like from here.

play04:13

We don't even clearly understand the AI systems that we have today.

play04:18

Why is that?

play04:19

Researchers sometimes describe deep neural networks,

play04:22

the main kind of AI being built today,

play04:24

as a black box.

play04:26

But what they mean by that is not that it's inherently mysterious

play04:29

and we have no way of looking inside the box.

play04:33

The problem is that when we do look inside,

play04:35

what we find are millions,

play04:38

billions or even trillions of numbers

play04:41

that get added and multiplied together in a particular way.

play04:45

What makes it hard for experts to know what's going on

play04:47

is basically just, there are too many numbers,

play04:50

and we don't yet have good ways of teasing apart what they're all doing.

play04:54

There's a little bit more to it than that, but not a lot.

play04:58

So how do we govern this technology

play05:01

that we struggle to understand and predict?

play05:04

I'm going to share two ideas.

play05:06

One for all of us and one for policymakers.

play05:10

First, don't be intimidated.

play05:14

Either by the technology itself

play05:16

or by the people and companies building it.

play05:20

On the technology,

play05:21

AI can be confusing, but it's not magical.

play05:24

There are some parts of AI systems we do already understand well,

play05:27

and even the parts we don't understand won't be opaque forever.

play05:31

An area of research known as “AI interpretability”

play05:34

has made quite a lot of progress in the last few years

play05:38

in making sense of what all those billions of numbers are doing.

play05:42

One team of researchers, for example,

play05:44

found a way to identify different parts of a neural network

play05:48

that they could dial up or dial down

play05:50

to make the AI's answers happier or angrier,

play05:54

more honest,

play05:55

more Machiavellian, and so on.

play05:58

If we can push forward this kind of research further,

play06:01

then five or 10 years from now,

play06:03

we might have a much clearer understanding of what's going on

play06:06

inside the so-called black box.

play06:10

And when it comes to those building the technology,

play06:13

technologists sometimes act as though

play06:14

if you're not elbows deep in the technical details,

play06:18

then you're not entitled to an opinion on what we should do with it.

play06:22

Expertise has its place, of course,

play06:24

but history shows us how important it is

play06:26

that the people affected by a new technology

play06:29

get to play a role in shaping how we use it.

play06:32

Like the factory workers in the 20th century who fought for factory safety,

play06:37

or the disability advocates

play06:39

who made sure the world wide web was accessible.

play06:42

You don't have to be a scientist or engineer to have a voice.

play06:48

(Applause)

play06:53

Second, we need to focus on adaptability, not certainty.

play06:59

A lot of conversations about how to make policy for AI

play07:02

get bogged down in fights between, on the one side,

play07:05

people saying, "We have to regulate AI really hard right now

play07:08

because it's so risky."

play07:10

And on the other side, people saying,

play07:12

“But regulation will kill innovation, and those risks are made up anyway.”

play07:16

But the way I see it,

play07:17

it’s not just a choice between slamming on the brakes

play07:20

or hitting the gas.

play07:22

If you're driving down a road with unexpected twists and turns,

play07:26

then two things that will help you a lot

play07:28

are having a clear view out the windshield

play07:31

and an excellent steering system.

play07:34

In AI, this means having a clear picture of where the technology is

play07:39

and where it's going,

play07:40

and having plans in place for what to do in different scenarios.

play07:44

Concretely, this means things like investing in our ability to measure

play07:49

what AI systems can do.

play07:51

This sounds nerdy, but it really matters.

play07:54

Right now, if we want to figure out

play07:56

whether an AI can do something concerning,

play07:58

like hack critical infrastructure

play08:01

or persuade someone to change their political beliefs,

play08:05

our methods of measuring that are rudimentary.

play08:08

We need better.

play08:10

We should also be requiring AI companies,

play08:12

especially the companies building the most advanced AI systems,

play08:16

to share information about what they're building,

play08:19

what their systems can do

play08:21

and how they're managing risks.

play08:23

And they should have to let in external AI auditors to scrutinize their work

play08:29

so that the companies aren't just grading their own homework.

play08:33

(Applause)

play08:38

A final example of what this can look like

play08:40

is setting up incident reporting mechanisms,

play08:44

so that when things do go wrong in the real world,

play08:46

we have a way to collect data on what happened

play08:49

and how we can fix it next time.

play08:51

Just like the data we collect on plane crashes and cyber attacks.

play08:57

None of these ideas are mine,

play08:58

and some of them are already starting to be implemented in places like Brussels,

play09:03

London, even Washington.

play09:06

But the reason I'm highlighting these ideas,

play09:08

measurement, disclosure, incident reporting,

play09:12

is that they help us navigate progress in AI

play09:15

by giving us a clearer view out the windshield.

play09:18

If AI is progressing fast in dangerous directions,

play09:22

these policies will help us see that.

play09:25

And if everything is going smoothly, they'll show us that too,

play09:28

and we can respond accordingly.

play09:33

What I want to leave you with

play09:35

is that it's both true that there's a ton of uncertainty

play09:39

and disagreement in the field of AI.

play09:42

And that companies are already building and deploying AI

play09:46

all over the place anyway in ways that affect all of us.

play09:52

Left to their own devices,

play09:53

it looks like AI companies might go in a similar direction

play09:56

to social media companies,

play09:58

spending most of their resources on building web apps

play10:01

and competing for users' attention.

play10:04

And by default, it looks like the enormous power of more advanced AI systems

play10:08

might stay concentrated in the hands of a small number of companies,

play10:12

or even a small number of individuals.

play10:15

But AI's potential goes so far beyond that.

play10:18

AI already lets us leap over language barriers

play10:21

and predict protein structures.

play10:23

More advanced systems could unlock clean, limitless fusion energy

play10:28

or revolutionize how we grow food

play10:30

or 1,000 other things.

play10:32

And we each have a voice in what happens.

play10:35

We're not just data sources,

play10:37

we are users,

play10:39

we're workers,

play10:41

we're citizens.

play10:43

So as tempting as it might be,

play10:46

we can't wait for clarity or expert consensus

play10:51

to figure out what we want to happen with AI.

play10:54

AI is already happening to us.

play10:57

What we can do is put policies in place

play11:00

to give us as clear a picture as we can get

play11:03

of how the technology is changing,

play11:06

and then we can get in the arena and push for futures we actually want.

play11:11

Thank you.

play11:12

(Applause)


Related Tags

Artificial Intelligence, AI Governance, Expert Insights, Technology Ethics, Predictive Challenges, AI Uncertainty, Policymaking, Innovation, Regulation, Future Trends