How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED

TED
1 May 2024 · 11:17

Summary

TL;DR: The speaker addresses the widespread confusion surrounding artificial intelligence, noting that even experts lack a complete understanding of its inner workings. They emphasize the importance of understanding AI for its governance and future development. The talk explores the challenges of defining intelligence and the limitations in predicting AI's trajectory. The speaker suggests focusing on AI interpretability research and adaptability in policy-making, advocating for transparency, measurement, and incident reporting to navigate AI's impact effectively.

Takeaways

  • 🤖 There's a widespread lack of understanding of AI, even among experts, which complicates both predicting its future capabilities and governing it.
  • 🧠 The definition of intelligence is not agreed upon, leading to varied expectations and challenges in AI development and governance.
  • 🚀 AI's rapid advancement has outpaced our ability to fully comprehend the internal workings of these systems, which are often described as 'black boxes'.
  • 🔍 'AI interpretability' is an emerging research area aiming to demystify AI's complex processes and enhance understanding.
  • 🌐 The lack of consensus on AI's goals and roadmaps makes it difficult to govern and predict its trajectory.
  • 👥 Empowering non-experts to participate in AI governance is crucial, as those affected by technology should have a say in its application.
  • 🛠️ Policymakers should focus on adaptability in AI governance, acknowledging the uncertainty and fostering flexibility to respond to AI's evolution.
  • 📊 Investment in measuring AI capabilities is essential for understanding and governing AI effectively.
  • 🔒 Transparency from AI companies, including mandatory disclosure and third-party auditing, is necessary for proper oversight.
  • 📈 Incident reporting mechanisms, similar to those used to document plane crashes and cyberattacks, can provide valuable data for learning from failures and improving AI safety.

Q & A

  • Why do both non-experts and experts often express a lack of understanding of AI?

    -Both non-experts and experts express a lack of understanding of AI because there are serious limits to how much we know about how AI systems work internally. This is unusual, as normally the people building a new technology understand it inside and out.

  • How does the lack of understanding of AI affect our ability to govern it?

    -Without a deep understanding of AI, it's difficult to predict what AI will be able to do next or even what it can do now, which is one of the biggest hurdles we face in figuring out how to govern AI.

  • What is the significance of the speaker's experience working on AI policy and governance?

    -The speaker's experience working on AI policy and governance for about eight years, first in San Francisco and now in Washington, DC, provides an inside look at how governments are managing AI technology and offers insights into the industry's approach to AI.

  • Why is it challenging to define intelligence in the context of AI?

    -Defining intelligence in the context of AI is challenging because different experts have completely different intuitions about what lies at the heart of intelligence, such as problem-solving, learning and adaptation, emotions, or having a physical body.

  • What is the confusion surrounding the terms 'narrow AI' and 'general AI'?

    -The confusion arises because the traditional distinction between narrow AI, trained for one specific task, and general AI, capable of doing everything a human could do, does not fit systems like ChatGPT, which are general-purpose but not as capable as humans at every task.

  • How do deep neural networks contribute to the difficulty in understanding AI?

    -Deep neural networks, the main kind of AI being built today, are described as black boxes because looking inside reveals millions to trillions of numbers that are difficult to interpret, making it hard even for experts to understand what's going on.

  • What is the speaker's first piece of advice for governing AI that we struggle to understand?

    -The speaker's first piece of advice is not to be intimidated by the technology or the people building it. AI systems can be confusing but are not magical, and progress in 'AI interpretability' is helping to make sense of the complex numbers within AI systems.

  • Why is adaptability important in policymaking for AI?

    -Adaptability is important in AI policymaking because the technology's trajectory is uncertain: maintaining a clear view of where the technology is and where it's going, and having plans in place for different scenarios, helps navigate the twists and turns of AI progress.

  • What are some concrete steps that can be taken to improve governance of AI?

    -Concrete steps include investing in the ability to measure AI systems' capabilities, requiring AI companies to share information and allow external audits, and setting up incident reporting mechanisms to collect data on real-world AI issues.

  • How can the public contribute to the future of AI despite the uncertainty in the field?

    -The public can contribute to the future of AI by advocating for policies that provide a clear picture of how the technology is changing and then pushing for the futures they want, as they are not just data sources but users, workers, and citizens.


Related Tags
Artificial Intelligence, AI Governance, Expert Insights, Technology Ethics, Predictive Challenges, AI Uncertainty, Policymaking, Innovation, Regulation, Future Trends