AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

TED
6 Nov 2023 · 10:19

Summary

TL;DR: The speaker, an AI researcher, addresses the growing concerns around the societal impacts of AI models. She highlights the urgent need to measure and mitigate AI's environmental costs, to tackle issues of data privacy and consent, and to address inherent biases that can perpetuate stereotypes and discrimination. By building transparent tools and disclosing information, she advocates for a future in which AI models are more trustworthy, sustainable, and less likely to cause harm. The overarching message: instead of obsessing over hypothetical existential risks, we should focus on addressing AI's tangible impacts on people and the planet right now.

Takeaways

  • 🌍 AI models contribute to climate change and environmental issues due to their high energy consumption and carbon emissions during training and deployment.
  • 📘 AI models use artwork, books, and other creative works for training without consent from the creators, raising copyright concerns.
  • ⚖️ AI models can encode biases and stereotypes, leading to discrimination against certain communities when deployed in real-world scenarios.
  • 🔍 Transparency and tools are needed to understand and measure the impacts of AI models, such as their environmental footprint, use of copyrighted data, and biases.
  • 🛡️ Initiatives like CodeCarbon, Have I Been Trained?, and Stable Bias Explorer aim to provide tools for measuring and mitigating AI's impacts.
  • 💻 Companies and developers should prioritize choosing more sustainable, unbiased, and ethically-trained AI models based on measurable impacts.
  • 📋 Legislation and governance mechanisms are required to regulate AI's deployment in society and protect against harmful impacts.
  • 🙋‍♀️ Users should have access to information about AI models' impacts to make informed choices about which models to trust and use.
  • 🌱 Rather than focusing solely on hypothetical future risks, immediate action is needed to address AI's current, tangible impacts on society and the environment.
  • 🤝 Collective effort from researchers, companies, policymakers, and users is required to steer AI's development in an ethical and responsible direction.

Q & A

  • What was the strangest email the AI researcher received?

    -The researcher received an email from a complete stranger claiming that her work in AI is going to end humanity.

  • What are some of the negative headlines surrounding AI that the researcher mentioned?

    -The researcher mentioned headlines about a chatbot advising someone to divorce their wife, and an AI meal planner app proposing a recipe containing chlorine gas.

  • What are some of the current tangible impacts of AI that the researcher discussed?

    -The researcher discussed the environmental impact of training AI models, the use of copyrighted art and books without consent for training data, and the potential for AI models to discriminate against certain communities.

  • What is the Bloom model, and what did the researcher's study find about its environmental impact?

    -BLOOM is the first open large language model comparable to ChatGPT, created with an emphasis on ethics, transparency, and consent. The researcher's study found that training BLOOM used as much energy as 30 homes consume in a year and emitted 25 tons of carbon dioxide.

  • What tool did the researcher help create to measure the environmental impact of AI training?

    -The researcher helped create CodeCarbon, a tool that runs in parallel to AI training code and estimates the energy consumption and carbon emissions.
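The core arithmetic behind a tracker like CodeCarbon can be sketched in a few lines: emissions are roughly the energy a job consumes multiplied by the carbon intensity of the local power grid. The function and all figures below are illustrative assumptions for exposition, not CodeCarbon's actual implementation or measurements from the talk.

```python
# Minimal sketch of the estimate a carbon tracker performs:
#   emissions (kg CO2eq) ≈ energy used (kWh) × grid carbon intensity (kg CO2eq/kWh)
# All numbers here are assumed values, not real measurements.

def estimate_emissions_kg(power_draw_watts: float,
                          runtime_hours: float,
                          grid_intensity_kg_per_kwh: float) -> float:
    """Rough CO2-equivalent estimate for a compute job."""
    energy_kwh = power_draw_watts * runtime_hours / 1000.0
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: one GPU drawing 400 W for 24 hours on a grid emitting
# 0.4 kg CO2eq per kWh (both figures assumed).
print(round(estimate_emissions_kg(400, 24, 0.4), 2))  # 3.84
```

In practice a tool such as CodeCarbon runs alongside the training code, samples actual hardware power draw, and looks up regional grid intensity rather than taking them as fixed inputs.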

  • What is the purpose of the "Have I Been Trained?" tool developed by Spawning.ai?

    -The "Have I Been Trained?" tool allows users to search massive datasets to see if their images or text have been used for training AI models without their consent.

  • What did the researcher find when searching for images of herself in the LAION-5B dataset?

    -The researcher found some images of herself from events she had spoken at, but also many images of other women named Sasha, including bikini models.

  • What tool did the researcher create to explore bias in image generation models?

    -The researcher created the Stable Bias Explorer, which allows users to explore the bias of image generation models through the lens of professions.

  • What did the researcher's tool find regarding the representation of gender and race in professions generated by AI models?

    -The tool found significant over-representation of whiteness and masculinity across 150 professions, even when compared to real-world labor statistics.

  • What was the researcher's response to the email claiming that their work would destroy humanity?

    -The researcher responded that focusing on AI's future existential risks is a distraction from its current, tangible impacts, and that the work should be focused on reducing these impacts now.
