Tech CEO Shows Shocking Deepfake Of Kari Lake At Hearing On AI Impact On Elections

Forbes Breaking News
18 Apr 2024 · 08:40

TL;DR: Rijul Gupta, a tech entrepreneur and founder of Deep Media, addresses the impact of deepfakes on society and elections during a hearing. He explains that deepfakes are synthetically manipulated, AI-generated content that can mislead or harm, and emphasizes their rising quality and falling cost. Gupta outlines the importance of understanding the technology behind deepfakes, including the Transformer architecture, Generative Adversarial Networks (GANs), and diffusion models. He warns of the potential for political manipulation and societal disruption once real content can no longer be distinguished from fake. Gupta advocates for a collaborative approach involving government, AI companies, platforms, journalists, and deepfake detection companies to combat the issue. He highlights Deep Media's work assisting media outlets and its participation in initiatives aimed at detecting and labeling real and fake content. Gupta concludes by demonstrating how AI can detect deepfakes, showcasing a high-quality deepfake video of Kari Lake and emphasizing the need for ongoing vigilance and technological advancement to stay ahead in the battle against deepfakes.

Takeaways

  • 💡 The speaker, Rijul Gupta, is a tech entrepreneur who has focused on the deepfake problem since founding Deep Media in 2017.
  • 🔍 A deepfake is defined as an AI-manipulated image, audio clip, or video with the potential to harm or mislead; the definition does not cover text.
  • 🧠 The human mind is particularly susceptible to being influenced by image, audio, and video, making society vulnerable to deepfakes.
  • 🤖 Three fundamental technologies underpinning generative AI are the Transformer, Generative Adversarial Network (GAN), and diffusion models.
  • 💻 Deepfakes are becoming increasingly realistic, cheap to produce (as low as 1 cent per minute), and could constitute up to 90% of online content by 2030.
  • ⚖️ Deepfakes have already impacted elections, with manipulated videos of political figures used for political assassination or to sway public opinion.
  • 🌐 The larger threat of deepfakes lies in their potential to erode trust in genuine content, leading to plausible deniability and misinformation.
  • 🤝 Gupta emphasizes the need for a collaborative approach involving government, generative AI companies, platforms, journalists, and deepfake detection companies to solve the issue.
  • 🛡️ Deep Media has been instrumental in helping journalists detect deepfakes and is part of various initiatives aimed at addressing the problem, including DARPA's SemaFor and AI FORCE programs.
  • 📈 The company's platform aims to deliver scalable deepfake detection across various media types while keeping false-positive and false-negative rates low (a toy illustration of these error rates follows this list).
  • 👀 An AI's perspective on media is demonstrated through visual and audio examples, highlighting the technology's ability to analyze and learn from data.
  • 🌟 The highest quality deepfakes are created using proprietary generative models, showcasing the ongoing cat-and-mouse game between deepfake creation and detection.
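
As promised above, here is a toy illustration of the error rates in the takeaway about false positives and false negatives. The labels and predictions are made up for demonstration and have nothing to do with Deep Media's actual data; the point is only to make the two metrics concrete: a false positive is genuine content flagged as fake, a false negative is a deepfake that slips through.

```python
# Toy evaluation of a deepfake detector's false-positive / false-negative rates
# (hypothetical labels and predictions, purely to illustrate the takeaway above).
def fp_fn_rates(labels, predictions):
    """labels/predictions: 1 = fake, 0 = real."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)  # real flagged as fake
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)  # fake missed
    return fp / max(labels.count(0), 1), fn / max(labels.count(1), 1)

labels      = [0, 0, 0, 0, 1, 1, 1, 1]   # ground truth: four real items, four fakes
predictions = [0, 0, 0, 1, 1, 1, 1, 0]   # detector output for each item
fpr, fnr = fp_fn_rates(labels, predictions)
print(f"false-positive rate: {fpr:.0%}, false-negative rate: {fnr:.0%}")  # 25%, 25%
```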

Q & A

  • What is the primary concern expressed by the speaker about deepfakes?

    -The primary concern is that deepfakes have the potential to dismantle society by hijacking the human mind's trust in image, audio, and video content, leading to a crisis of trust and plausible deniability where anything could be claimed as fake.

  • What are the three fundamental technologies the speaker wants the legislators to keep in mind when discussing generative AI?

    -The three fundamental technologies are the Transformer, which is a type of architecture; the Generative Adversarial Network (GAN); and the Diffusion Model.

  • How does the speaker describe the current state of deepfake technology in terms of quality and cost?

    -The speaker describes the current state of deepfake technology as being of high quality with the cost to produce such content dropping significantly, from about 10 cents per minute to potentially 1 cent per minute.

  • What are the potential impacts of deepfakes on political elections as mentioned in the transcript?

    -The impacts include political assassination through fake videos of candidates in compromising situations, manipulating public opinion by showing politicians in a false but favorable light, and creating a scenario where politicians can deny any negative content as being a deepfake.

  • What solution approach does the speaker propose to combat the deepfake problem?

    -The speaker proposes a collaborative solution involving government stakeholders, generative AI companies, platforms, investigative journalists, local journalists, and deepfake detection companies. These groups need to work together to develop and adopt technologies that can effectively detect and label real and fake content.

  • What is the significance of the speaker's mention of the 'tragedy of the commons' in the context of deepfakes?

    -The 'tragedy of the commons' refers to a situation where individual users acting in their own self-interest deplete a shared resource, in this case, the trust in media content. The speaker suggests that by properly legislating deepfakes, the negative externalities such as fraud and misinformation can be internalized, leading to a healthier AI ecosystem.

  • How does the speaker's company, Deep Media, contribute to the detection of deepfakes?

    -Deep Media contributes by developing technology that detects deepfakes with very low false-positive and false-negative rates. They work with journalists and news organizations, take part in initiatives like DARPA's SemaFor and AI FORCE programs, and collaborate with other companies in the Content Authenticity Initiative to label and authenticate content.

  • What is the role of generative AI technology in both creating and detecting deepfakes?

    -Generative AI technology is used to create high-quality deepfakes, but it is also kept internally at Deep Media to train their detectors. This dual use of the technology allows Deep Media to stay ahead in the cat-and-mouse game of deepfake creation and detection.

  • What is the importance of correctly identifying real content as not fake in the context of deepfake detection?

    -It is critical to maintain the integrity of real content to avoid false accusations and to preserve public trust. A low false positive rate ensures that genuine content is not mistakenly labeled as a deepfake, which could otherwise lead to censorship and abuse of detection technology.

  • How does the speaker illustrate the AI's perspective in detecting deepfakes?

    -The speaker uses visual aids, such as graphs representing an AI's analysis of a person's voice, to show how AI detects deepfakes by identifying key points on a person's face and analyzing patterns in the audio data (a generic sketch of that kind of time-frequency audio analysis follows this Q&A section).

  • What is the potential future scenario the speaker warns about regarding the prevalence of deepfakes?

    -The speaker warns of a future where deepfakes could constitute up to 90% of the content on online platforms by 2030, which could lead to widespread misinformation and a complete loss of trust in digital media.

  • What is the speaker's view on the role of AI in society, contrasting the Terminator scenario with a more likely outcome?

    -The speaker believes that contrary to the Terminator scenario where AI is a direct threat, a more likely outcome is a society resembling George Orwell's '1984', where AI is used to manipulate and control through misinformation and deepfakes.
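
To ground the earlier Q&A answer about the "AI's perspective" on audio, the sketch below computes a short-time spectrogram, the kind of time-frequency view such graphs typically represent. This is a generic NumPy illustration on a synthetic signal, assumed for demonstration; it is not Deep Media's detection pipeline.

```python
# Generic sketch of the signal-level view a detector gets of audio: a short-time
# spectrogram turning a waveform into time-frequency patterns (illustrative only).
import numpy as np

def spectrogram(wave, frame=256, hop=128):
    """Split the waveform into overlapping windowed frames and take each frame's magnitude spectrum."""
    frames = [wave[i:i + frame] * np.hanning(frame)
              for i in range(0, len(wave) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))   # shape: (n_frames, frame // 2 + 1)

sr = 16000                                                  # 16 kHz sample rate
t = np.linspace(0, 1, sr, endpoint=False)
wave = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)  # synthetic "voice"
spec = spectrogram(wave)
print(spec.shape)                      # one row per time frame, one column per frequency bin
print(spec.mean(axis=0).argmax())      # dominant bin; bin * sr / frame is roughly 440 Hz for this tone
```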

Outlines

00:00

💡 Introduction to Deepfakes and Their Impact

Rijul Gupta, the founder of Deep Media, introduces himself as a tech entrepreneur with a focus on machine learning and generative AI. He explains the concept of deepfakes: AI-manipulated images, audio, or video created to deceive or harm. Gupta emphasizes the rapid advancement and decreasing cost of creating deepfakes, which poses a significant threat to society because it enables political manipulation and misinformation. He outlines the importance of understanding the underlying technologies, such as Transformer models, Generative Adversarial Networks (GANs), and diffusion models. Gupta also highlights the need for collaboration among various stakeholders, including government, media, and AI companies, to address the deepfake problem effectively.

05:01

🛠️ Solutions to the Deepfake Challenge

Gupta discusses potential solutions to the deepfake problem, emphasizing the role of free-market dynamics and proper legislation in addressing the issue. He believes that deepfakes represent a market failure and that, with the right approach, AI can be a force for good. Gupta presents a vision where different sectors, including big tech platforms, work together to adopt technology that can detect and mitigate the spread of deepfakes. He showcases examples of how AI perceives and analyzes media, including audio, to differentiate between real and fake content. Gupta also demonstrates the capabilities of Deep Media's technology in detecting deepfakes, including a high-quality example that was correctly identified by their system. He concludes by reiterating the importance of a collaborative approach and offers to answer any questions, positioning himself as a resource for policymakers seeking technical insights on the subject.

Keywords

💡Deepfake

A deepfake refers to a synthetically manipulated image, audio, or video created using artificial intelligence that can be used to deceive or mislead. In the context of the video, deepfakes pose a significant threat to society by potentially disrupting trust in media and causing political harm. An example from the script is the mention of deepfakes of political figures like President Biden, Trump, and Hillary Clinton, which were used to manipulate public opinion.

💡Generative AI

Generative AI is a branch of artificial intelligence that involves the creation of new content, such as images, videos, or music, that did not exist before. It is the technology behind deepfakes. The video emphasizes the rapid advancement and potential societal impact of generative AI, highlighting its ability to create highly convincing fake content.

💡Transformer

A Transformer is a type of AI architecture that is fundamental to generative AI. It is used for processing sequential data and is a key component in the creation of deepfakes. The script mentions Transformers as one of the three fundamental technologies that generative AI is based on.
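
The script only names the Transformer; as a rough, generic illustration of the architecture's core operation, the sketch below computes scaled dot-product self-attention in NumPy. It is not code from the hearing or from Deep Media, and the dimensions are arbitrary toy values.

```python
# Minimal sketch of the scaled dot-product self-attention at the heart of a Transformer
# (illustrative only; toy sizes, random weights).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v               # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])           # how strongly each token attends to the others
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability before softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: attention weights sum to 1 per token
    return weights @ v                                # each output mixes information from all tokens

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                           # 4 tokens, 8-dimensional embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)         # (4, 8)
```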

💡Generative Adversarial Network (GAN)

A GAN is a type of AI algorithm that consists of two parts, a generator and a discriminator, which compete against each other to improve the quality of generated content. GANs are crucial in the creation of deepfakes, as they enable the generation of highly realistic synthetic media. The script identifies GANs as one of the core technologies behind generative AI.
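
The generator-versus-discriminator competition described above can be sketched as a training loop. The PyTorch outline below uses tiny stand-in networks and random data; it is a generic illustration of the adversarial setup, not the model behind any particular deepfake.

```python
# Generic sketch of the adversarial training loop behind a GAN (illustrative only).
import torch
from torch import nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))  # generator
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))           # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1) Discriminator: learn to score real samples high and generated ones low.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Generator: learn to produce samples the discriminator scores as real.
    g_loss = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random "real" data standing in for media features.
print(train_step(torch.randn(8, data_dim)))
```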

💡Diffusion Model

A diffusion model is a type of generative model used in AI that has been increasingly employed in creating deepfakes. It is one of the three key technologies mentioned in the script, contributing to the advancement of generative AI and the creation of increasingly convincing synthetic media.
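
As a minimal sketch of what a diffusion model works with, the snippet below implements only the forward noising process (the closed-form jump to step t of a linear noise schedule); the generative part, training a network to reverse this process, is noted in the comments. This is a generic illustration, not any specific image generator.

```python
# Forward (noising) process that a diffusion model learns to reverse (illustrative only).
import numpy as np

def noisy_sample(x0, t, betas, rng):
    """Closed-form jump to step t: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])          # cumulative fraction of signal kept up to step t
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)                  # linear noise schedule over 1000 steps
x0 = np.ones(8)                                        # stand-in for a clean image patch
rng = np.random.default_rng(0)
for t in (0, 100, 999):
    print(t, np.round(noisy_sample(x0, t, betas, rng)[:3], 3))
# Training teaches a network to predict the added noise at each step, so the process
# can be run in reverse, turning pure noise into a new synthetic image.
```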

💡Compute Resources

Compute resources refer to the hardware and software capabilities required to perform complex computations, such as training the AI models that generate deepfakes. The script highlights that creating deepfakes requires massive amounts of compute, indicating the scale and complexity of the technology.

💡Data

Data is a crucial component in the creation of deepfakes, as AI models need large datasets to learn and generate convincing synthetic content. The script mentions the need for massive amounts of data, implying the use of numerous identities to train the AI models.

💡Political Assassination

In the context of the video, political assassination refers to the use of deepfakes to harm a political figure's reputation or credibility. The script provides examples of deepfakes used to create false narratives about political figures, which can influence public opinion and election outcomes.

💡Plausible Deniability

Plausible deniability is the concept where someone can claim that an action or event, such as a deepfake, was not their doing, thus avoiding responsibility. The video discusses how the prevalence of deepfakes can lead to a situation where politicians or individuals can deny the authenticity of real content, causing confusion and undermining trust.

💡Solutions Approach

A solutions approach in the context of the video refers to the proactive development and implementation of strategies to counteract the negative effects of deepfakes. The speaker advocates for a collaborative effort between various stakeholders, including government, AI companies, and platforms, to develop and adopt technologies that can detect and mitigate the impact of deepfakes.

💡Negative Externality

A negative externality is an unintended negative consequence that affects a third party who is not directly involved in an economic transaction. In the script, the speaker describes the harm caused by deepfakes, such as fraud and misinformation, as a negative externality that needs to be addressed through proper legislation and regulation.

Highlights

Rijul Gupta, founder of Deep Media, shares his insights on the impact of deepfakes on society and elections.

Deepfakes are synthetically manipulated, AI-generated images, audio, or video that can mislead or harm.

Gupta emphasizes the importance of understanding the technology behind deepfakes, including the Transformer architecture, GANs, and diffusion models.

The cost of producing deepfakes is rapidly decreasing, with video costs dropping from 10 cents to potentially 1 cent per minute.

By 2030, it's estimated that up to 90% of online content could be deepfakes.

Deepfakes have already impacted elections, with manipulated videos of political figures causing significant issues.

Gupta discusses the potential for deepfakes to be used for political assassination or to make politicians seem more relatable.

The real threat of deepfakes lies in their ability to undermine trust in real content, leading to plausible deniability and potential misuse.

Deep Media is working on solutions to detect deepfakes and requires collaboration from various stakeholders, including government and AI companies.

Gupta highlights the importance of labeling real and fake content to combat the spread of deepfakes.

Deep Media has assisted journalists from CNN, the Washington Post, and Forbes in detecting and reporting on deepfakes.

The company is part of the DARPA SemaFor and AI FORCE programs aimed at solving the deepfake problem.

Gupta believes in the free market and that AI can be used for good, but deepfakes represent a market failure.

Deep Media uses its own generative AI technology to train detectors and set the gold standard for deepfake detection.

An example of a high-quality deepfake featuring Kari Lake is presented, demonstrating the capabilities of current technology.

Gupta stresses the need for staying ahead in the cat-and-mouse game between deepfake creation and detection.

The presentation concludes with an invitation for policymakers to engage in solutions from a tech and entrepreneurial perspective.