Why you shouldn't believe the AI extinction lie

The Hated One
3 May 2024 · 10:30

Summary

TL;DR: The video script discusses the manipulation behind the push for AI regulation by powerful corporations. It argues that the portrayal of AI as an existential threat is exaggerated and used to justify strict licensing and control over AI development, favoring large tech companies. The script highlights the importance of open-source AI, which allows for public scrutiny and equitable access, and calls for support for open-source projects to democratize AI. It also urges viewers to engage politically to protect open-source principles in AI legislation.

Takeaways

  • 🧩 There's a push to treat AI as an existential threat, akin to nuclear war or a global pandemic, to justify urgent and exclusive control by a select few 'good guys'.
  • 🛡️ Despite the fear-mongering, the same narrative calls for accelerating AI development while keeping it under the control of a select few, to prevent it from falling into 'the wrong hands'.
  • 🗳️ A conflict is emerging between those who want AI to be tightly controlled and those advocating for open and accessible AI for all, with the latter being the more righteous cause according to the script.
  • 💡 The year 2023 marked a significant surge in AI's mainstream presence with the release of GPT-4 and a massive lobbying effort by big tech to influence AI regulation.
  • 💸 Big tech and other industries spent $1 billion on lobbying in 2023, a year that saw a dramatic increase in AI-lobbying organizations, aiming to shape AI regulation to their advantage.
  • 📋 The lobbying led to a bipartisan bill in the US Senate that proposed federal regulation of AI, requiring companies to register, seek licenses, and be monitored by federal agencies.
  • 🚫 Such regulation would likely end open-source AI, as companies would be unwilling to grant open access to models that could hold them liable for misuse.
  • 🕊️ Open-source AI, which allows free use and distribution of software, is under threat from the proposed licensing regime, which favors large, closed, proprietary models.
  • 🤔 The script questions the validity of claims that superintelligent AI is possible or that it will continue to increase indefinitely, suggesting that current AI models are nearing a ceiling.
  • 💼 There's an ideological push by a billionaire group promoting AI as an extinction risk, which is criticized as a means to influence policy and maintain control over AI development.
  • 🔍 A counter-argument is presented by scientists and researchers, including Professor Andrew Ng, who oppose the fear tactics used by big tech and advocate for open-source AI.
  • 🌐 The script calls for public support of open-source AI, political activism to protect open-source principles, and participation in shaping AI legislation to prevent monopolization by a few powerful entities.

Q & A

  • What is the main concern expressed about the development of AI in the transcript?

    -The main concern is that there is a powerful motivation to portray AI as an existential threat, similar to nuclear war or a global pandemic, and that a conflict is arising between those who want to keep AI closed off and tightly controlled and those who want it to be open and accessible to all.

  • What was the significant event in 2023 regarding AI mentioned in the transcript?

    -In 2023, AI exploded into the mainstream with the release of GPT-4, which led to chatbots, generative images, and AI videos flooding the Internet.

  • How did big tech companies respond to the rise of AI in 2023?

    -Big tech companies pooled hundreds of organizations in a massive campaign to lobby the US federal government, with the number of AI-lobbying organizations increasing from 158 in 2022 to 450 in 2023.

  • What was the outcome of the lobbying efforts by big tech companies in 2023?

    -The lobbying efforts resulted in a bipartisan bill proposed in the US Senate that would have the federal government regulate artificial intelligence nationwide, creating a new authority that any company developing AI would have to register with and seek a license from.

  • What is the potential impact of the proposed AI regulation on startups and open source AI?

    -The proposed regulation could mark the end of open source AI, as it would be difficult for new startups to comply with a strict licensing regime, and no one would want to give open access to their AI model that could hold them liable for abuse.

  • What is the definition of 'open source' as mentioned in the transcript?

    -Open source means that anyone could use or distribute software freely without the author's permission.

  • What role did the Future of Life Institute play in the narrative around AI?

    -The Future of Life Institute is an organization that has been involved in promoting the idea that AI poses an existential threat, and it has been associated with high-profile figures like Elon Musk in calling for a pause on AI development.

  • What is the counter-argument to the idea that AI is an existential threat?

    -The counter-argument is that there is no proof or consensus that future superintelligence is possible, and that current AI models are reaching a ceiling due to limitations in training data and increasing computational costs.

  • What is the role of billionaire philanthropies in the push for AI regulation?

    -Billionaire philanthropies are bankrolling research, YouTube content, and news coverage that pushes the idea of AI as an extinction risk, influencing governments to focus on future hypothetical threats while trusting them with the development of 'good' AI.

  • What is the stance of Professor Andrew Ng on the proposed AI regulation and its impact on open source AI?

    -Professor Andrew Ng rejects the idea that AI could pose an extinction-level threat and believes that big tech is using fear to damage open source AI, as open source would mean anyone would have open access to the technology.

  • What is the solution proposed in the transcript to prevent big tech from monopolizing AI development?

    -The solution proposed is to support open source AI projects that democratize access to artificial intelligence, sign letters and petitions calling for recognition and protection of open source principles, and participate in the political process to ensure legislation does not kill open source.

  • What is the significance of the leaked Google engineer document mentioned in the transcript?

    -The leaked document reveals that both Google and OpenAI are losing the AI arms race to open source, which has developed smaller scale models more appropriate for end users at a lower cost, suggesting that competing with open source is a losing battle.

Outlines

00:00

🤖 AI as an Existential Threat and Lobbying Efforts

The paragraph discusses the narrative that AI is a significant existential threat, akin to nuclear war or a global pandemic, which some believe should be met with urgency. However, it argues against halting AI development, suggesting that it should be accelerated, but controlled by 'good guys' to prevent misuse. The speaker, 'the Hated One', criticizes the manipulation of public opinion to favor powerful corporations. The year 2023 is highlighted as a turning point for AI's mainstream presence, with GPT-4's release. The paragraph details an unprecedented lobbying campaign by big tech, which included a diverse range of industries, resulting in significant spending to influence AI regulation in their favor. The goal was to create a federal authority that would oversee AI development through licensing and monitoring, which critics argue would stifle innovation and open-source AI, benefiting only large corporations.

05:02

🚀 The Battle for Open AI and the Role of Billionaires

This paragraph delves into the debate surrounding open and closed AI development. It describes the opposition between those who advocate for strict control over AI and those who support open access. The narrative suggests that powerful entities are pushing for control over AI through legislation and fearmongering about its potential risks. The speaker points out that there is no consensus on the possibility of a superintelligent AI and criticizes the idea that current AI models could indefinitely increase in intelligence. The paragraph also exposes a billionaire-backed movement that promotes AI as an extinction-level threat while simultaneously advocating for trust in their own AI development. The situation is framed as a case of regulatory capture, where big tech and aligned groups have stepped into a power vacuum to shape policy in their favor.

10:02

🛡️ The Fight for Open Source AI and Public Participation

The final paragraph emphasizes the importance of open source AI and the threat posed by big tech's lobbying efforts to monopolize the field. It mentions a leaked document from a Google engineer acknowledging that open source AI is gaining ground due to its focus on smaller, more user-appropriate models. The paragraph highlights an open letter from the Mozilla Foundation, signed by influential figures, advocating for the opening of AI's source code and science. The letter calls for open access and public oversight, which contrasts with big tech's push for proprietary models. The speaker encourages viewers to support open source AI and to participate in political advocacy to protect open source principles, suggesting that public involvement is crucial in shaping legislation that could impact the future of AI development.

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video's context, AI is portrayed as a powerful technology that some view as an existential threat, while others see it as a tool for advancement. The script discusses the potential dangers of AI and the debate over regulating its development.

💡Existential threat

An existential threat is a danger or risk that poses the potential to completely destroy or nullify something, often used to describe threats to human existence. The video suggests that there is a powerful motivation to consider AI as an existential threat, likening it to the urgency of nuclear war or a global pandemic, to justify strict control and regulation.

💡Regulation

Regulation refers to the rules and directives made and maintained by an authority. In the script, the big tech companies are said to lobby for AI regulation that suits their interests, which would involve licensing, monitoring, and liability for AI models. This is part of their strategy to control AI development and maintain a competitive advantage.

💡Open source AI

Open source AI denotes AI models and software whose source code is made available for anyone to use, modify, and distribute freely. The video argues that open source AI could be stifled by the proposed regulations, which would favor big tech companies with proprietary models and hinder the democratization of AI technology.

💡Lobbying

Lobbying is the act of attempting to influence decisions made by officials in government, typically legislators. The script describes an unprecedented lobbying effort by big tech companies to shape AI regulation in their favor, spending $1 billion to influence lawmakers and promote their interests.

💡Proprietary models

Proprietary models refer to products or technologies that are owned by a company and are not shared with others. In the context of AI, proprietary models are AI systems that are not open to the public and are controlled by a single entity. The video suggests that strict regulation could lead to a monopoly of AI by a few companies with proprietary models.

💡Fearmongering

Fearmongering is the act of deliberately spreading fear or alarm to manipulate people's emotions or actions. The video claims that big tech companies are using fearmongering tactics to convince policymakers and the public that AI poses an existential threat, thereby justifying their push for strict regulation.

💡OpenAI

OpenAI is a research laboratory that develops AI technologies with the stated goal of ensuring that AI's benefits are as widely and evenly distributed as possible. However, the script implies that OpenAI, along with other big tech companies, may be more focused on maintaining control over AI development rather than promoting open access.

💡Regulatory capture

Regulatory capture is a form of government failure that occurs when a regulatory agency, intended to act in the public interest, instead advances the commercial or political concerns of the industry or sector it is supposed to be regulating. The video suggests that big tech and effective altruists have filled a power vacuum in AI regulation, leading to a situation where they are shaping policies to serve their interests.

💡Public scrutiny and accountability

Public scrutiny and accountability refer to the process by which the actions and decisions of individuals or organizations are examined and questioned by the public to ensure transparency and responsibility. The video emphasizes the importance of open source AI for allowing public oversight, which can lead to greater trust and participation in the development and use of AI technologies.

💡Grassroots movement

A grassroots movement is one that is initiated and controlled by the people at the local level, rather than by centralized or hierarchical organizations. The video calls for a grassroots effort to support open source AI and to make sure that the development and regulation of AI technology are inclusive and democratic.

Highlights

There is a powerful motivation to keep you thinking that AI is an existential threat.

We should accelerate AI development as fast as we can, as long as it’s controlled by the good guys.

A conflict is arising between those who want to keep AI closed off and those who want to leave it open and accessible.

In 2023, AI exploded into the mainstream with GPT-4, chatbots, generative images, and AI videos.

Big tech pooled hundreds of organizations in a massive campaign to lobby the US federal government, spending $1 billion on lobbying.

A bipartisan bill proposed in the US Senate would regulate AI nationwide, creating a new authority that companies developing AI would need to register with and seek a license from.

Strict licensing regimes would mark the end of open-source AI because nobody would want to give open access to their AI models that could hold them liable for abuse.

Both sides claim they are doing it for humanity, but only one of them is right.

There is no proof or consensus that future superintelligence is possible at all.

AI models are already reaching a ceiling in terms of training data and computational costs.

Billionaire philanthropies are pushing the idea of AI as an extinction risk to persuade governments to focus on future hypothetical threats.

The open-source community is getting ahead by focusing on smaller-scale models more appropriate for the end user.

The Mozilla Foundation's open letter calls for opening up the source code and science of artificial intelligence.

Open source allows public scrutiny and accountability, enabling everyone to participate in making it better.

We need to wage this battle politically to protect open source principles with public access and oversight.

Transcripts

00:00

There is a powerful motivation to keep you thinking that AI is an existential threat. [0] That we should treat it with the same level of urgency as a nuclear war or a global pandemic. [1] And yet, we shouldn't stop developing AI. We should accelerate it as fast as we can. As long as it's the select few good guys that get to control it. We must not let AI fall into the wrong hands. [2] But there is a growing opposition to this movement. A conflict is arising. There are those who want to keep AI closed off and tightly controlled, and those who want to leave it open and accessible to all. Even though both sides claim they are doing it for humanity, only one of them is right. This is an arms race for who gets to dominate AI development and who will be left out.

00:41

I am the Hated One, and I make explainer essays like this one, so far still without any sponsors… or money… or friends… So let me show you how you are being manipulated into handing over all of AI technology to some of the most powerful corporations.

00:55

2023 was the year when AI exploded into the mainstream. GPT-4 was just released, with chatbots, generative images, and AI videos flooding the Internet like a hurricane of sensationalism. But behind closed doors, big tech pooled hundreds of organizations in a massive campaign to lobby the US federal government. The world has never seen such an organized lobbying effort. The number of AI-lobbying orgs spiked from 158 in 2022 to 450 in 2023. Which didn't just include the usual big tech culprits, but chip makers like AMD, media moguls like Disney, or big pharma like AstraZeneca. In total, they spent $1 billion on lobbying. $1 billion that, among other things, went to persuading lawmakers to get AI regulated precisely how they wanted it. [3] [12]

01:42

So what did they want? Well, all of that lobbying culminated in a bipartisan bill proposed in the US Senate. The bill would have the federal government regulate artificial intelligence nationwide. It would create a new authority that any company developing AI would have to register with and seek a license from. License is just a different word for permission. They would have to be monitored and audited by federal agencies, and they would be held liable for any harm caused by the use of their AI models. [4] [5] [6] [14]

02:10

Which, you'd have to ask yourself – how many new startups would have the funds to comply with such a strict licensing regime? This would mark the end of open source AI, because nobody would want to give anyone open access to their AI model that could hold them liable for abuse. Only big, closed, proprietary models would survive this. [7] [4]

02:27

Oh, I am gonna be mentioning open source quite a lot. Open source simply means that anyone could use or distribute software freely without the author's permission. Yeah, we could have had that, they just chose not to. [8]

02:41

How could this legislation proposal even be crafted? Well, it wasn't by accident. The two US senators proposing the bill had multiple hearings with OpenAI, Microsoft and Anthropic, the biggest players in the industry. Their witness testimonies led to the drafting of the bill, which was later endorsed by an obscure organization called the Future of Life Institute. [5, 6] [10] You've never heard of them before, but you've heard of Elon Musk signing a letter calling for a 6-month pause on AI development or the world would end. That was the Future of Life Institute. [0] Of course, nobody actually paused AI development.

03:10

Everyone who signed the letter went back to their work developing AI faster than ever before. [11] But governments should totally "step in and institute a moratorium". Sam Altman didn't even sign this letter. [9] But he signed another one from another obscure org called the Center for AI Safety. This one came with a simple statement – "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". [1]

03:35

Both of these stunts were picked up by the media, which served well after years of conditioning that AI poses an existential threat, far greater than anything else. But what's the implication of treating AI with the same level of dread as literal nukes? You gotta prevent proliferation. You can't allow free and open access. Only a handful of players should be allowed to develop this technology, and they should keep it closed, confidential and proprietary. You can't allow this technology to fall into the wrong hands. [7] [13]

04:19

Which makes sense, if they are right. But they are not right. First of all, there is actually no proof or consensus that future superintelligence is possible at all. This is the first premise of an AI apocalypse. But it has no merit. [14b] Yes, there is a non-zero chance it could happen, just like there is a non-zero chance we could get invaded by aliens. [13]

04:39

The second premise claims that AI will continue increasing its intelligence indefinitely. This is false. Our current AI models are already reaching a ceiling – training data is running out. Quantity-wise, there is actually little space left for scaling. AI-generated data eventually becomes poisonous to the point models deteriorate. Computation is becoming increasingly more expensive both in terms of operational costs and resource costs. AI would need major scientific breakthroughs in order to significantly increase its intelligence from where it is now. [14b] [15] The conclusion is therefore also false. It's not certain at all that AI will come close to human intelligence levels, not to mention surpassing them. We shouldn't act as though AI is a threat greater than a pandemic or climate change.

05:19

There is actually a powerful billionaire group that is bankrolling research, YouTube content, and news coverage pushing the idea of AI as an extinction risk. It's an ideological monolith of the longtermist effective altruism movement, which is a rabbit hole so deep this will be its own video, so let me know if you want it. But to give you an abridged version, billionaire philanthropies are paying fellows and staffers working closely with governments. They are working to persuade them to focus on future hypothetical threats of AI, while simultaneously asking them to trust that the AI they are developing will be the good guy. Fear AI, except when it's our AI. Then you should trust us completely. What we have here is a fantastic case of regulatory capture. Nobody among regulators had enough expertise, so this organized group of big tech companies and effective altruists stepped in to fill the power vacuum. [4] [12] [16] [17] [18]

06:14

But there is a growing number of those who stand strongly against all of this. They say that if we do this, if we allow licensing and strict regulation like big tech lobbies for, it will be the end of open access to this technology. It will lock almost everyone out of AI development and will leave only the few powerful incumbents in the game with closely guarded proprietary AI models. Open source alternatives that could be distributed for free would be regulated out of existence. [4]

06:36

Professor Andrew Ng is someone you don't see sensational headlines about too often. But he is a key figure. He is the one who taught Sam Altman from OpenAI, and he stood behind AI projects of Google, Baidu and Amazon. And now he says that big tech is fearmongering policy makers into drafting legislation that would kill their competition. He rejects the idea that AI could pose an extinction-level threat, and he thinks big tech is using fear to damage open source AI. Because open source would mean anyone would have open access to this technology. [7] [13] [21]

07:06

He is not alone in this. There is a growing counter-faction of scientists and researchers who are also calling out big tech's true motivation. They too argue that this is just an attempt to hijack regulation to entrench incumbent AI companies and to focus policies on future existential dangers instead of addressing current and immediate problems. They warn that the licensing regime big tech is calling for would monopolize AI development, as they would be the only ones able to accommodate it. [19] [4]

07:33

So what is the solution then? How can we prevent this small group of the most powerful companies in the world from capturing the AI market for themselves? There is this secret document written internally by a Google engineer that leaked online. The engineer says that both Google and OpenAI are losing the AI arms race to a third faction. This third faction being open source.

07:52

This document is a beautiful read of a terrified mind that realized they've been doing AI wrong all along. It details how Google slept at the wheel while the open source community got way ahead of the game by focusing on smaller-scale models more appropriate for the end user. He lists multiple open source AI projects that do what Google's or OpenAI's large models do with comparable quality but at a lower cost. How these open source projects solved the scaling problem with better quality data, and how competing with open source is a losing battle. I love every single word of this letter. And this is where your contribution steps in. [15]

08:29

There is an open letter from the Mozilla Foundation, the guys that make Firefox, that calls for opening up the source code and science of artificial intelligence. This letter was signed by Andrew Ng, of course, but also by Jimmy Wales, the founder of Wikipedia, by folks from Creative Commons, the Electronic Frontier Foundation, the Linux Foundation, academia, and even a few souls from big tech. [20]

08:48

This letter got next to zero coverage in the media. But it's clear this is what big tech fears and wants to prevent with their lobbying power. They don't want regulators to realize that open source AI might be better, more equitable and safer in the long term. Open source takes power and control away from the top players and gives it to anyone with a laptop. Open source allows public scrutiny and accountability.

09:12

It's what allows researchers, experts, journalists and users to audit, question and verify what's going on. This is what can earn people's trust, because it allows everyone to participate in making it better rather than just trusting a selection of executives to do what's best for humanity after they serve their shareholders.

09:27

This is where you can play a role. You can sign this letter too. And you can also support open source projects that work on democratizing access to artificial intelligence. Rather than paying for premium subscriptions for proprietary AI models, use and donate to open source ones instead. There are tons of them available. By using them, you are taking control of this technology, you are protecting your privacy, and you are enabling everyone to benefit equally. We also need to wage this battle politically.

09:53

Open source is a grassroots movement, and when crucial legislation is being crafted we need to let our voices be heard. The government has the power to craft legislation that can kill open source. They are already doing so in the US and Europe. It's important that you take a stance whenever your state or country is making decisions about this. Sign letters and petitions that call for recognition and protection of open source principles with public access and oversight. [22]

10:13

There is tons more that I gotta cover about this. Rabbit holes that reveal the true power of the billionaire lobby. For now, if you like what I do, please support me on Patreon and watch another one of my videos. I have no sponsors and my ad income doesn't pay for my work, so I am dependent on your support. Thank you.


Related Tags
AI Arms Race · Open Source · Proprietary Models · Regulatory Lobbying · Big Tech Influence · AI Regulation · Existential Threat · Open Source AI · Tech Monopoly · Policy Making · AI Development