Why you shouldn't believe the AI extinction lie
Summary
TL;DR: The video script discusses the manipulation behind the push for AI regulation by powerful corporations. It argues that the portrayal of AI as an existential threat is exaggerated and used to justify strict licensing and control over AI development, favoring large tech companies. The script highlights the importance of open-source AI, which allows for public scrutiny and equitable access, and calls for support for open-source projects to democratize AI. It also urges viewers to engage politically to protect open-source principles in AI legislation.
Takeaways
- 🧩 There's a push to treat AI as an existential threat, akin to nuclear war or a global pandemic, to justify urgent and exclusive control by a select few 'good guys'.
- 🛡️ Despite the fear-mongering, the same voices argue for accelerating AI development while keeping it under the control of a select few, ostensibly to prevent it from falling into the wrong hands.
- 🗳️ A conflict is emerging between those who want AI to be tightly controlled and those advocating for open and accessible AI for all, with the latter being the more righteous cause according to the script.
- 💡 The year 2023 marked a significant surge in AI's mainstream presence with the release of GPT-4 and a massive lobbying effort by big tech to influence AI regulation.
- 💸 Big tech and other industries spent $1 billion on lobbying in 2023, a year that saw a dramatic increase in AI-lobbying organizations, aiming to shape AI regulation to their advantage.
- 📋 The lobbying led to a bipartisan bill in the US Senate that proposed federal regulation of AI, requiring companies to register, seek licenses, and be monitored by federal agencies.
- 🚫 Such regulation would likely end open-source AI, as companies would be unwilling to grant open access to models that could hold them liable for misuse.
- 🕊️ Open-source AI, which allows free use and distribution of software, is under threat from the proposed licensing regime, which favors large, closed, proprietary models.
- 🤔 The script questions the validity of claims that superintelligent AI is possible or that it will continue to increase indefinitely, suggesting that current AI models are nearing a ceiling.
- 💼 There's an ideological push by a billionaire group promoting AI as an extinction risk, which is criticized as a means to influence policy and maintain control over AI development.
- 🔍 A counter-argument is presented by scientists and researchers, including Professor Andrew Ng, who oppose the fear tactics used by big tech and advocate for open-source AI.
- 🌐 The script calls for public support of open-source AI, political activism to protect open-source principles, and participation in shaping AI legislation to prevent monopolization by a few powerful entities.
Q & A
What is the main concern expressed about the development of AI in the transcript?
-The main concern is that there is a powerful motivation to consider AI as an existential threat, similar to nuclear war or a global pandemic, and that there is a conflict arising between those who want to keep AI closed off and tightly controlled versus those who want it to be open and accessible to all.
What was the significant event in 2023 regarding AI mentioned in the transcript?
-In 2023, AI exploded into the mainstream with the release of GPT-4, which led to chatbots, generative images, and AI videos flooding the Internet.
How did big tech companies respond to the rise of AI in 2023?
-Big tech companies pooled hundreds of organizations in a massive lobbying campaign to the US federal government, with the number of AI-lobbying organizations increasing from 158 in 2022 to 450 in 2023.
What was the outcome of the lobbying efforts by big tech companies in 2023?
-The lobbying efforts resulted in a bipartisan bill proposed in the US Senate that would have the federal government regulate artificial intelligence nationwide, creating a new authority that any company developing AI would have to register with and seek a license from.
What is the potential impact of the proposed AI regulation on startups and open source AI?
-The proposed regulation could mark the end of open source AI, as it would be difficult for new startups to comply with a strict licensing regime, and no one would want to give open access to their AI model that could hold them liable for abuse.
What is the definition of 'open source' as mentioned in the transcript?
-Open source means that anyone could use or distribute software freely without the author's permission.
What role did the Future of Life Institute play in the narrative around AI?
-The Future of Life Institute is an organization that has been involved in promoting the idea that AI poses an existential threat, and it has been associated with high-profile figures like Elon Musk in calling for a pause on AI development.
What is the counter-argument to the idea that AI is an existential threat?
-The counter-argument is that there is no proof or consensus that future superintelligence is possible, and that current AI models are reaching a ceiling due to limitations in training data and increasing computational costs.
What is the role of billionaire philanthropies in the push for AI regulation?
-Billionaire philanthropies are bankrolling research, YouTube content, and news coverage that pushes the idea of AI as an extinction risk, influencing governments to focus on future hypothetical threats while trusting them with the development of 'good' AI.
What is the stance of Professor Andrew Ng on the proposed AI regulation and its impact on open source AI?
-Professor Andrew Ng rejects the idea that AI could pose an extinction-level threat and believes that big tech is using fear to damage open source AI, as open source would mean anyone would have open access to the technology.
What is the solution proposed in the transcript to prevent big tech from monopolizing AI development?
-The solution proposed is to support open source AI projects that democratize access to artificial intelligence, sign letters and petitions calling for recognition and protection of open source principles, and participate in the political process to ensure legislation does not kill open source.
What is the significance of the leaked Google engineer document mentioned in the transcript?
-The leaked document reveals that both Google and OpenAI are losing the AI arms race to open source, which has developed smaller scale models more appropriate for end users at a lower cost, suggesting that competing with open source is a losing battle.
Outlines
🤖 AI as an Existential Threat and Lobbying Efforts
The paragraph discusses the narrative that AI is a significant existential threat, akin to nuclear war or a global pandemic, which some believe should be met with urgency. However, it argues against halting AI development, suggesting that it should be accelerated, but controlled by 'good guys' to prevent misuse. The speaker, 'the Hated One', criticizes the manipulation of public opinion to favor powerful corporations. The year 2023 is highlighted as a turning point for AI's mainstream presence, with GPT-4's release. The paragraph details an unprecedented lobbying campaign by big tech, which included a diverse range of industries, resulting in significant spending to influence AI regulation in their favor. The goal was to create a federal authority that would oversee AI development through licensing and monitoring, which critics argue would stifle innovation and open-source AI, benefiting only large corporations.
🚀 The Battle for Open AI and the Role of Billionaires
This paragraph delves into the debate surrounding open and closed AI development. It describes the opposition between those who advocate for strict control over AI and those who support open access. The narrative suggests that powerful entities are pushing for control over AI through legislation and fearmongering about its potential risks. The speaker points out that there is no consensus on the possibility of a superintelligent AI and criticizes the idea that current AI models could indefinitely increase in intelligence. The paragraph also exposes a billionaire-backed movement that promotes AI as an extinction-level threat while simultaneously advocating for trust in their own AI development. The situation is framed as a case of regulatory capture, where big tech and aligned groups have stepped into a power vacuum to shape policy in their favor.
🛡️ The Fight for Open Source AI and Public Participation
The final paragraph emphasizes the importance of open source AI and the threat posed by big tech's lobbying efforts to monopolize the field. It mentions a leaked document from a Google engineer acknowledging that open source AI is gaining ground due to its focus on smaller, more user-appropriate models. The paragraph highlights an open letter from the Mozilla Foundation, signed by influential figures, advocating for the opening of AI's source code and science. The letter calls for open access and public oversight, which contrasts with big tech's push for proprietary models. The speaker encourages viewers to support open source AI and to participate in political advocacy to protect open source principles, suggesting that public involvement is crucial in shaping legislation that could impact the future of AI development.
Keywords
💡AI
💡Existential threat
💡Regulation
💡Open source AI
💡Lobbying
💡Proprietary models
💡Fearmongering
💡OpenAI
💡Regulatory capture
💡Public scrutiny and accountability
💡Grassroots movement
Highlights
There is a powerful motivation to keep you thinking that AI is an existential threat.
We should accelerate AI development as fast as we can, as long as it’s controlled by the good guys.
A conflict is arising between those who want to keep AI closed off and those who want to leave it open and accessible.
In 2023, AI exploded into the mainstream with GPT-4, chatbots, generative images, and AI videos.
Big tech pooled hundreds of organizations in a massive campaign to lobby the US federal government, spending $1 billion on lobbying.
A bipartisan bill proposed in the US Senate would regulate AI nationwide, creating a new authority that companies developing AI would need to register with and seek a license from.
Strict licensing regimes would mark the end of open-source AI because nobody would want to give open access to their AI models that could hold them liable for abuse.
Both sides claim they are doing it for humanity, but only one of them is right.
There is no proof or consensus that future superintelligence is possible at all.
AI models are already reaching a ceiling in terms of training data and computational costs.
Billionaire philanthropies are pushing the idea of AI as an extinction risk to persuade governments to focus on future hypothetical threats.
The open-source community is getting ahead by focusing on smaller-scale models more appropriate for the end user.
The Mozilla Foundation's open letter calls for opening up the source code and science of artificial intelligence.
Open source allows public scrutiny and accountability, enabling everyone to participate in making it better.
We need to wage this battle politically to protect open source principles with public access and oversight.
Transcripts
There is a powerful motivation to keep you thinking that AI
is an existential threat. [0] That we should treat it with the
same level of urgency as a nuclear war or a global pandemic. [1]
And yet, we shouldn’t stop developing AI. We should accelerate it as fast
as we can. As long as it’s the select few good guys that get to control it. We must
not let AI fall into wrong hands. [2] But there is a growing opposition to this
movement. A conflict is arising. There are those who want to keep AI closed off and
tightly controlled and those who want to leave it open and accessible to all. Even
though both sides claim they are doing it for humanity, only one of them is right. This is
an arms race over who gets to dominate AI development and who will be left out.
I am the Hated One, and I make explainer essays like this one, so far still without
any sponsors… or money… or friends… So let me show you how you are being manipulated
into handing over all of AI technology to some of the most powerful corporations.
2023 was the year when AI exploded into the mainstream. GPT-4 was just released,
with chatbots, generative images, and AI videos flooding the Internet
like a hurricane of sensationalism. But behind closed doors the big tech pooled
hundreds of organizations in a massive campaign to lobby the US federal government. The world
has never seen such an organized lobbying effort. The number of AI-lobbying orgs spiked from 158 in
2022 to 450 in 2023. Which didn’t just include the usual big tech culprits, but chip makers like AMD,
media moguls like Disney, or big pharma like AstraZeneca. In total, they spent $1 billion
on lobbying. $1 billion that among other things went to persuading law makers to get AI regulated
precisely how they wanted it. [3] [12] So what did they want? Well, all of that
lobbying culminated in a bipartisan bill proposed in the US Senate. The bill would have the federal
government regulate artificial intelligence nation-wide. It would create a new authority that
any company developing AI would have to register with and seek a license from. A license is just a
different word for permission. They would have to be monitored and audited by federal agencies
and they would be held liable for any harm caused by the use of their AI models. [4] [5] [6] [14]
Which, you’d have to ask yourself – how many new startups would have the funds to
comply with such a strict licensing regime? This would mark the end of open source AI,
because nobody would want to give anyone open access to their AI model that could
hold them liable for abuse. Only big, closed, proprietary models would survive this. [7] [4]
Oh, I am gonna be mentioning open source quite a lot. Open source simply means that
anyone could use or distribute software freely without the author’s permission. Yeah, we could
have had that, they just chose not to. [8] How could this legislation proposal even be
crafted? Well, it wasn’t by accident. The two US senators proposing the bill had multiple hearings
with OpenAI, Microsoft and Anthropic, the biggest players in the industry. Their witness testimonies
led to the drafting of the bill, which was later endorsed by an obscure organization
called Future of Life Institute. [5, 6] [10] You’ve never heard of them before, but you’ve
heard of Elon Musk signing a letter calling for a 6-month pause on AI development or the world would
end. That was the Future of Life Institute. [0] Of course, nobody actually paused AI development.
Everyone who signed the letter went back to their work developing AI faster than ever before. [11]
But governments should totally “step in and institute a moratorium”. Sam
Altman didn’t even sign this letter. [9] But he signed another one from another obscure
org called Center for AI Safety. This one came with a simple statement – “Mitigating the risk
of extinction from AI should be a global priority alongside other societal-scale
risks such as pandemics and nuclear war”. [1] Both of these stunts were picked up by media,
which served well after years of conditioning that AI poses an existential threat, far greater than
anything else. But what’s the implication of treating AI with the same level of dread as
literal nukes? You gotta prevent proliferation. You can’t allow free and open access. Only a
handful of players should be allowed to develop this technology, and they should keep it closed,
confidential and proprietary. You can’t allow this technology to fall into the wrong hands. [7] [13]
Which makes sense, if they are right. But they are not right. First of all, there is actually no
proof or consensus that future superintelligence is possible at all. This is the first premise
of the AI apocalypse argument, and it has no merit. [14b] Yes, there is a non-zero chance it could happen,
just like there is a non-zero chance we could get invaded by aliens. [13]
The second premise claims that AI will continue increasing its intelligence indefinitely. This
is false. Our current AI models are already reaching a ceiling – training data is running
out. Quantity-wise, there is actually little space left for scaling. AI generated data
eventually becomes poisonous to the point models deteriorate. Computation is becoming
increasingly more expensive both in terms of operational costs and resource costs. AI
would need major scientific breakthroughs in order to significantly increase its
intelligence from where it is now. [14b] [15] The conclusion is therefore also false. It’s not
certain at all that AI will come close to human intelligence levels, not to mention surpassing
them. We shouldn’t act as though AI is a threat greater than a pandemic or climate change.
There is actually a powerful billionaire group that is bankrolling research, YouTube content,
and news coverage pushing the idea of AI as an extinction risk. It’s an ideological monolith
of the longtermist effective altruism movement, which is a rabbit hole so deep this will be its
own video so let me know if you want it. But to give you an abridged version,
billionaire philanthropies are paying fellows and staffers working closely with governments
and regulators. They are working to persuade them to focus on future hypothetical threats of AI,
while simultaneously asking to be trusted that the AI they themselves are developing will be the good guy. Fear AI,
except when it’s our AI. Then you should trust us completely. What we have here is a fantastic case
of regulatory capture. Nobody among regulators had enough expertise, so this organized group of big
tech companies and effective altruists stepped in to fill the power vacuum. [4] [12] [16] [17] [18]
But there is a growing number of those that stand strongly against all of this. They say that if we
do this, if we allow licensing and strict regulation like the big tech lobbies for,
it will be the end of open access to this technology. It will lock almost everyone out
of AI development and will leave only the few powerful incumbents in the game with closely
guarded proprietary AI models. Open source alternatives that could be distributed for
free would be regulated out of existence. [4] Professor Andrew Ng is someone you don’t see
sensational headlines about too often. But he is a key figure. He is the one that taught Sam Altman
from OpenAI and he stood behind AI projects of Google, Baidu and Amazon. And now he says
that the big tech is fearmongering policy makers into drafting legislation that would kill their
competition. He rejects this idea that AI could pose an extinction-level threat and he thinks the
big tech is using fear to damage open source AI. Because open source would mean anyone would have
open access to this technology. [7] [13] [21] He’s not alone in this. There is a growing
counter-faction of scientists and researchers that are also calling out the big tech’s true
motivation. They too argue that this is just an attempt to hijack regulation to cement
incumbent AI companies and to focus policies on future existential dangers instead of addressing
current and immediate problems. They warn that the licensing regime the big tech is calling for
would monopolize AI development as they would be the only ones able to accommodate it. [19] [4]
So what is the solution then? How can we prevent this small group of the most powerful companies
in the world from capturing the AI market for themselves? There is a secret document written internally
by a Google engineer that leaked online. The engineer says that both Google and OpenAI are
losing the AI arms race to a third faction. This third faction being open source.
This document is a beautiful read of a terrified mind that realized they’ve been doing AI wrong
all along. It details how Google slept at the wheel while the open source community got way ahead of
the game by focusing on smaller scale models more appropriate for the end user. He lists multiple
open source AI projects that do what Google’s or OpenAI’s large models do with a comparable
quality but at a lower cost. How these open source projects solved the scaling problem
with better quality data and how competing with open source is a losing battle. I love
every single word of this letter. And this is where your contribution steps in. [15]
There is an open letter from the Mozilla Foundation, the guys that make Firefox,
that calls for opening up the source code and science of artificial intelligence. This letter
was signed by Andrew Ng, of course, but also by Jimmy Wales, the founder of Wikipedia, by folks
from Creative Commons, the Electronic Frontier Foundation, the Linux Foundation, academia,
and even a few souls from the big tech. [20] This letter got next to zero coverage in the
media. But it’s clear this is what the big tech fears and wants to prevent with their
lobbying power. They don’t want regulators to realize that open source AI might be better,
more equitable and safer in the long term. Open source takes power and control away from the top
players and gives it to anyone with a laptop. Open source allows public scrutiny and accountability.
It’s what allows researchers, experts, journalists and users to audit, question and verify what’s
going on. This is what can earn people’s trust because it allows everyone to participate in
making it better rather than just trusting a selection of executives to do what’s best for
humanity after they serve their shareholders. This is where you can play a role. You can sign
this letter too. And you can also support open source projects that work on democratizing access
to artificial intelligence. Rather than paying for premium subscriptions for proprietary AI models,
use and donate to open source ones instead. There is tons of them available. By using them,
you are taking control of this technology, you are protecting your privacy and are
enabling everyone to benefit equally. We also need to wage this battle politically.
Open source is a grassroots movement and when crucial legislation is being crafted we need
to let our voices be heard. The government has the power to craft legislation that can
kill open source. They are already doing so in the US and Europe. It’s important that you
take a stance whenever your state or country is making decisions about this. Sign letters
and petitions that call for recognition and protection of open source principles
with public access and oversight. [22] There is tons more that I gotta cover about
this. Rabbit holes that reveal the true power of billionaire lobby. For now, if you like what I do,
please support me on Patreon and watch another one of my videos. I have no sponsors and my
ad income doesn’t pay for my work so I am dependent on your support. Thank you.