Why you shouldn't believe the AI extinction lie

The Hated One
3 May 2024 · 10:30

Summary

TL;DR: The video script discusses the manipulation behind the push for AI regulation by powerful corporations. It argues that the portrayal of AI as an existential threat is exaggerated and used to justify strict licensing and control over AI development, favoring large tech companies. The script highlights the importance of open-source AI, which allows for public scrutiny and equitable access, and calls for support for open-source projects to democratize AI. It also urges viewers to engage politically to protect open-source principles in AI legislation.

Takeaways

  • There's a push to treat AI as an existential threat, akin to nuclear war or a global pandemic, to justify urgent and exclusive control by a select few 'good guys'.
  • Despite the fear-mongering, the same voices argue for accelerating AI development, as long as it stays under the control of a select few so it doesn't fall into the 'wrong hands'.
  • A conflict is emerging between those who want AI to be tightly controlled and those advocating for open and accessible AI for all, with the latter being the more righteous cause according to the script.
  • The year 2023 marked a significant surge in AI's mainstream presence with the release of GPT-4 and a massive lobbying effort by big tech to influence AI regulation.
  • Big tech and other industries spent $1 billion on lobbying in 2023, a year that saw a dramatic increase in AI-lobbying organizations, aiming to shape AI regulation to their advantage.
  • The lobbying led to a bipartisan bill in the US Senate that proposed federal regulation of AI, requiring companies to register, seek licenses, and be monitored by federal agencies.
  • Such regulation would likely end open-source AI, as companies would be unwilling to grant open access to models that could hold them liable for misuse.
  • Open-source AI, which allows free use and distribution of software, is under threat from the proposed licensing regime, which favors large, closed, proprietary models.
  • The script questions the validity of claims that superintelligent AI is possible or that AI intelligence will increase indefinitely, suggesting that current AI models are nearing a ceiling.
  • There's an ideological push by a billionaire group promoting AI as an extinction risk, which is criticized as a means to influence policy and maintain control over AI development.
  • A counter-argument is presented by scientists and researchers, including Professor Andrew Ng, who oppose the fear tactics used by big tech and advocate for open-source AI.
  • The script calls for public support of open-source AI, political activism to protect open-source principles, and participation in shaping AI legislation to prevent monopolization by a few powerful entities.

Q & A

  • What is the main concern expressed about the development of AI in the transcript?

    - The main concern is that there is a powerful motivation to portray AI as an existential threat, similar to nuclear war or a global pandemic, and that a conflict is arising between those who want to keep AI closed off and tightly controlled and those who want it to be open and accessible to all.

  • What was the significant event in 2023 regarding AI mentioned in the transcript?

    - In 2023, AI exploded into the mainstream with the release of GPT-4, which led to chatbots, generative images, and AI videos flooding the Internet.

  • How did big tech companies respond to the rise of AI in 2023?

    - Big tech companies pooled hundreds of organizations into a massive lobbying campaign directed at the US federal government, with the number of AI-lobbying organizations increasing from 158 in 2022 to 450 in 2023.

  • What was the outcome of the lobbying efforts by big tech companies in 2023?

    - The lobbying efforts resulted in a bipartisan bill proposed in the US Senate that would have the federal government regulate artificial intelligence nationwide, creating a new authority that any company developing AI would have to register with and seek a license from.

  • What is the potential impact of the proposed AI regulation on startups and open source AI?

    - The proposed regulation could mark the end of open source AI, as new startups would struggle to comply with a strict licensing regime, and nobody would want to give open access to an AI model that could hold them liable for abuse.

  • What is the definition of 'open source' as mentioned in the transcript?

    - Open source means that anyone can use, modify, or distribute the software freely, without having to ask the author's permission.

  • What role did the Future of Life Institute play in the narrative around AI?

    - The Future of Life Institute has promoted the idea that AI poses an existential threat and was behind the letter, signed by high-profile figures like Elon Musk, calling for a pause on AI development.

  • What is the counter-argument to the idea that AI is an existential threat?

    - The counter-argument is that there is no proof or consensus that future superintelligence is possible, and that current AI models are reaching a ceiling due to limitations in training data and increasing computational costs.

  • What is the role of billionaire philanthropies in the push for AI regulation?

    - Billionaire philanthropies are bankrolling research, YouTube content, and news coverage that pushes the idea of AI as an extinction risk, persuading governments to focus on future hypothetical threats while trusting the philanthropists' own 'good' AI development.

  • What is the stance of Professor Andrew Ng on the proposed AI regulation and its impact on open source AI?

    - Professor Andrew Ng rejects the idea that AI could pose an extinction-level threat and believes that big tech is using fear to damage open source AI, since open source would give anyone open access to the technology.

  • What is the solution proposed in the transcript to prevent big tech from monopolizing AI development?

    - The solution proposed is to support open source AI projects that democratize access to artificial intelligence, sign letters and petitions calling for recognition and protection of open source principles, and participate in the political process to ensure legislation does not kill open source.

  • What is the significance of the leaked Google engineer document mentioned in the transcript?

    - The leaked document reveals that both Google and OpenAI are losing the AI arms race to open source, which has developed smaller-scale models more appropriate for end users at a lower cost, suggesting that competing with open source is a losing battle.

Outlines

00:00

AI as an Existential Threat and Lobbying Efforts

The paragraph discusses the narrative that AI is a significant existential threat, akin to nuclear war or a global pandemic, which some believe should be met with urgency. That same narrative argues not for halting AI development but for accelerating it, so long as it stays controlled by 'good guys' to prevent misuse. The speaker, 'the Hated One', criticizes this manipulation of public opinion in favor of powerful corporations. The year 2023 is highlighted as a turning point for AI's mainstream presence, with GPT-4's release. The paragraph details an unprecedented lobbying campaign by big tech, joined by a diverse range of industries, that spent heavily to influence AI regulation in its favor. The goal was to create a federal authority that would oversee AI development through licensing and monitoring, which critics argue would stifle innovation and open-source AI, benefiting only large corporations.

05:02

The Battle for Open AI Development and the Role of Billionaires

This paragraph delves into the debate surrounding open and closed AI development. It describes the opposition between those who advocate for strict control over AI and those who support open access. The narrative suggests that powerful entities are pushing for control over AI through legislation and fearmongering about its potential risks. The speaker points out that there is no consensus on the possibility of a superintelligent AI and criticizes the idea that current AI models could indefinitely increase in intelligence. The paragraph also exposes a billionaire-backed movement that promotes AI as an extinction-level threat while simultaneously advocating for trust in their own AI development. The situation is framed as a case of regulatory capture, where big tech and aligned groups have stepped into a power vacuum to shape policy in their favor.

10:02

šŸ›”ļø The Fight for Open Source AI and Public Participation

The final paragraph emphasizes the importance of open source AI and the threat posed by big tech's lobbying efforts to monopolize the field. It mentions a leaked document from a Google engineer acknowledging that open source AI is gaining ground due to its focus on smaller, more user-appropriate models. The paragraph highlights an open letter from the Mozilla Foundation, signed by influential figures, advocating for the opening of AI's source code and science. The letter calls for open access and public oversight, which contrasts with big tech's push for proprietary models. The speaker encourages viewers to support open source AI and to participate in political advocacy to protect open source principles, suggesting that public involvement is crucial in shaping legislation that could impact the future of AI development.

Keywords

AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video's context, AI is portrayed as a powerful technology that some view as an existential threat, while others see it as a tool for advancement. The script discusses the potential dangers of AI and the debate over regulating its development.

Existential threat

An existential threat is a danger or risk that poses the potential to completely destroy or nullify something, often used to describe threats to human existence. The video suggests that there is a powerful motivation to consider AI as an existential threat, likening it to the urgency of nuclear war or a global pandemic, to justify strict control and regulation.

Regulation

Regulation refers to the rules and directives made and maintained by an authority. In the script, the big tech companies are said to lobby for AI regulation that suits their interests, which would involve licensing, monitoring, and liability for AI models. This is part of their strategy to control AI development and maintain a competitive advantage.

Open source AI

Open source AI denotes AI models and software whose source code is made available for anyone to use, modify, and distribute freely. The video argues that open source AI could be stifled by the proposed regulations, which would favor big tech companies with proprietary models and hinder the democratization of AI technology.

Lobbying

Lobbying is the act of attempting to influence decisions made by officials in government, typically legislators. The script describes an unprecedented lobbying effort by big tech companies to shape AI regulation in their favor, spending $1 billion to influence lawmakers and promote their interests.

Proprietary models

Proprietary models refer to products or technologies that are owned by a company and are not shared with others. In the context of AI, proprietary models are AI systems that are not open to the public and are controlled by a single entity. The video suggests that strict regulation could lead to a monopoly of AI by a few companies with proprietary models.

Fearmongering

Fearmongering is the act of deliberately spreading fear or alarm to manipulate people's emotions or actions. The video claims that big tech companies are using fearmongering tactics to convince policymakers and the public that AI poses an existential threat, thereby justifying their push for strict regulation.

OpenAI

OpenAI is a research laboratory that develops AI technologies with the stated goal of ensuring that AI's benefits are as widely and evenly distributed as possible. However, the script implies that OpenAI, along with other big tech companies, may be more focused on maintaining control over AI development rather than promoting open access.

Regulatory capture

Regulatory capture is a form of government failure that occurs when a regulatory agency, intended to act in the public interest, instead advances the commercial or political concerns of the industry or sector it is supposed to be regulating. The video suggests that big tech and effective altruists have filled a power vacuum in AI regulation, leading to a situation where they are shaping policies to serve their interests.

Public scrutiny and accountability

Public scrutiny and accountability refer to the process by which the actions and decisions of individuals or organizations are examined and questioned by the public to ensure transparency and responsibility. The video emphasizes the importance of open source AI for allowing public oversight, which can lead to greater trust and participation in the development and use of AI technologies.

Grassroots movement

A grassroots movement is one that is initiated and controlled by the people at the local level, rather than by centralized or hierarchical organizations. The video calls for a grassroots effort to support open source AI and to make sure that the development and regulation of AI technology are inclusive and democratic.

Highlights

There is a powerful motivation to keep you thinking that AI is an existential threat.

We should accelerate AI development as fast as we can, as long as it's controlled by the good guys.

A conflict is arising between those who want to keep AI closed off and those who want to leave it open and accessible.

In 2023, AI exploded into the mainstream with GPT-4, chatbots, generative images, and AI videos.

Big tech pooled hundreds of organizations in a massive campaign to lobby the US federal government, spending $1 billion on lobbying.

A bipartisan bill proposed in the US Senate would regulate AI nationwide, creating a new authority that companies developing AI would need to register with and seek a license from.

Strict licensing regimes would mark the end of open-source AI because nobody would want to give open access to their AI models that could hold them liable for abuse.

Both sides claim they are doing it for humanity, but only one of them is right.

There is no proof or consensus that future superintelligence is possible at all.

AI models are already reaching a ceiling in terms of training data and computational costs.

Billionaire philanthropies are pushing the idea of AI as an extinction risk to persuade governments to focus on future hypothetical threats.

The open-source community is getting ahead by focusing on smaller-scale models more appropriate for the end user.

The Mozilla Foundation's open letter calls for opening up the source code and science of artificial intelligence.

Open source allows public scrutiny and accountability, enabling everyone to participate in making it better.

We need to wage this battle politically to protect open source principles with public access and oversight.

Transcripts

00:00

There is a powerful motivation to keep you thinking that AI is an existential threat. [0] That we should treat it with the same level of urgency as a nuclear war or a global pandemic. [1] And yet, we shouldn't stop developing AI. We should accelerate it as fast as we can, as long as it's the select few good guys who get to control it. We must not let AI fall into the wrong hands. [2] But there is a growing opposition to this movement. A conflict is arising between those who want to keep AI closed off and tightly controlled and those who want to leave it open and accessible to all. Even though both sides claim they are doing it for humanity, only one of them is right. This is an arms race over who gets to dominate AI development and who will be left out.

I am the Hated One, and I make explainer essays like this one, so far still without any sponsors… or money… or friends… So let me show you how you are being manipulated into handing over all of AI technology to some of the most powerful corporations.

00:55

2023 was the year when AI exploded into the mainstream. GPT-4 had just been released, with chatbots, generative images, and AI videos flooding the Internet like a hurricane of sensationalism. But behind closed doors, big tech pooled hundreds of organizations into a massive campaign to lobby the US federal government. The world has never seen such an organized lobbying effort. The number of AI-lobbying organizations spiked from 158 in 2022 to 450 in 2023, and they didn't just include the usual big tech culprits, but chip makers like AMD, media moguls like Disney, and big pharma like AstraZeneca. In total, they spent $1 billion on lobbying. $1 billion that, among other things, went toward persuading lawmakers to get AI regulated precisely how they wanted it. [3] [12]

So what did they want? Well, all of that lobbying culminated in a bipartisan bill proposed in the US Senate. The bill would have the federal government regulate artificial intelligence nationwide. It would create a new authority that any company developing AI would have to register with and seek a license from. License is just a different word for permission. They would have to be monitored and audited by federal agencies, and they would be held liable for any harm caused by the use of their AI models. [4] [5] [6] [14]

Which makes you ask yourself: how many new startups would have the funds to comply with such a strict licensing regime? This would mark the end of open source AI, because nobody would want to give anyone open access to an AI model that could hold them liable for abuse. Only big, closed, proprietary models would survive this. [7] [4]

Oh, I am gonna be mentioning open source quite a lot. Open source simply means that anyone can use, modify, or distribute the software freely, without having to ask the author's permission. Yeah, we could have had that; they just chose not to. [8]

02:41

How could this legislative proposal even be crafted? Well, it wasn't by accident. The two US senators proposing the bill had multiple hearings with OpenAI, Microsoft and Anthropic, the biggest players in the industry. Their witness testimonies led to the drafting of the bill, which was later endorsed by an obscure organization called the Future of Life Institute. [5, 6] [10] You've never heard of them before, but you've heard of Elon Musk signing a letter calling for a 6-month pause on AI development or the world would end. That was the Future of Life Institute. [0] Of course, nobody actually paused AI development. Everyone who signed the letter went back to their work, developing AI faster than ever before. [11] But governments should totally "step in and institute a moratorium". Sam Altman didn't even sign this letter. [9] But he signed another one, from another obscure org called the Center for AI Safety. This one came with a simple statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". [1]

Both of these stunts were picked up by the media and landed well after years of conditioning the public to believe that AI poses an existential threat far greater than anything else. But what's the implication of treating AI with the same level of dread as literal nukes? You gotta prevent proliferation. You can't allow free and open access. Only a handful of players should be allowed to develop this technology, and they should keep it closed, confidential and proprietary. You can't allow this technology to fall into the wrong hands. [7] [13]

Which makes sense, if they are right. But they are not right. First of all, there is actually no proof or consensus that future superintelligence is possible at all. This is the first premise of an AI apocalypse, and it has no merit. [14b] Yes, there is a non-zero chance it could happen, just like there is a non-zero chance we could get invaded by aliens. [13]

04:39

The second premise claims that AI will continue increasing its intelligence indefinitely. This is false. Our current AI models are already reaching a ceiling: training data is running out. Quantity-wise, there is actually little space left for scaling. AI-generated data eventually becomes poisonous to the point that models deteriorate. Computation is becoming increasingly expensive, both in operational costs and resource costs. AI would need major scientific breakthroughs in order to significantly increase its intelligence from where it is now. [14b] [15] The conclusion is therefore also false. It's not certain at all that AI will come close to human intelligence levels, let alone surpass them. We shouldn't act as though AI is a threat greater than a pandemic or climate change.
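(A quick aside on the "AI-generated data becomes poisonous" claim: the intuition is that each model generation trains on the previous generation's output, so estimation errors compound and the learned distribution narrows. Below is a minimal toy sketch of that feedback loop, my own illustration under heavily simplified assumptions rather than anything from the video: fit a Gaussian "model" to a small sample, then retrain each generation only on the model's own output.)

    import numpy as np

    # Toy model-collapse loop (illustrative only): each generation's
    # "training corpus" is sampled from the model fitted to the previous
    # generation. Finite-sample estimation error compounds, and the learned
    # spread (std) drifts toward zero, i.e. the distribution collapses.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=20)  # small "human-written" corpus

    for generation in range(1, 51):
        mu, sigma = data.mean(), data.std()    # "training": fit mean and std
        data = rng.normal(mu, sigma, size=20)  # next corpus: model output only
        if generation % 10 == 0:
            print(f"generation {generation:2d}: learned std = {sigma:.3f}")

(Real generative models are far more complicated, but this is the basic mechanism behind reported degradation when models are trained on synthetic data: rare cases vanish first, and diversity shrinks with each generation.)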

05:19

There is actually a powerful billionaire group that is bankrolling research, YouTube content, and news coverage pushing the idea of AI as an extinction risk. It's an ideological monolith of the longtermist effective altruism movement, which is a rabbit hole so deep it will be its own video, so let me know if you want it. But to give you an abridged version: billionaire philanthropies are paying fellows and staffers who work closely with governments. They are working to persuade them to focus on future, hypothetical threats of AI, while simultaneously asking to be trusted that the AI they themselves are developing will be the good guy. Fear AI, except when it's our AI; then you should trust us completely. What we have here is a fantastic case of regulatory capture. Nobody on the regulators' side had enough expertise, so this organized group of big tech companies and effective altruists stepped in to fill the power vacuum. [4] [12] [16] [17] [18]

06:14

But there is a growing number of people who stand strongly against all of this. They say that if we do this, if we allow licensing and strict regulation like big tech lobbies for, it will be the end of open access to this technology. It will lock almost everyone out of AI development and will leave only the few powerful incumbents in the game, with closely guarded proprietary AI models. Open source alternatives that could be distributed for free would be regulated out of existence. [4]

Professor Andrew Ng is someone you don't see sensational headlines about too often. But he is a key figure. He is the one who taught Sam Altman of OpenAI, and he stood behind the AI projects of Google, Baidu and Amazon. And now he says that big tech is fearmongering policymakers into drafting legislation that would kill their competition. He rejects the idea that AI could pose an extinction-level threat, and he thinks big tech is using fear to damage open source AI, because open source would mean anyone would have open access to this technology. [7] [13] [21] He's not alone in this thought. There is a growing counter-faction of scientists and researchers who are also calling out big tech's true motivation. They too argue that this is just an attempt to hijack regulation to entrench incumbent AI companies and to focus policies on future existential dangers instead of addressing current and immediate problems. They warn that the licensing regime big tech is calling for would monopolize AI development, as they would be the only ones able to accommodate it. [19] [4]

07:33

So what is the solution then? How can we prevent this small group of the most powerful companies in the world from capturing the AI market for themselves? There is a secret document, written internally by a Google engineer, that leaked online. The engineer says that both Google and OpenAI are losing the AI arms race to a third faction: open source. This document is a beautiful read of a terrified mind that has realized they've been doing AI wrong all along. It details how Google was asleep at the wheel while the open source community got way ahead of the game by focusing on smaller-scale models more appropriate for the end user. He lists multiple open source AI projects that do what Google's or OpenAI's large models do with comparable quality but at a lower cost, how these open source projects solved the scaling problem with better-quality data, and how competing with open source is a losing battle. I love every single word of this letter. And this is where your contribution steps in. [15]

08:29

There is an open letter from the Mozilla Foundation, the guys who make Firefox, that calls for opening up the source code and science of artificial intelligence. This letter was signed by Andrew Ng, of course, but also by Jimmy Wales, the founder of Wikipedia, by folks from Creative Commons, the Electronic Frontier Foundation, the Linux Foundation, academia, and even a few souls from big tech. [20] This letter got next to zero coverage in the media. But it's clear this is what big tech fears and wants to prevent with its lobbying power. They don't want regulators to realize that open source AI might be better, more equitable and safer in the long term. Open source takes power and control away from the top players and gives it to anyone with a laptop. Open source allows public scrutiny and accountability. It's what allows researchers, experts, journalists and users to audit, question and verify what's going on. This is what can earn people's trust, because it allows everyone to participate in making it better, rather than just trusting a selection of executives to do what's best for humanity after they serve their shareholders.

This is where you can play a role. You can sign this letter too. And you can also support open source projects that work on democratizing access to artificial intelligence. Rather than paying for premium subscriptions to proprietary AI models, use and donate to open source ones instead. There are tons of them available. By using them, you are taking control of this technology, you are protecting your privacy, and you are enabling everyone to benefit equally.
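(To make "use open source ones instead" concrete, here is a minimal sketch of running a small open-weight model locally with the Hugging Face transformers library. The specific model name is my own pick of one freely licensed example, not a project the video endorses; any open model would do.)

    from transformers import pipeline

    # Local inference with an open-weight model: no proprietary API, no
    # subscription, and the weights themselves are open to public scrutiny.
    # TinyLlama is one small, permissively licensed example (my assumption;
    # swap in any open model you prefer).
    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    )

    result = generator("Open source AI matters because", max_new_tokens=40)
    print(result[0]["generated_text"])

(Running models locally like this also keeps your prompts on your own machine, which is the privacy point the script makes.)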

09:53

We also need to wage this battle politically. Open source is a grassroots movement, and when crucial legislation is being crafted, we need to let our voices be heard. Governments have the power to craft legislation that can kill open source, and they are already doing it in the US and Europe. It's important that you take a stance whenever your state or country is making decisions about this. Sign letters and petitions that call for recognition and protection of open source principles, with public access and oversight. [22]

There is tons more I gotta cover about this: rabbit holes that reveal the true power of the billionaire lobby. For now, if you like what I do, please support me on Patreon and watch another one of my videos. I have no sponsors and my ad income doesn't pay for my work, so I am dependent on your support. Thank you.
