Elon Musk sues ChatGPT-maker OpenAI | BBC News

BBC News
1 Mar 2024 · 06:04

Summary

TL;DR: Elon Musk is suing OpenAI, a company he helped found, alleging that Microsoft has effectively turned it into a subsidiary by investing billions. Musk left OpenAI in 2018 and warns that unfettered AI could threaten humanity. There are questions about tech giants controlling powerful AI, given their poor record of self-regulating social media. Governments are trying to govern AI, but the pace of change may outstrip them. There are fears over weaponized AI, such as deepfakes influencing elections, and uncertainty over whether governments have the tools to address this.

Takeaways

  • 😲 Elon Musk is suing OpenAI for breach of contract, accusing them of prioritizing profits over responsible AI development after Microsoft's investment
  • 😠 Musk says Microsoft has effectively turned OpenAI into a subsidiary, but OpenAI and Microsoft deny this
  • 😒 US regulators are investigating if Microsoft's OpenAI investment raises competition concerns
  • 😬 Musk warns unfettered AI could pose an existential threat to humanity
  • 😥 Microsoft's acquisitions raise worries that it is suffocating the space for AI innovation
  • 😣 Big tech's poor regulation of social media raises concerns about their ability to responsibly govern AI
  • 😰 AI technology is advancing faster than government regulation and oversight
  • 😓 A few big tech firms may soon have a monopoly on cutting-edge AI due to compute and energy requirements
  • 😑 Deepfakes and AI could be weaponized to spread disinformation and interfere in elections
  • 😟 There are concerns about the British government's ability to protect upcoming elections from AI threats

Q & A

  • What is Elon Musk accusing OpenAI of?

    -Elon Musk is accusing OpenAI of putting profit before its founding principle of developing AI responsibly. He claims OpenAI has effectively become a subsidiary of Microsoft after Microsoft invested billions into the company.

  • What are the antitrust concerns regarding Microsoft's acquisition of AI companies?

    -Regulators are investigating if Microsoft's investments and acquisitions in the AI space, like their recent purchase of an AI company in France, are anti-competitive and could stifle innovation in the industry.

  • What are some of the issues caused by big tech companies self-regulating social media platforms?

    -Self-regulation has meant companies have not taken responsibility for issues like the well-being of users or the impact on political systems and democracy. This has left citizens and systems worse off.

  • How could AI governance by governments help address issues seen in social media regulation?

    -Governments are trying to regulate AI development with more urgency given the lessons learned from the hands-off approach taken for social media platforms. More assertive governance could help address emerging issues.

  • What are the concerns about the future consolidation of AI development?

    -The compute power and energy required to develop advanced AI may mean that only a few large, systemically important companies and governments are capable of working at the cutting edge. This could require more assertive governance.

  • What new AI disinformation threats exist for upcoming elections?

    -Advances in AI mean bots and fake personas can be created more efficiently to spread disinformation. There are also concerns about the rise of hard-to-detect deepfakes during election campaigns.

  • How prepared are governments for new AI-enabled disinformation campaigns?

    -There are concerns governments do not yet have the tools to properly address emerging disinformation threats enabled by advances in AI. The pace of technological change also outpaces regulatory and policy responses.

  • What previous government initiatives have aimed to regulate AI development?

    -The UK government previously hosted a global AI regulation summit, signalling an intent to lead in this policy area. However, concerns remain about whether policy is keeping pace with technological change.

  • What historical examples show the impact of uncontrolled new technologies?

    -The lack of Internet regulation allowed the emergence of issues like weaponized disinformation and threats to democracy. There are fears unchecked AI development could similarly have broad societal impacts.

  • What role do tech companies have in addressing AI-enabled disinformation?

    -Tech companies will need to respond quickly to new types of information manipulation as governments form their policy responses. Public-private cooperation will likely be necessary.

Outlines

00:00

😕 Elon Musk sues OpenAI over profit concerns

Elon Musk is suing OpenAI for breaching their contract by prioritizing profits over AI safety principles. Musk claims Microsoft has effectively turned OpenAI into a subsidiary by investing billions. OpenAI and Microsoft deny this but regulators are investigating. Musk warns unchecked AI could threaten humanity.

05:02

😟 Concerns over Microsoft dominating and restricting AI innovation

There are worries that Microsoft, already under investigation over competition laws, is stifling AI innovation by acquiring emerging companies. Key questions need to be addressed about whether AI will benefit or harm humanity, and lessons should be learned from the lack of social media regulation.

Keywords

💡Artificial intelligence

Artificial intelligence (AI) refers to computer systems and machines that can perform tasks that typically require human intelligence. The video discusses concerns over the unchecked development of powerful AI, which could potentially cause harm. For example, Elon Musk warns that the unfettered use of 'generative artificial intelligence' poses an existential threat.

💡Governance

Governance refers to the rules, policies, and processes for directing and controlling AI. The video discusses the need for better governance and regulation of AI development, so it benefits humanity rather than causing harm. For example, governments have been slow to regulate social media companies, leading to negative impacts.

💡Deepfakes

Deepfakes refer to highly convincing media (images, audio, video) that are artificially generated using AI to impersonate real people and say or do things they didn't actually say or do. The video warns that deepfakes could be used to manipulate elections by discrediting candidates.

💡Disinformation

Disinformation refers to deliberately false or misleading information spread to deceive people. The video discusses concerns over AI being used to efficiently spread disinformation at scale to weaponize the information space and interfere in democratic processes like elections.

💡Social media companies

Social media companies like Facebook, Twitter, and TikTok host user-generated content on their platforms. The video argues these companies have done a poor job self-regulating to protect user wellbeing and democratic systems, reinforcing the need to govern AI development.

💡Business models

Business models refer to how companies generate value and revenue. The video suggests leading-edge AI may be dominated by big tech companies and governments because it requires massive computing resources, so their incentives and business models will shape governance.

💡Tech bro

Tech bro is a critical term for wealthy technology company leaders. It suggests a detached, profit-driven mentality that fails to consider societal impacts. The video references UK Prime Minister Rishi Sunak's tech bro background as a concern regarding AI regulation.

💡Crisis

Crisis refers to a destabilizing event that generates urgency to act. The video suggests governance often lags technology advancements until a crisis forces responsive policies. For example, the cybersecurity industry emerged after major hacks occurred.

💡Regulation

Regulation refers to rules and directives imposed by government authorities to control the development of technology like AI. The video discusses different regulatory approaches but overall emphasizes the need for regulation to prevent harm.

💡Existential threat

An existential threat represents danger so severe it could destroy or critically impair human existence. Elon Musk argues unfettered AI progress poses an existential threat to the future of humanity itself.

Highlights

Elon Musk is suing OpenAI for breach of contract

Musk says Microsoft has turned OpenAI into a subsidiary by plowing billions into it

Regulators are investigating the parameters of Microsoft's OpenAI investment

Annabelle raises concerns about large organizations creating revolutionary AI technology but not allowing others to access it

Questions need to be asked about whether AI will benefit or harm humanity

Governments have been very slow to regulate new technologies like the internet

Social media companies have done a horrible job self-regulating and democracy is worse off

AI technology is advancing much faster than governments can regulate

In 3-5 years, only a handful of big tech companies may have the resources to be at the cutting edge of AI

Assertive governance will be required if a few big techs control advanced AI

Senator Warner has evidence Russia can now use AI to create social media bots and fake personas much more efficiently

Deepfakes like the fake Biden robocall before the New Hampshire primary will increase

The cyber security industry emerged after cyber attacks began; the same will likely happen with AI

The UK government aims to lead on AI regulation but may lack the tools to address issues like weaponized disinformation

The pace of AI change is worrying, and the government may not be able to address threats before the next election

Transcripts

Elon Musk is suing OpenAI for breach of contract. The billionaire entrepreneur says the US firm is now putting profit before its founding principle of developing AI responsibly. Mr Musk, who helped set up the firm, says Microsoft has plowed billions into OpenAI and has in effect turned it into a subsidiary. The two companies deny the claim, but US regulators are investigating the parameters of Microsoft's investment. Mr Musk left OpenAI in 2018 to set up his own rival. He has warned before that the unfettered use of generative artificial intelligence could pose an existential threat.

Microsoft just bought one of the emerging AI companies in France this week. They're already under investigation, Annabelle, over competition laws. Do you think Microsoft is starting to suffocate the space that there is for innovation?

Certainly, I think it's problematic when it comes to artificial intelligence. It's a field where we need as many contenders as possible able to climb the ladder, and fewer large organizations which are creating this revolutionary technology but pulling the ladder up behind them. I think there are serious questions to be asked about Elon Musk's motivations, about the world's richest man complaining about profit-making over benefit to humanity, which some may view as a little self-serving. That said, I think there are very serious questions that we need to ask at the government level across the world about whether AI is going to benefit humanity or whether it is going to harm mankind. And if we have fears over the latter, which certainly some politicians have voiced, then how are they going to regulate it? Because in the UK at least, they were very slow to regulate the internet, and when they did, eventually, the Online Safety Bill was a very wide-reaching piece of legislation which arguably encompassed far more than it needed to, without addressing the fundamental concerns that we have over safety online.

Yeah, I was about to ask you, Ian, whether actually maybe it's a better thing that the big companies are in charge of this and controlling the rollout of this extremely powerful technology. But Annabelle makes a very good point that the big companies that have had charge of social media, Meta, Twitter, TikTok, these sorts of big companies, they've not done a very good job of that. So maybe we'd be better going the other way.

They've done a horrible job of it. Social media has basically regulated itself, which meant that it's had no interest in taking responsibility for the well-being of either the people that are on their platforms or the political systems that they operate in, and democracy and our citizens are worse off as a consequence. Now, when it comes to AI, certainly the governments are trying to get a move on governance with much greater urgency, in part because of the lessons that they have learned from the hands-off approach on social media, and companies are trying, with varying degrees of success, to cooperate with those governments. But it's not at all clear that it's going to succeed. The technology is moving a lot faster than the governments are, and that means the business models are going to matter a lot more for the governance. I do worry that, when we think about the future of AI in three or five years' time, it's quite possible that with the amount of compute required, and the amount of energy required to run that compute, it's only going to be perhaps a very small number of systemically important companies, maybe even working with governments, that will be capable of operating at the cutting edge. That will require a much more assertive governance, if that's where the technology goes.

Just to get a quick thought from both of you, because we're a bit pressed for time: one social media post that I saw from Miles Taylor, one of our panelists this week, said he's been talking to Senator Mark Warner, Ian, and that they have a dossier in their hands showing the Russians are now able to create these bots and personas on social media using AI in a much more efficient, much more dangerous way than they were able to in 2016 and 2020. The concern is that Senator Warner thinks Congress has just not really had a conversation about how they are going to stop that or how they are going to tackle it.

That's right. It's going to require a crisis first. We already saw one deepfake, with the robocall pretending to be Biden in the run-up to the New Hampshire primary. There's going to be a lot more of this, and the companies are going to have to respond very, very quickly with governments as things start to break. We had that with cyber: an entire cybersecurity industry came up after things started to break. That's going to have to happen with AI.

Annabelle, what about this side? Because I know that there is a sort of intelligence-led approach to it, and MI5, I think, are involved in it. Are you convinced by what the government is saying it will do to protect the upcoming British election?

I certainly have concerns. The country at the moment is being run by tech bro Rishi Sunak, who has wanted to try and lead the world on AI regulation; he hosted a global summit last autumn to that effect. But there are very serious concerns. What really worries me is the pace of change. It was ten years ago that we were worried about the internet and its ability to be weaponized in order to spread disinformation and discredit certain candidates in an election. Now we have the rise of AI and, as Ian says, deepfakes, which are going to convince members of the public at a speed that may not be corrected before they go to the polls. So certainly there's a very real concern there, and I'm not sure that the government has the tools yet to address it.