Elon Musk sues ChatGPT-maker OpenAI | BBC News
Summary
TL;DR: Elon Musk is suing OpenAI, a company he helped found, alleging Microsoft has turned it into a subsidiary by investing billions. Musk left OpenAI in 2018 and warns unfettered AI could threaten humanity. There are questions around tech giants controlling powerful AI, given their poor regulation of social media. Governments are trying to govern AI but the pace of change may outstrip them. There are fears over weaponized AI like deepfakes influencing elections, and uncertainty whether governments have the tools to address this.
Takeaways
- 😲 Elon Musk is suing OpenAI for breach of contract, accusing them of prioritizing profits over responsible AI development after Microsoft's investment
- 😠 Musk says Microsoft has effectively turned OpenAI into a subsidiary, but OpenAI and Microsoft deny this
- 😒 US regulators are investigating if Microsoft's OpenAI investment raises competition concerns
- 😬 Musk warns unfettered AI could pose an existential threat to humanity
- 😥 Microsoft's acquisitions raise worries that it is suffocating the space for AI innovation
- 😣 Big tech's poor regulation of social media raises concerns about their ability to responsibly govern AI
- 😰 AI technology is advancing faster than government regulation and oversight
- 😓 A few big tech firms may soon have a monopoly on cutting edge AI due to compute and energy requirements
- 😡 Deepfakes and AI could be weaponized to spread disinformation and interfere in elections
- 😟 There are concerns about the British government's ability to protect upcoming elections from AI threats
Q & A
What is Elon Musk accusing OpenAI of?
-Elon Musk is accusing OpenAI of putting profit before its founding principle of developing AI responsibly. He claims OpenAI has effectively become a subsidiary of Microsoft after Microsoft invested billions into the company.
What are the antitrust concerns regarding Microsoft's acquisition of AI companies?
-Regulators are investigating if Microsoft's investments and acquisitions in the AI space, like their recent purchase of an AI company in France, are anti-competitive and could stifle innovation in the industry.
What are some of the issues caused by big tech companies self-regulating social media platforms?
-Self-regulation has meant companies have not taken responsibility for issues like the well-being of users or the impact on political systems and democracy. This has left citizens and systems worse off.
How could AI governance by governments help address issues seen in social media regulation?
-Governments are trying to regulate AI development with more urgency given the lessons learned from the hands-off approach taken for social media platforms. More assertive governance could help address emerging issues.
What are the concerns about the future consolidation of AI development?
-The compute power and energy required to develop advanced AI may mean that only a few large, systemically important companies and governments are capable of working at the cutting edge. This could require more assertive governance.
What new AI disinformation threats exist for upcoming elections?
-Advances in AI mean bots and fake personas can be created more efficiently to spread disinformation. There are also concerns about the rise of hard-to-detect deepfakes during election campaigns.
How prepared are governments for new AI-enabled disinformation campaigns?
-There are concerns governments do not yet have the tools to properly address emerging disinformation threats enabled by advances in AI. The pace of technological change also outpaces regulatory and policy responses.
What previous government initiatives have aimed to regulate AI development?
-The UK government previously hosted a global AI regulation summit, signalling an intent to lead in this policy area. However, concerns remain about whether policy is keeping pace with technological change.
What historical examples show the impact of uncontrolled new technologies?
-The lack of Internet regulation allowed the emergence of issues like weaponized disinformation and threats to democracy. There are fears unchecked AI development could similarly have broad societal impacts.
What role do tech companies have in addressing AI-enabled disinformation?
-Tech companies will need to respond quickly to new types of information manipulation as governments form their policy responses. Public-private cooperation will likely be necessary.
Outlines
😕 Elon Musk sues OpenAI over profit concerns
Elon Musk is suing OpenAI for breaching their contract by prioritizing profits over AI safety principles. Musk claims Microsoft has effectively turned OpenAI into a subsidiary by investing billions. OpenAI and Microsoft deny this but regulators are investigating. Musk warns unchecked AI could threaten humanity.
😟 Concerns over Microsoft dominating and restricting AI innovation
There are worries Microsoft is stifling AI innovation by acquiring emerging companies, already facing investigation over competition laws. Key questions need addressing around ensuring AI benefits rather than harms humanity. Lessons should be learned from lack of social media regulation.
Keywords
💡Artificial intelligence
💡Governance
💡Deepfakes
💡Disinformation
💡Social media companies
💡Business models
💡Tech bro
💡Crisis
💡Regulation
💡Existential threat
Highlights
Elon Musk is suing OpenAI for breach of contract
Musk says Microsoft has turned OpenAI into a subsidiary by plowing billions into it
Regulators are investigating the parameters of Microsoft's OpenAI investment
Annabelle raises concerns about large organizations creating revolutionary AI technology but not allowing others to access it
Questions need to be asked about whether AI will benefit or harm humanity
Governments have been very slow to regulate new technologies like the internet
Social media companies have done a horrible job self-regulating and democracy is worse off
AI technology is advancing much faster than governments can regulate
In 3-5 years, only a handful of big tech companies may have the resources to be at the cutting edge of AI
Assertive governance will be required if a few big techs control advanced AI
Senator Warner has evidence Russia can now use AI to create social media bots and fake personas much more efficiently
Deepfakes, like the fake Biden robocall before the New Hampshire primary, will increase
The cyber security industry emerged after cyber attacks began; the same will likely happen with AI
The UK government aims to lead on AI regulation but may lack the tools to address issues like weaponized disinformation
Panelists worry the pace of AI change may outstrip the government's ability to address threats before the next election
Transcripts
Elon Musk is suing OpenAI for breach of contract. The billionaire entrepreneur says the US firm is now putting profit before its founding principle of developing AI responsibly. Mr Musk, who helped set up the firm, says Microsoft has plowed billions into OpenAI and has in effect turned it into a subsidiary. The two companies deny the claim, but US regulators are investigating the parameters of Microsoft's investment. Mr Musk left OpenAI in 2018 to set up his own rival. He has warned before that the unfettered use of generative artificial intelligence could pose an existential threat.

Microsoft just bought one of the emerging AI companies in France this week, and they're already under investigation, Annabelle, over competition laws. Do you think Microsoft is starting to suffocate the space that there is for innovation?

Certainly, I think it's problematic when it comes to artificial intelligence. It's a field where we need as many contenders as possible able to climb the ladder, and fewer large organizations which are creating this revolutionary technology but pulling the ladder up behind them. I think there are serious questions to be asked about Elon Musk's motivations, about the world's richest man complaining about profit-making over benefit to humanity, which some may view as a little self-serving. That said, I think there are very serious questions that we need to ask at the government level across the world about whether AI is going to benefit humanity or whether it is going to harm mankind. And if we have fears over the latter, which certainly some politicians have voiced, then how are they going to regulate it? Because in the UK at least, they were very slow to regulate the internet, and when they did eventually, the Online Safety Bill was a very far, wide-reaching piece of legislation which arguably encompassed far more, without addressing the fundamental concerns that we have over safety online.

Yeah, I was about to ask you, Ian, whether actually maybe it's a better thing that the big companies are in charge of this and controlling the rollout of this extremely powerful technology. But Annabelle makes a very good point that the big companies that have had charge of social media, Meta, Twitter, TikTok, these sorts of big companies, they've not done a very good job of that. So maybe we'd be better going the other way?

They've done a horrible job of it. Social media has basically regulated itself, which meant that it's had no interest in taking responsibility for the well-being of either the people that are on their platforms or the political systems that they operate in, and democracy and our citizens are worse off as a consequence. Now, when it comes to AI, certainly the governments are trying to get a move on governance with much greater urgency, in part because of the lessons that they have learned from the hands-off approach on social media, and companies are trying, with various degrees of success, to cooperate with those governments. But it's not at all clear that it's going to succeed. The technology is moving a lot faster than the governments are, and that means the business models are going to matter a lot more for the governance. I do worry that when we think about the future of AI in three or five years' time, it's quite possible that, with the amount of compute required and the amount of energy required to run that compute, it's only going to be perhaps a very small number of systemically important companies, maybe even working with governments, that will be capable of operating at the cutting edge. That will require a much more assertive governance, if that's where the technology goes.

Just in terms of, maybe getting a quick thought from both of you because we're a bit pressed for time, there was one social media post that I saw from Miles Taylor, one of our panellists this week. He's been talking to Senator Mark Warner, Ian, and he said that they have a dossier in their hands showing the Russians are now able to create these bots and personas on social media using AI in a much more efficient, much more dangerous way than they were able to in 2016 and 2020. The concern is that Senator Warner thinks there just hasn't really been a conversation within Congress about how they're going to stop that or how they're going to tackle it.

That's right. It's going to require a crisis first. We already saw one deepfake with the robocall pretending to be Biden in the run-up to the New Hampshire primary. There's going to be a lot more of this, and the companies are going to have to respond very, very quickly with governments as things start to break. We had that with cyber: an entire cybersecurity industry came up after things started to break. That's going to have to happen on AI.

Annabelle, what about this side? Because I know that there is a sort of intelligence-led approach to it, and MI5, I think, are involved in it. Are you convinced by what the government is saying about protecting the upcoming British election?

I certainly have concerns. Now, the country at the moment is being run by tech bro Rishi Sunak, who has wanted to try and lead the world on AI regulation; he hosted a global summit last autumn to that effect. But there are very serious concerns. What really worries me is the pace of change. It was ten years ago that we were worried about the internet and its ability to be weaponized in order to spread disinformation, to discredit certain candidates in an election. Now we have the rise of AI and, as Ian says, deepfakes, which are going to convince members of the public at a speed that may not be corrected before they go to the polls. So certainly there's a very real concern there, and I'm not sure that the government has the tools yet to address it.