OpenAI Former Employees Reveal NEW Details In Surprising Letter...
Summary
TL;DR: California Senate Bill 1047, dubbed the 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act', has ignited debate within the AI industry. The bill seeks to regulate costly AI models, mandating safety assessments and compliance with audits. Critics fear it may stifle innovation and benefit large tech firms. OpenAI whistleblowers argue the regulation is necessary to prevent AI misuse, while others, including OpenAI's CEO, warn it could hamper California's AI progress. The debate underscores the difficulty of regulating rapidly evolving technology and the urgent need for adaptable frameworks that ensure safety without stifling innovation.
Takeaways
- The California Senate Bill 1047, also known as the 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act', is a legislative proposal aiming to regulate advanced AI models for safety and ethical deployment.
- The bill specifically targets AI models that require substantial investment, costing over $100 million to train, and mandates that developers conduct safety assessments and comply with annual audits and safety standards.
- A new regulatory oversight body, the 'Frontier Model Division' within the Department of Technology, would be responsible for ensuring compliance and could impose penalties for violations, including fines of up to 30% of a model's development costs.
- The bill has sparked controversy: some argue it is necessary to prevent potential AI harms, while critics fear it could stifle innovation and consolidate power among large tech companies.
- Critics, including tech companies and AI researchers, argue that the bill's focus on AI models rather than their applications could hinder innovation and place undue burdens on startups and open-source projects.
- The language of the bill is considered vague, leading to concerns about compliance and liability for developers.
- OpenAI's Chief Strategy Officer, Jason Kwon, has expressed mixed views on AI regulation, acknowledging the need for regulation while warning that SB 1047 could slow innovation and cause a brain drain from California.
- OpenAI whistleblowers, including former employees, have expressed concerns about the safety of AI systems, stating that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.
- A letter from OpenAI whistleblowers highlights the company's internal safety issues and premature deployment of AI systems, suggesting a lack of adherence to safety protocols.
- Anthropic's letter acknowledges the need for regulation and the challenge of keeping pace with rapidly advancing AI technology, calling for adaptable and transparent regulatory frameworks.
- The debate around SB 1047 underscores the broader issue of balancing innovation with safety and the difficulty of creating effective regulations in a fast-evolving field like AI.
Q & A
What is the California Senate Bill 1047?
-California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a legislative proposal aimed at regulating advanced AI models to ensure their safe development and deployment.
What are the key aspects of Senate Bill 1047?
-The key aspects of SB 1047 include targeting AI models that require substantial investment, specifically those costing over $100 million to train. It mandates that developers conduct safety assessments, certify that their models do not enable hazardous capabilities, and comply with annual audits and safety standards.
What is the role of the new Frontier Model Division within the Department of Technology?
-The Frontier Model Division within the Department of Technology would oversee the implementation of the regulations set by SB 1047. It is responsible for ensuring compliance and could impose penalties for violations, potentially up to 30% of the model's development costs.
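For a concrete sense of these thresholds, here is a minimal sketch in Python of the coverage and penalty arithmetic as this summary describes it. The function names and constants are illustrative assumptions for clarity, not language from the bill itself.

```python
# Hypothetical illustration of SB 1047's thresholds as described in this
# summary; names and structure are invented for clarity, not taken from
# the bill's actual text.
COVERAGE_TRAINING_COST_USD = 100_000_000  # models costing over $100M to train
MAX_PENALTY_RATE = 0.30                   # fines of up to 30% of development costs

def is_covered_model(training_cost_usd: float) -> bool:
    """A model falls under the bill if it cost over $100 million to train."""
    return training_cost_usd > COVERAGE_TRAINING_COST_USD

def max_penalty_usd(development_cost_usd: float) -> float:
    """Upper bound on fines for a violation, per the summary's figures."""
    return MAX_PENALTY_RATE * development_cost_usd

# Example: a model with $150M in training and development costs.
print(is_covered_model(150_000_000))   # True
print(max_penalty_usd(150_000_000))    # 45000000.0
```

Under the summary's figures, a covered model with $150 million in development costs could face penalties of up to $45 million for a violation.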
Why is Senate Bill 1047 considered controversial?
-Senate Bill 1047 is considered controversial because critics argue that it could stifle innovation and concentrate power among a few large tech companies. There are concerns about the bill's vague language, compliance, and liability for developers, and the potential for hindering innovation and placing undue burdens on startups and open source projects.
What are the concerns raised by OpenAI whistleblowers about the bill?
-OpenAI whistleblowers have raised concerns that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public. They argue that the bill is necessary to prevent potential harms from advanced AI and that the rapid advances of AI technology necessitate regulation.
What is the stance of OpenAI's Chief Strategy Officer, Jason Kwon, on AI regulation?
-Jason Kwon, OpenAI's Chief Strategy Officer, has stated that AI should be regulated and that this commitment remains unchanged. However, he has also expressed concerns that SB 1047 could threaten California's growth, slow the pace of innovation, and lead to a mass exodus of AI talent from the state.
What does the letter from OpenAI whistleblowers highlight about the company's safety practices?
-The letter from OpenAI whistleblowers highlights concerns about the company's safety practices, stating that the authors joined OpenAI to ensure the safety of powerful AI systems but resigned after losing trust in the company's ability to deploy AI systems safely, honestly, and responsibly.
What are the key points from Anthropic's letter regarding SB 1047?
-Anthropic's letter acknowledges the real and serious concerns with catastrophic risk in AI systems. It suggests that a regulatory framework that is adaptable to rapid change in the field is necessary and emphasizes the importance of transparent safety and security practices, incentives for effective safety plans, and public involvement in decisions around high-risk AI systems.
What is the main argument against the current approach to AI regulation as stated by Anthropic?
-Anthropic argues that the current approach to AI regulation is not keeping pace with the rapid advancements in AI technology. They believe that regulation strategies need to be adaptable and that the field is evolving so quickly that traditional regulatory processes are not effective.
What does the video suggest about the future of AI regulation?
-The video suggests that AI regulation is a complex and challenging issue. It implies that current regulatory efforts might not be sufficient to keep up with the rapid pace of AI development and that there might be a need for more adaptable and transparent regulatory frameworks. It also raises the possibility that a significant incident might be necessary to catalyze effective regulation.
Outlines
California's AI Regulation Debate
The California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is a legislative proposal that seeks to regulate advanced AI models with a focus on safety in their development and deployment. The bill has sparked controversy within the AI industry due to its potential impact on innovation. It targets AI models costing over $100 million to train, mandating safety assessments, certification against hazardous capabilities, and compliance with annual audits and safety standards. Critics argue that the bill's vague language could stifle innovation and concentrate power among large tech companies, while supporters believe it is necessary to prevent potential harms from advanced AI. The bill also proposes the creation of a Frontier Model Division within the Department of Technology to oversee compliance and impose penalties for violations.
Whistleblower Concerns on AI Safety
Whistleblowers from OpenAI have raised concerns about the safety and responsible deployment of AI systems, leading to their resignation from the company. They argue that developing frontier AI models without adequate safety precautions poses significant risks to the public. The whistleblowers are not alone in their concerns, as a consensus paper from 25 leading scientists also describes extreme risks from upcoming advanced AI systems. OpenAI's internal practices have been criticized for not aligning with their public safety stance, including premature deployment and security breaches. The whistleblowers advocate for public involvement in decisions around high-risk AI systems and support the provisions of SB 1047 that require transparency and protection for whistleblowers.
The Challenge of Regulating Rapidly Advancing AI
The rapid advancement of AI technology presents a significant challenge for regulation. While the need for regulation is urgent due to the potential risks AI systems pose, the field's fast pace makes it difficult to create effective and lasting regulatory frameworks. Companies like Anthropic recognize the need for adaptable regulation that can keep up with the rapid changes in AI. They propose a regulatory framework with transparent safety and security practices, incentives for effective safety plans, and accountability measures. The debate over SB 1047 reflects the broader struggle to balance innovation with the need for safety and accountability in AI development.
The Future of AI Regulation and Safety
The future of AI regulation is uncertain, with companies like OpenAI and Anthropic taking different stances on the proposed SB 1047. While Anthropic supports the need for regulation and the principles outlined in the bill, OpenAI has opposed it, raising questions about the strength of their commitment to safety. The fear of a mass exodus of AI developers from California due to strict regulations is seen as unfounded, with the belief that California remains the best place for AI research. The debate highlights the need for a careful balance between innovation and safety, with the recognition that regulation will likely lag behind development until a significant event prompts more stringent measures.
Keywords
California Senate Bill 1047
Artificial Intelligence (AI)
Safety Assessments
Regulatory Oversight
Whistleblowers
Innovation
Compliance
Anthropic
OpenAI
AGI (Artificial General Intelligence)
Public Involvement
Highlights
California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to regulate advanced AI models to ensure their safe development and deployment.
The bill has sparked controversy in the AI industry due to its potential impact on the future of AI, with differing opinions from companies like Anthropic and OpenAI.
SB 1047 targets AI models requiring substantial investment, specifically those costing over $100 million to train, mandating safety assessments and compliance with annual audits and safety standards.
A new Frontier Model Division within the Department of Technology would oversee the implementation of these regulations, ensuring compliance and imposing penalties for violations.
Critics argue that the bill's vague language could lead to concerns about compliance and liability, potentially stifling innovation and concentrating power among large tech companies.
OpenAI has laid out its reasoning for opposing the bill, while critics more broadly fear it could slow innovation and lead to a consolidation of AI development power.
OpenAI's Chief Strategy Officer, Jason Kwon, has made contrasting statements about the need for AI regulation, reflecting internal debate on the issue.
Sam Altman, OpenAI's CEO, has previously called for AI regulation but now opposes the bill, raising questions about the strength of OpenAI's commitments to safety.
The letter from OpenAI whistleblowers William Saunders and Daniel Kokotajlo highlights their safety concerns and reasons for resigning from OpenAI.
The whistleblowers argue that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.
Science published a consensus paper from 25 leading scientists describing extreme risks from upcoming advanced AI systems, indicating a growing concern in the scientific community.
The whistleblowers claim that OpenAI has not safely deployed their systems in the past, citing examples of premature deployment and security breaches.
Prominent safety researchers have left OpenAI, citing concerns about the company's approach to safety and the prioritization of products over safety culture.
SB 1047 requires publishing a Safety and Security protocol to inform the public about safety standards and protects whistleblowers who raise concerns.
Anthropic's letter acknowledges the need for regulation but suggests that SB 1047 may not be the most effective approach, proposing adaptable regulation as a better solution.
The letter from Anthropic emphasizes the importance of transparent safety and security practices and incentives to make safety plans effective in preventing catastrophes.
Dario Amodei, CEO of Anthropic, stresses the importance of having appropriate regulations in place to ensure the safe development of increasingly powerful AI systems.
The debate over SB 1047 reflects the broader challenges of regulating AI, including the rapid pace of development and the need for adaptable regulatory frameworks.
Transcripts
the California Senate Bill
1047 known as the Safe and Secure
Innovation for Frontier Artificial
Intelligence Models Act is a
legislative proposal aimed at regulating
Advanced AI models to ensure their safe
development and deployment now this has
been one of the most controversial
pieces of discussion going around the AI
industry and that's because there are
many different things that could really
impact the future of AI including
statements from anthropic a few open AI
whistleblowers and key industry figures
so in this video I'm going to dive into
all the key aspects and then we're going
to dive into why this is such a
contentious issue so the key aspects of
SB 1047 are as follows the bill targets AI
models that require substantial
investment specifically those costing
over $100 million to train it mandates
developers to conduct safety assessments
certify that their models do not enable
hazardous capabilities and comply with
annual Audits and safety standards
there's also regulatory oversight a new
frontier model division within the
department of Technology would oversee
the implementation of these regulations
this division would be responsible for
ensuring compliance and could impose
penalties for violation potentially up
to 30% of the model's development costs
now some individuals have argued that
bills like this are necessary to prevent
potential harms from Advanced AI while
critics are claiming that this could
stifle innovation and concentrate power
among a few large tech companies the
Bill's language is considered vague
leading to concerns about compliance and
liability for developers and many
critics including many tech companies
and AI researchers argue that the bill's
focus on the AI models themselves rather
than their applications could hinder
Innovation and place undue burdens on
startups and open source projects they
fear it could lead to a consolidation of
AI development power and slow down
progress in California now today there
was a letter from OpenAI whistleblowers in
which they explain their reasoning for
their position on this letter and their
positioning is driven by OpenAI's
recent statements surrounding the letter
you can see that Control AI
a Twitter account and organization
that is focused on controlling AI
and on its safety aspects
tweets this OpenAI's Chief Strategy
Officer Jason Kwon last week said that
we've always believed that AI should be
regulated and that commitment remains
unchanged however this week his
statements were quite different he says
that the AI Revolution is just the
beginning in California's unique status
of the global leader in AI is fueling
the state's economic dynamism SB 1047
would threaten that growth slow the pace
of innovation and lead California's
world-class engineers and entrepreneurs
to leave the state in search of Greater
opportunity elsewhere and interestingly
enough Sam Altman has clearly stated that
we do need AI regulation and this is
him talking in October of 2020 about how
these systems should be regulated you
know there's kind of a cohort in Silicon
Valley that's very worried about
what AI could do to humanity does that
concern you at all for sure um I think
it's going to be fine um I also think
it's like very bad thinking to not take
the apocalyptic downside very seriously
I am more optimistic than I used to be
that we can get through this I think
just saying like oh don't worry about it
it's going to be fine is a very bad
strategy I'm like super proud of the
safety team and the policy team that we
have at open Ai and there's like very
good technical work to do we're doing
some of it others are doing some we
should probably all do more about how
how we build these systems in a way
where they're very humanized you know
how can we have some sort of you know
way for people to feel confident that
technical experts are taking the
necessary safety precautions given the
consequences of potential mistakes or do
you think you know people should be able
to just trust no I don't I think there
has to be government and we're trying to
push for this as as much as we can yeah
and how so far have you found the the
interplay between governments and I do
you work with government regularly you
know is there any sort of regulatory
things you face or how does that work
we do there's not much regulatory stuff
yet on AI I'm pretty sure there will
be regulation in the not-distant future
I really think there should be I want to
show you guys some key parts of this
letter because there are some parts that
need to be brought to your attention as
you may know the people that wrote this
letter William Saunders and Daniel
Kokotajlo were people that actually worked
at OpenAI and left due to safety
concerns this letter was released today
you can see it's August 22nd 2024 and
the letter starts by stating that OpenAI
and other companies are racing to build
artificial general intelligence or AGI
systems that are generally smarter than
humans and indeed it's right in OpenAI's mission
statement and the company is Raising
billions of dollars to achieve this goal
and along the way they create systems
that pose a risk of critical harms to
society such as unprecedented cyber
attacks or assisting in the creation of
biological weapons and if they succeed
entirely artificial general intelligence
will be the most powerful technology
ever invented I'm going to highlight
that because that is a clear statement
that most people truly haven't grasped
yet now you can see here that they said
we joined openai because we wanted to
ensure the safety of incredibly powerful
AI systems that the company is
developing but we resigned from
OpenAI because we lost trust that it
would safely honestly and responsibly
deploy its AI systems in light of that
we are not surprised by OpenAI's
decision to lobby against SB 1047 it
clearly states here that developing
Frontier models without adequate safety
precautions poses foreseeable risks of
catastrophic harm to the public we are
not the only ones concerned about the
rapid advances of AI technology and
earlier this year science published
managing extreme AI risks amid rapid
progress a consensus paper from 25
leading scientists describing extreme
risks from upcoming Advanced AI systems
and Sam Altman agreed he stated that the
worst case scenario for AI could be
lights out for all of us so this
statement right here is actually quite
true Sam Altman has stated on multiple
different occasions how dangerous these AI
systems could be and the kind of things
that could happen and every time I hear
these individuals talk about the safety
precautions of OpenAI I do truly
wonder how powerful the systems that
they do have are and if the current
preparedness framework that they're
currently using in order to deploy models
safely is actually going to be something
that they stick by considering the fact
that we are now in these terminal race
conditions in which companies are forced
to outdo one another in order to gain
customer satisfaction you can see right
here there are some key issues to where
they describe how OpenAI has previously
not safely deployed their systems it says
in the absence of whistleblower
protections OpenAI demanded we sign away
our rights to ever criticize the company
under threat of losing millions of
dollars in vested equity when we resigned
despite the company touting cautious and
gradual deployment practices GPT-4 was
deployed prematurely in India in direct
violation of OpenAI's internal safety
procedures and more famously OpenAI
provided technology to Bing's chatbot
which then threatened and attempted to
manipulate users and OpenAI claimed to
have strict internal security controls
despite a major security breach and
other internal security concerns the
company also fired a colleague in part
for raising concerns about their
security practices that of course is
referring to Leopold Aschenbrenner now
you can see right here they also spoke
about how prominent safety researchers
have left the company including
cofounders the head of the team responsible
for controlling smarter than human AI
systems said on resignation that the
company was long overdue in getting
incredibly serious about the
implications of AGI and that safety
culture has taken a backseat to shiny
products while these incidents did not
cause catastrophic harms that's only
because truly dangerous systems have not
yet been built not because companies
have safety processes that could truly
handle dangerous systems we believe that
there should be public involvement in
decisions around high-risk AI systems and
SB 1047 creates a space for this to happen it
requires publishing a Safety and
Security protocol to inform the public
about safety standards and it protects
whistleblowers who raise concerns to the
California attorney general if a model
poses an unreasonable risk of
causing or enabling
critical harm it says it
provides a possibility of consequences
for companies if they mislead the public
and in doing so cause harm or imminent
threat to public safety and it strikes a
careful balance that protects legitimate
IP interests now what's interesting
about this is that they say here that
OpenAI's complaints about SB 1047 are
not constructive and don't seem to be in
good faith they state that OpenAI's
proposals don't protect whistleblowers and do
nothing to prevent a company from
releasing a product that would
foreseeably cause catastrophic harm to
the public and it's perfectly clear that
they are not a substitute for SB 1047
and OpenAI knows as much so basically
what they're stating right here is that
currently in the AI space we are waiting
for a disaster to happen I know many
people think that the AI debate is one
that is just pointless but I mean these
guys do actually genuinely have a point
about this companies have completely
disregarded safety precautions in order
to get products into users hands as
quick as possible and now with the
future Cycles ahead of us we know that
systems are going to be a lot
smarter a lot more capable and thus a
lot more dangerous if this is true
looking historically at how companies
have acted in the past can we not see
how releasing a product that would
foreseeably cause catastrophic harm
could be possible in the near to
short-term future and I think this is
you know plausible it does say that we
cannot wait for Congress to act they've
explicitly said that they aren't willing
to pass meaningful AI regulation and if
they ever do it can preempt California
regulation and Anthropic
joins sensible observers in worrying that
congressional action will not occur in
the necessary window of time they
basically State here that SB 1047
requirements are things that AI
developers including OpenAI have
already largely agreed to in voluntary
commitments to the White House and the
main difference is that SB 1047 would
force developers to show the public that
they're keeping those commitments and
hold them accountable if they don't now
of course this is where they talk about
the fear of a mass exodus of AI
developers and it says the fears of a
mass exodus of AI developers from the
state are contrived OpenAI said the
same thing about the EU AI act but it
didn't happen California is the best
place in the world to do AI research and
what's more the Bill's requirements
would apply to anyone doing business in
CA regardless of their location and it's
extremely disappointing to see our
former employer pursue Scare Tactics to
derail AI safety legislation and here's
the main point from all of this they
state that Sam Altman our former boss has
repeatedly called for AI regulation now
when actual regulation is on the table
he opposes it and he said that
previously obviously they would support
regulation but yet OpenAI opposes
even the extremely light-touch
requirements in SB 1047 most of which
they claim they voluntarily commit to
raising questions about the strength of
those commitments like I said before this
letter was written by William Saunders
and of course Daniel Kokotajlo a former
OpenAI policy staff member so this is
something that is rather surprising
considering the fact that OpenAI has
consistently shown their position when
it comes to regulations surrounding AI
because they've seemingly been rather
supportive however maybe when it's
actually coming to it right now for
whatever reason they're on the fence now
interestingly enough former OpenAI
employees are not the only people that
have written about this letter and the
issues that this kind of poses to the
area here we can see Anthropic's letter
that was written just yesterday it does
say a few things and some of these that
I want to bring to your attention are
pretty incredible so you can see right
here it says pros and cons of SB 1047
it says we want to be clear as we were
in our original support if amended letter that SB
1047 addresses real and serious concerns
with catastrophic risk in AI systems AI
systems advancing today are gaining
capabilities extremely quickly which
offer both great promise for
California's economy and substantial
risk and our work and this is where it
gets interesting is that our work with
biodefense experts cyber experts and
others shows a trend for the potential
for serious misuse in the coming years
perhaps as little as one to three years
that's a crazy statement but when you
think about the pace of AI development
don't think that this isn't a
possibility and here's some of the key
things about this paper just the bits
that you might want to pay attention
to where it says here are some thoughts
about regulating frontier AI systems
regardless of whether or not SB 1047 is
adopted California will be grappling
with how to regulate AI technology for
years to come and it says below we share
our general perspective on AI regulation
which we hope may be useful considering
both SB 1047 and future regulatory
efforts that might occur instead of or in
addition to it so basically they're
stating some of the problems here that
most regulatory pieces fail to address
and one of the key issues that I've seen
before is of course that you know
regulation is outpaced by the speed of
progress regulating things usually does
take time you've got different bills
that you have to pass you've got like
all these committees and you know
honestly just government nonsense which
is really slow but I completely
understand why it needs to go through so
many different areas before a bill is
passed but the point here is that this
doesn't work well with AI because AI is
just advancing extremely rapidly so it
says here that on one hand this means
that regulation is urgently needed on
some issues we believe that these
technologies will present
serious risks to the public in the near
future and on the other hand because the
field is advancing so quickly strategies
for mitigating risk are in a state of
Rapid Evolution often resembling
scientific research problems more than
they resemble established best practices
and we believe that this is genuinely
one of the most difficult dilemmas and
it's an important driver of the
Divergence in views among different
experts on
SB 1047 and in general and it's rightly
said trying to regulate something that
changes literally every 12 months is you
know insane like it's just so hard to do
that and one resolution to this dilemma
which they've spoken about is very
adaptable regulation in grappling with
the Dilemma above we've come to the view
that the best solution is to have a
regulatory framework that is very
adaptable to Rapid change in the field
which does make sense it says in terms
of specific properties of an AI Frontier
Model regulatory framework we see three
key elements as essential transparent
Safety and Security practices at present
many AI companies consider it necessary
to have detailed Safety and Security
plans for managing AI catastrophic risk
but the public and lawmakers have no way
to verify adherence to these plans or
the outcome of any test run as a part of
them basically what they're stating here
is that look these guys always state
that okay we're going to test if these
models pass a certain threshold and if
it passes a certain threshold we're
never going to release the model but how
do we know what is going on internally
if they don't release these findings to
anyone they could simply release models
that are completely dangerous if they
haven't tested them in certain ways
transparency in this area would
create public accountability accelerate
industry learning and promote a race to
the top with very few downsides and
Anthropic also talks about incentives to
make Safety and Security plans effective
in preventing catastrophe basically what
they're stating here is that look you
can prescribe rules all day but the main
thing that you need to do is incentivize
the right outcome this is you know how
humans are driven if you incentivize
someone with the right thing they're
always going to do what you want them to
do you can see here it says we believe
it is critical to have some
framework for managing Frontier AI
systems that roughly meets these
requirements and as AI systems become
more powerful it's crucial for us to
ensure we have appropriate regulations
in place to ensure their safety
sincerely Dario Amodei CEO of Anthropic so
overall what we have here is a
comprehensive view of where companies
stand it's clear that Anthropic does
want regulation but understands that
even the current regulation as it's
proposed isn't going to do what it needs
to and OpenAI seems to be edging towards
not regulating their systems
surprisingly considering their recent
position regarding regulating AI systems
either way I do want to know if this
legislation is going to be accepted or
not it seems to be rather interesting
where everyone stands regulating AI is
most certainly hard let me know what you
guys think about AI regulation do you
think it makes sense do you think things
like this are going to work and if you
guys do want to know about OpenAI's
approach to safety this is their
preparedness framework beta and
basically they do have an updated one
but I can't find it but the long story
short is that you know if models reach a
certain level they're basically saying
they won't release them which is why
I've said that you know um if the model
basically gets high or critical on
certain evaluations they're not going to
release them which is why I've said that
before I don't think we're going to get
Frontier models in certain areas because
it's going to be pretty hard um to do
that whilst increasing the knowledge of
the model so you've got cyber security
you know um this one is biological and
other threats this one is persuasion and
this is models autonomy so this is
basically atic Behavior to go off and do
stuff that's pretty insane so um I
personally do believe that what we're
walking into is you know a gray area
because regulation is pretty difficult
but here's what I think is going to
happen I think that you know regulation
will lag behind development and
somewhere somehow something's going to
happen and whenever it does happen it's
probably going to then force a
regulation like usually what happens is
in spaces that are pretty Innovative
since regulation can't keep up with
what's going on and Frameworks like this
might not always be effective
unfortunately we're probably going to
have to wait for something bad to happen
and then once the bad thing happens we put
in regulation to prevent that incident
from happening again for example if we
look at the TSA the tragedies that
happened in America how it completely
changed air travel things like that I do
think unfortunately we're probably going
to have to see another scenario like
that I do hope that that isn't the case
I would much rather regulation just
allows these companies to also innovate
and also not share their secrets because
I think that's the main thing that
they're scared of but I guess we'll have
to see
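As a footnote to the preparedness framework discussion near the end of the transcript, here is a minimal sketch in Python of the release gate the video describes: if a model scores high or critical on any tracked risk category, it is not released. The category names and level labels follow the transcript's description and are assumptions for illustration, not OpenAI's actual interface or exact policy.

```python
# Sketch of the deployment gate described in the transcript, assuming
# hypothetical category names and risk levels; not OpenAI's real API.
BLOCKING_LEVELS = {"high", "critical"}

def can_release(evaluations: dict[str, str]) -> bool:
    """Release only if no tracked risk category scores high or critical."""
    return not any(level in BLOCKING_LEVELS for level in evaluations.values())

scores = {
    "cybersecurity": "medium",
    "biological_and_other_threats": "low",
    "persuasion": "medium",
    "model_autonomy": "high",  # agentic behavior, per the transcript
}
print(can_release(scores))  # False: model autonomy scored "high"
```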