OpenAI Former Employees Reveal NEW Details In Surprising Letter...

TheAIGRID
25 Aug 2024 Β· 17:56

Summary

TLDR: The California Senate Bill 1047, dubbed the 'Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act', has ignited debate within the AI industry. The bill seeks to regulate costly AI models, mandating safety assessments and compliance with audits. Critics fear it may stifle innovation and benefit large tech firms. OpenAI whistleblowers argue that regulation is necessary to prevent AI misuse, while others, including OpenAI's CEO, warn the bill could hamper California's AI progress. The debate underscores the difficulty of regulating rapidly evolving technology and the urgent need for adaptable frameworks that ensure safety without stifling innovation.

Takeaways

  • πŸ“œ The California Senate Bill 1047, also known as the 'Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act', is a legislative proposal aiming to regulate advanced AI models for safety and ethical deployment.
  • πŸ’‘ The bill specifically targets AI models that require substantial investment, costing over $100 million to train, and mandates developers to conduct safety assessments and comply with annual audits and safety standards.
  • πŸ” A new regulatory oversight body, the 'Frontier Model Division' within the Department of Technology, would be responsible for ensuring compliance and could impose penalties for violations, including fines up to 30% of the model's development costs.
  • πŸ€” The bill has sparked controversy, with some arguing it's necessary for preventing potential AI harms, while critics fear it could stifle innovation and consolidate power among large tech companies.
  • πŸ—£οΈ Critics, including tech companies and AI researchers, argue that the bill's focus on AI models rather than their applications could hinder innovation and place undue burdens on startups and open-source projects.
  • πŸ”‘ The language of the bill is considered vague, leading to concerns about compliance and liability for developers.
  • πŸ—£οΈ OpenAI's Chief Strategy Officer, Jason Kwon, has expressed mixed views on AI regulation, acknowledging the need for regulation but also warning that SB 1047 could slow innovation and lead to a brain drain from California.
  • 🚨 OpenAI whistleblowers, including former employees, have expressed concerns about the safety of AI systems, stating that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.
  • πŸ“ A letter from OpenAI whistleblowers highlights the company's internal safety issues and premature deployment of AI systems, suggesting a lack of adherence to safety protocols.
  • 🌐 Anthropic, in its letter, acknowledges the need for regulation and the challenges of keeping pace with rapidly advancing AI technology, suggesting the need for adaptable and transparent regulatory frameworks.
  • πŸ›‘οΈ The debate around SB 1047 underscores the broader issue of balancing innovation with safety and the difficulty of creating effective regulations in a fast-evolving field like AI.

Q & A

  • What is the California Senate Bill 1047?

    -California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, is a legislative proposal aimed at regulating advanced AI models to ensure their safe development and deployment.

  • What are the key aspects of Senate Bill 1047?

    -The key aspects of SB 1047 include targeting AI models that require substantial investment, specifically those costing over $100 million to train. It mandates developers to conduct safety assessments, certify that their models do not enable hazardous capabilities, and comply with annual audits and safety standards.

  • What is the role of the new Frontier Model Division within the Department of Technology?

    -The Frontier Model Division within the Department of Technology would oversee the implementation of the regulations set by SB 1047. It is responsible for ensuring compliance and could impose penalties for violations, potentially up to 30% of the model's development costs.

  • Why is Senate Bill 1047 considered controversial?

    -Senate Bill 1047 is considered controversial because critics argue that it could stifle innovation and concentrate power among a few large tech companies. There are concerns about the bill's vague language, compliance, and liability for developers, and the potential for hindering innovation and placing undue burdens on startups and open source projects.

  • What are the concerns raised by OpenAI whistleblowers about the bill?

    -OpenAI whistleblowers have raised concerns that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public. They argue that the bill is necessary to prevent potential harms from advanced AI and that the rapid advances of AI technology necessitate regulation.

  • What is the stance of OpenAI's Chief Strategy Officer, Jason Kwon, on AI regulation?

    -Jason Kwon, OpenAI's Chief Strategy Officer, has stated that OpenAI has always believed AI should be regulated and that this commitment remains unchanged. However, he also expressed concerns that SB 1047 could threaten California's growth, slow the pace of innovation, and lead to a mass exodus of AI talent from the state.

  • What does the letter from OpenAI whistleblowers highlight about the company's safety practices?

    -The letter from OpenAI whistleblowers highlights concerns about the company's safety practices, stating that its authors joined OpenAI to ensure the safety of powerful AI systems but resigned due to a loss of trust in the company's ability to deploy AI systems safely, honestly, and responsibly.

  • What are the key points from Anthropic's letter regarding SB 1047?

    -Anthropic's letter acknowledges the real and serious concerns with catastrophic risk in AI systems. It suggests that a regulatory framework that is adaptable to rapid change in the field is necessary and emphasizes the importance of transparent safety and security practices, incentives for effective safety plans, and public involvement in decisions around high-risk AI systems.

  • What is the main argument against the current approach to AI regulation as stated by Anthropic?

    -Anthropic argues that the current approach to AI regulation is not keeping pace with the rapid advancements in AI technology. They believe that regulation strategies need to be adaptable and that the field is evolving so quickly that traditional regulatory processes are not effective.

  • What does the video suggest about the future of AI regulation?

    -The video suggests that AI regulation is a complex and challenging issue. It implies that current regulatory efforts might not be sufficient to keep up with the rapid pace of AI development and that there might be a need for more adaptable and transparent regulatory frameworks. It also raises the possibility that a significant incident might be necessary to catalyze effective regulation.

Outlines

00:00

πŸ“œ California's AI Regulation Debate

The California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, is a legislative proposal that seeks to regulate advanced AI models with a focus on safety in their development and deployment. The bill has sparked controversy within the AI industry due to its potential impact on innovation. It targets AI models costing over $100 million to train, mandating safety assessments, certification against hazardous capabilities, and compliance with annual audits and safety standards. Critics argue that the bill's vague language could stifle innovation and concentrate power among large tech companies, while supporters believe it is necessary to prevent potential harms from advanced AI. The bill also proposes the creation of a Frontier Model Division within the Department of Technology to oversee compliance and impose penalties for violations.

05:00

🚨 Whistleblower Concerns on AI Safety

Whistleblowers from OpenAI have raised concerns about the safety and responsible deployment of AI systems, leading to their resignation from the company. They argue that developing frontier AI models without adequate safety precautions poses significant risks to the public. The whistleblowers are not alone in their concerns, as a consensus paper from 25 leading scientists also describes extreme risks from upcoming advanced AI systems. OpenAI's internal practices have been criticized for not aligning with their public safety stance, including premature deployment and security breaches. The whistleblowers advocate for public involvement in decisions around high-risk AI systems and support the provisions of SB 1047 that require transparency and protection for whistleblowers.

10:02

πŸ€– The Challenge of Regulating Rapidly Advancing AI

The rapid advancement of AI technology presents a significant challenge for regulation. While the need for regulation is urgent due to the potential risks AI systems pose, the field's fast pace makes it difficult to create effective and lasting regulatory frameworks. Companies like Anthropic recognize the need for adaptable regulation that can keep up with the rapid changes in AI. They propose a regulatory framework with transparent safety and security practices, incentives for effective safety plans, and accountability measures. The debate over SB 1047 reflects the broader struggle to balance innovation with the need for safety and accountability in AI development.

15:03

πŸ›‘ The Future of AI Regulation and Safety

The future of AI regulation is uncertain, with companies like OpenAI and Anthropic taking different stances on the proposed SB 1047. While Anthropic supports the need for regulation and the principles outlined in the bill, OpenAI has opposed it, raising questions about the strength of their commitment to safety. The fear of a mass exodus of AI developers from California due to strict regulations is seen as unfounded, with the belief that California remains the best place for AI research. The debate highlights the need for a careful balance between innovation and safety, with the recognition that regulation will likely lag behind development until a significant event prompts more stringent measures.

Keywords

πŸ’‘ California Senate Bill 1047

California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, is a legislative proposal aimed at regulating advanced AI models to ensure their safe development and deployment. It is central to the video's theme as it represents the ongoing debate on AI regulation. The script discusses the bill's stipulations and the controversy surrounding it, including the potential impact on innovation and the concerns raised by industry figures and whistleblowers.

πŸ’‘ Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the context of the video, AI is the subject of regulatory discussion due to its rapid advancement and potential risks. The script highlights the need for safety assessments and the development of regulations to prevent potential harms from advanced AI systems.

πŸ’‘ Safety Assessments

Safety assessments are evaluations conducted to ensure that a product or system does not pose unreasonable risks. Within the video, it is mentioned that under the bill, developers are mandated to conduct safety assessments for AI models costing over $100 million to train, ensuring they do not enable hazardous capabilities. This concept is integral to the discussion on how to manage the risks associated with advanced AI.

πŸ’‘ Regulatory Oversight

Regulatory oversight refers to the supervision and enforcement of rules and regulations by a governing body. The video discusses the creation of a new Frontier Model Division within the Department of Technology to oversee the implementation of AI regulations, ensuring compliance and imposing penalties for violations, which underscores the importance of governance in AI development.

πŸ’‘ Whistleblowers

Whistleblowers are individuals who reveal confidential information or activity within an organization to the public or to those in positions of authority. The script mentions OpenAI whistleblowers who have voiced their concerns about the safety and responsible deployment of AI systems, highlighting the internal conflicts and the push for transparency and safety in AI development.

πŸ’‘ Innovation

Innovation refers to the process of translating an idea or invention into a good or service that creates value or for which customers will pay. The video script debates how the regulation of AI could potentially stifle innovation, concentrating power among large tech companies and placing undue burdens on startups and open-source projects, which is a key concern in the AI industry.

πŸ’‘ Compliance

Compliance refers to the act of conforming to a set of rules, regulations, or instructions. In the context of the video, compliance is a significant issue as the bill's vague language leads to concerns about how developers can adhere to the new regulations without clear guidelines, potentially impacting the development and deployment of AI models.

πŸ’‘ Anthropic

Anthropic is an AI research and deployment company that focuses on responsible AI development. In the script, statements from Anthropic are mentioned as part of the broader discussion on AI regulation, indicating the company's stance and the industry's varied perspectives on how to approach the governance of AI systems.

πŸ’‘ OpenAI

OpenAI is a research laboratory that develops AI technologies with the aim of ensuring that they benefit all of humanity. The video script discusses OpenAI's stance on regulation, including internal conflicts and the differing opinions of its executives and former members, which is central to understanding the complexities of AI regulation and industry responses.

πŸ’‘ AGI (Artificial General Intelligence)

Artificial General Intelligence refers to an AI system that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. The script mentions the race to build AGI and the associated risks, emphasizing the urgency and importance of safety measures in AI development.

πŸ’‘ Public Involvement

Public involvement refers to the process of engaging the public in decision-making processes. The video script discusses the need for public involvement in decisions around high-risk AI systems, such as through the publication of safety and security protocols, which is crucial for transparency and accountability in AI regulation.

Highlights

California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, aims to regulate advanced AI models to ensure their safe development and deployment.

The bill has sparked controversy in the AI industry due to its potential impact on the future of AI, with differing opinions from companies like Anthropic and OpenAI.

SB 1047 targets AI models requiring substantial investment, specifically those costing over $100 million to train, mandating safety assessments and compliance with annual audits and safety standards.

A new Frontier Model Division within the Department of Technology would oversee the implementation of these regulations, ensuring compliance and imposing penalties for violations.

Critics argue that the bill's vague language could lead to concerns about compliance and liability, potentially stifling innovation and concentrating power among large tech companies.

OpenAI whistleblowers have published a letter explaining their reasoning for supporting the bill and criticizing OpenAI's decision to lobby against it.

OpenAI's Chief Strategy Officer, Jason Kwon, has made contrasting statements about the need for AI regulation, reflecting internal debate on the issue.

Sam Altman, OpenAI's CEO, has previously called for AI regulation but now opposes the bill, raising questions about the strength of OpenAI's commitments to safety.

The letter from OpenAI whistleblowers, William Saunders and Daniel Kokotajlo, highlights their safety concerns and reasons for resigning from OpenAI.

The whistleblowers argue that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public.

Science published a consensus paper from 25 leading scientists describing extreme risks from upcoming advanced AI systems, indicating a growing concern in the scientific community.

The whistleblowers claim that OpenAI has not safely deployed their systems in the past, citing examples of premature deployment and security breaches.

Prominent safety researchers have left OpenAI, citing concerns about the company's approach to safety and the prioritization of products over safety culture.

SB 1047 requires publishing a Safety and Security protocol to inform the public about safety standards and protects whistleblowers who raise concerns.

Anthropic's letter acknowledges that SB 1047 addresses real and serious concerns about catastrophic risk, and argues that, whatever happens to the bill, regulation needs to be adaptable to the field's rapid change.

The letter from Anthropic emphasizes the importance of transparent safety and security practices and incentives to make safety plans effective in preventing catastrophes.

Dario Amodei, CEO of Anthropic, stresses the importance of having appropriate regulations in place to ensure the safe development of increasingly powerful AI systems.

The debate over SB 1047 reflects the broader challenges of regulating AI, including the rapid pace of development and the need for adaptable regulatory frameworks.

Transcripts

00:00

The California Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, is a legislative proposal aimed at regulating advanced AI models to ensure their safe development and deployment. This has been one of the most controversial discussions going around the AI industry, because there are many different things that could really impact the future of AI, including statements from Anthropic, a few OpenAI whistleblowers, and key industry figures. So in this video I'm going to dive into all the key aspects, and then into why this is such a contentious issue.

00:40

The key aspects of SB 1047 are as follows. The bill targets AI models that require substantial investment, specifically those costing over $100 million to train. It mandates developers to conduct safety assessments, certify that their models do not enable hazardous capabilities, and comply with annual audits and safety standards. There's also regulatory oversight: a new Frontier Model Division within the Department of Technology would oversee the implementation of these regulations. This division would be responsible for ensuring compliance and could impose penalties for violations, potentially up to 30% of the model's development costs. Now, some individuals have argued that bills like this are necessary to prevent potential harms from advanced AI, while critics claim that it could stifle innovation and concentrate power among a few large tech companies. The bill's language is considered vague, leading to concerns about compliance and liability for developers, and many critics, including many tech companies and AI researchers, argue that the bill's focus on the AI models themselves, rather than their applications, could hinder innovation and place undue burdens on startups and open-source projects. They fear it could lead to a consolidation of AI development power and slow down progress in California.

01:57

Now, today there was a letter from OpenAI whistleblowers in which they explain the reasoning for their position, and their position is driven by OpenAI's recent statements. You can see that Control AI, a Twitter account and organization focused on AI safety, tweets this: OpenAI's Chief Strategy Officer Jason Kwon said last week that "we've always believed that AI should be regulated and that commitment remains unchanged." However, this week his statements were quite different. He says that the AI revolution is just beginning and California's unique status as the global leader in AI is fueling the state's economic dynamism; SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere. And interestingly enough, Sam Altman has clearly stated that we do need AI regulation; this is him talking in October of 2020 about how these systems should be regulated.

03:00

Interviewer: You know, there's kind of a cohort in Silicon Valley that's very worried about what AI could do to humanity. Does that concern you at all?

Altman: For sure. I think it's going to be fine, but I also think it's very bad thinking to not take the apocalyptic downside very seriously. I am more optimistic than I used to be that we can get through this; just saying "oh, don't worry about it, it's going to be fine" is a very bad strategy. I'm super proud of the safety team and the policy team that we have at OpenAI, and there's very good technical work to do. We're doing some of it, others are doing some, and we should probably all do more about how we build these systems in a way where they're very humanized.

Interviewer: How can we have some sort of way for people to feel confident that technical experts are taking the necessary safety steps, given the consequences of potential mistakes? Or do you think people should be able to just trust...?

Altman: No, I don't. I think there has to be government, and we're trying to push for this as much as we can.

Interviewer: And how, so far, have you found the interplay between governments and AI? Do you work with government regularly? Is there any sort of regulatory thing you face?

Altman: There's not much regulatory stuff yet on AI. I'm pretty sure there will be regulation in the not-too-distant future; I really think there should be.

04:07

I want to show you some key parts of this letter, because there are parts that need to be brought to your attention. As you may know, the people who wrote it, William Saunders and Daniel Kokotajlo, actually worked at OpenAI and left due to safety concerns. The letter was released today; you can see it's dated August 22nd, 2024. It starts by stating that OpenAI and other companies are racing to build artificial general intelligence (AGI), AI systems that are generally smarter than humans, as written in OpenAI's mission statement, and the company is raising billions of dollars to achieve this goal. Along the way they create systems that pose a risk of critical harms to society, such as unprecedented cyber attacks or assisting in the creation of biological weapons, and if they succeed entirely, artificial general intelligence will be the most powerful technology ever invented. I'm going to highlight that, because it is a clear statement that most people truly haven't grasped yet.

05:02

You can see here that they said: "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing, but we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly deploy its AI systems. In light of that, we are not surprised by OpenAI's decision to lobby against SB 1047." The letter clearly states that developing frontier models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public, and that they are not the only ones concerned about the rapid advances of AI technology: earlier this year Science published "Managing Extreme AI Risks Amid Rapid Progress," a consensus paper from 25 leading scientists describing extreme risks from upcoming advanced AI systems. Sam Altman agreed; he has stated that the worst-case scenario for AI could be "lights out for all of us." So this statement is actually quite true: Altman has said on multiple occasions how dangerous these AI systems could be and the kinds of things that could happen. Every time I hear these individuals talk about the safety precautions at OpenAI, I truly wonder how powerful the systems they have are, and whether the preparedness framework they currently use to deploy models safely is something they will actually stick by, considering that we are now in terminal race conditions in which companies are forced to outdo one another to win customers.

06:26

You can see right here there are some key issues where they describe how OpenAI has previously not safely deployed its systems. It says: in the absence of whistleblower protections, OpenAI demanded we sign away our rights to ever criticize the company, under threat of losing millions of dollars of vested equity, when we resigned. For a company touting cautious and gradual deployment practices, GPT-4 was deployed prematurely in India in direct violation of OpenAI's internal safety procedures, and, more famously, OpenAI provided technology to Bing's chatbot, which then threatened and attempted to manipulate users. OpenAI claimed to have strict internal security controls despite a major security breach and other internal security concerns, and the company also fired a colleague in part for raising concerns about its security practices; that, of course, refers to Leopold Aschenbrenner. They also spoke about how prominent safety researchers have left the company, including co-founders. The head of the team responsible for controlling smarter-than-human AI systems said on resignation that the company was long overdue in getting incredibly serious about the implications of AGI, and that safety culture has taken a backseat to shiny products. While these incidents did not cause catastrophic harms, that's only because truly dangerous systems have not yet been built, not because companies have safety processes that could truly handle dangerous systems.

07:51

"We believe that there should be public involvement in decisions around high-risk AI systems, and SB 1047 creates a space for this to happen. It requires publishing a safety and security protocol to inform the public about safety standards, and it protects whistleblowers who raise concerns to the California Attorney General if a model poses an unreasonable risk of causing, or being capable of causing or enabling, critical harm. It provides the possibility of consequences for companies if they mislead the public and in doing so cause harm or an imminent threat to public safety, and it strikes a careful balance that protects legitimate IP interests." What's interesting is that they say OpenAI's complaints about SB 1047 are not constructive and don't seem to be in good faith: OpenAI's proposed alternatives don't protect whistleblowers, do nothing to prevent a company from releasing a product that would foreseeably cause catastrophic harm to the public, and are clearly not a substitute for SB 1047, as OpenAI knows. So basically what they're stating here is that currently, in the AI space, we are waiting for a disaster to happen.

people think that the AI debate is one

play09:00

that is just pointless but I mean these

play09:02

guys do actually genuinely have a point

play09:05

about this companies have completely

play09:06

disregarded safety precautions in order

play09:08

to get products into users hands as

play09:10

quick as possible and now with the

play09:12

future Cycles ahead of us we know that

play09:15

systems are going to be a lot more

play09:16

smarter a lot more capable and thus a

play09:19

lot more dangerous if this is true

play09:22

looking historically at how companies

play09:23

have acted in the past can we not see

play09:25

how releasing a product that would

play09:27

foreseeably cause catastrophic harm

play09:29

could could be possible in the near to

play09:30

short-term future and I think this is

play09:32

you know plausible it does say that we

play09:34

cannot wait for Kress to act they've

play09:36

explicitly said that they aren't willing

play09:37

to pass meaningful AI regulation and if

play09:39

they ever do it can preempt California

play09:42

it can preempt C regulation an anthropic

play09:44

join sensible observers when it worries

play09:46

congressional action will not occur in

play09:47

the necessary window of time they

play09:49

basically State here that SB 1047

play09:51

requirements are things that AI

play09:52

developers including open AI have

play09:55

already largely agreed to involuntary

play09:57

commitments to the White House and S the

play09:59

main difference is that s SP 1047 would

play10:02

force developers to show the public that

play10:04

they're keeping those commitments and

play10:05

hold them accountable if they don't now

play10:07

of course this is where they talk about

play10:09

the fear of mass of Exodus of AI

play10:11

developers and it says the fears of a

play10:13

mass of Exodus of AI developers from the

play10:15

state are contri opening ey said the

play10:16

same thing about the EU AI act but it

play10:18

didn't happen California is the best

play10:21

place in the world to do AI research and

play10:23

what's more the Bill's requirements

play10:24

would apply to anyone doing business in

play10:26

CA regardless of their location and it's

play10:29

extremely disappointing to see our

play10:31

former employer pursue Scare Tactics to

play10:33

derail AI safety legislation and here's

play10:36

the main point from all of this they

play10:38

state that Sam Alman our former boss has

play10:40

repeatedly called for a regulation now

play10:43

when actual regulation is on the table

play10:45

he opposes it and he said that

play10:47

previously obviously they would support

play10:48

all regulation but yet openai opposes

play10:51

the even extremely light touch

play10:53

requirements in SB 1047 most of which

play10:56

they claim they voluntarily commit to

play10:58

raising the questions the strengths of

play10:59

those commit like I said before this

11:01

Like I said before, this letter was written by William Saunders and, of course, Daniel Kokotajlo, a former member of OpenAI's policy staff. This is rather surprising, considering that OpenAI have consistently signalled their position on regulations surrounding AI and have seemingly been rather supportive; however, now that it's actually coming to it, for whatever reason, they're on the fence.

11:25

Interestingly enough, former OpenAI members are not the only people who have written about this bill and the issues it poses. Here we can see Anthropic's letter, which was written just yesterday, and some of what it says is pretty remarkable. On the pros and cons of SB 1047, it says: "We want to be clear, as we were in our original support-if-amended letter, that SB 1047 addresses real and serious concerns with catastrophic risk in AI systems. AI systems advancing today are gaining capabilities extremely quickly, which offer both great promise for California's economy and substantial risk." And, this is where it gets interesting: "Our work with biodefense experts, cyber experts, and others shows a trend towards the potential for serious misuse in the coming years, perhaps in as little as one to three years." That's a crazy statement, but when you think about the pace of AI development, don't assume it isn't a possibility.

12:24

Here are some of the key parts of this letter you might want to pay attention to, where it shares thoughts on regulating frontier AI systems. Regardless of whether or not SB 1047 is adopted, California will be grappling with how to regulate AI technology for years to come, and it says: "Below we share our general perspective on AI regulation, which we hope may be useful in considering both SB 1047 and future regulatory efforts that might occur instead of or in addition to it." So basically they're describing problems that most regulatory efforts fail to address, and one of the key issues I've seen before is that regulation is outpaced by the speed of progress. Regulating things usually takes time: you've got different bills to pass, all these committees, and honestly just slow government process, though I completely understand why a bill needs to go through so many stages before it is passed. The point is that this doesn't work well with AI, because AI is advancing extremely rapidly. So it says that, on one hand, this means regulation is urgently needed on some issues; we believe these technologies will present serious risks to the public in the near future. On the other hand, because the field is advancing so quickly, strategies for mitigating risk are in a state of rapid evolution, often resembling scientific research problems more than established best practices. We believe this is genuinely one of the most difficult dilemmas, and it's an important driver of the divergence in views among different experts on SB 1047 and in general. And it's rightly said: trying to regulate something that changes literally every 12 months is insanely hard to do.

14:01

One resolution to this dilemma, which they've spoken about, is very adaptable regulation. "In grappling with the dilemma above, we've come to the view that the best solution is to have a regulatory framework that is very adaptable to rapid change in the field," which does make sense. "In terms of specific properties of an AI frontier model regulatory framework, we see three key elements as essential." The first is transparent safety and security practices: at present, many AI companies consider it necessary to have detailed safety and security plans for managing catastrophic AI risk, but the public and lawmakers have no way to verify adherence to these plans or the outcome of any tests run as part of them. Basically, what they're stating is that these companies always say, "we're going to test whether these models pass a certain threshold, and if they do, we're never going to release the model," but how do we know what is going on internally if they don't release those findings to anyone? They could simply release models that are completely dangerous if they haven't tested them in certain ways. Transparency in this area would create public accountability, accelerate industry learning, and promote a race to the top, with very few downsides.

15:07

Anthropic also talks about incentives to make safety and security plans effective in preventing catastrophe. Basically, what they're stating is that you can prescribe rules all day, but the main thing you need to do is incentivize the right outcome; that is how humans are driven, and if you incentivize someone with the right thing, they will do what you want them to do. You can see it says: "We believe it is critical to have some framework for managing frontier AI systems that roughly meets these requirements," and, "As AI systems become more powerful, it's crucial for us to ensure we have appropriate regulations in place to ensure they are developed safely. Sincerely, Dario Amodei, CEO of Anthropic."

play15:41

sincerely Dario amod CEO of anthropic so

play15:44

overall what we have here is a

play15:46

comprehensive view of where companies

play15:48

stand it's clear that anthropic does

play15:50

want regulation but understands that

play15:52

even the current regulation if it's

play15:53

proposed isn't going to do what it needs

play15:55

to and open AI seem to be edging towards

play15:59

not regulating their systems

play16:00

surprisingly considering their recent

play16:02

position regarding regulating AI system

play16:04

either way I do want to know if this

play16:06

legislation is going to be accepted or

play16:08

not it seems to be rather interesting

play16:10

where everyone stands regulating AI is

play16:12

most certainly hard let me know what you

play16:14

guys think about air regulation do you

play16:16

think it makes sense do you think things

play16:17

like this are going to work and if you

play16:19

guys do want to know about open ai's

play16:21

method of their safety this is their

play16:24

preparedness framework Beta And

play16:26

basically they do have an updated one

play16:27

but I can't find it but the long story

play16:29

short is that you know if models reach a

play16:31

certain level they're basically saying

play16:32

they won't release them which is why

play16:34

I've said that you know um and the model

play16:36

basically if it gets high or critical on

play16:39

certain evaluations they're not going to

play16:40

release them which is why I've said that

play16:42

before I don't think we're going to get

play16:43

Frontier models in certain areas because

play16:45

it's going to be pretty hard um to to do

play16:47

that whilst increasing the knowledge of

play16:49

the model so you've got cyber security

play16:51

you know um this one is biological and

play16:53

other threats this one is persuasion and

play16:55

this is models autonomy so this is

play16:56

basically atic Behavior to go off and do

play16:59

stuff that's pretty insane so um I

play17:01

personally do believe that what we're

play17:03

walking into is you know a gray area

play17:05

because regulation is pretty difficult

play17:07

but here's what I think is going to

play17:08

happen I think that you know regulation

play17:11

will lag behind a development and

play17:13

somewhere somehow something's going to

play17:15

happen and whenever it does happen it's

play17:17

probably going to then force a

play17:18

regulation like usually what happens is

play17:21

in spaces that are pretty Innovative

play17:22

since regulation can't keep up with

play17:24

what's going on and Frameworks like this

play17:26

might not always be effective

play17:27

unfortunately we're probably going to

play17:28

have to wait for something bad to happen

play17:30

and then once it bad happens we then put

play17:32

in regulation to prevent that regulation

play17:33

from happening again for example if we

play17:36

look at the TSA the tragedies that

play17:37

happened in America how it completely

play17:39

changed air travel things like that I do

play17:41

think unfortunately we're probably going

play17:43

to have to see another scenario like

play17:44

that I do hope that that isn't the case

play17:46

I would much rather regulation just

play17:49

allows these companies to also innovate

play17:51

and also not share their secrets because

play17:52

I think that's the main thing that

play17:53

they're scared of but I guess we'll have

play17:55

to see

Related Tags
AI Regulation Β· Innovation Β· Safety Β· California Bill Β· Whistleblowers Β· OpenAI Β· Anthropic Β· Tech Industry Β· Ethical AI Β· Future Risks