Bruce Schneier: AI and Trust

Kwaai
25 Mar 2024 · 14:51

Summary

TL;DR: Bruce Schneier explores the crucial role of trust in society, distinguishing between interpersonal trust, based on human connections, and social trust, enforced by laws and technology. He delves into the transformation of societal trust mechanisms with the advent of AI, highlighting the potential risks of AI systems acting as 'double agents' for corporate interests. Schneier argues for the necessity of government intervention to ensure AI transparency, safety, and accountability, advocating for public AI models to counterbalance corporate control. His insightful analysis calls for regulatory measures to foster trustworthy AI, emphasizing the government's role in sustaining social trust in the AI era.

Takeaways

  • 💬 Interpersonal and social trust are fundamental to society, with mechanisms in place to encourage trustworthy behavior.
  • 👥 Bruce Schneier's book 'Liars and Outliers' discusses four systems for enabling trust: innate morals, reputation, laws, and security technologies.
  • 📲 The advancement of technology, exemplified by platforms like Uber, has transformed traditional trust mechanisms, allowing trust among strangers based on systems rather than personal relationships.
  • 🛑 Laws and technologies scale better than personal morals and reputation, enabling more complex and larger societies by fostering social trust.
  • 👨‍💼 Corporate and governmental systems operate on social trust, relying on predictability and reliability rather than personal connections.
  • 🧑‍💻 AI presents a unique challenge in trust, blending interpersonal and social trust, leading to potential misunderstandings about its role and intentions.
  • 🍪 AI's relational and intimate nature could lead to it being mistakenly regarded as a friend rather than a service, obscuring the motivations of its corporate creators.
  • 🛠 It is crucial to develop trustworthy AI through transparency, understanding of training and biases, and clear legal frameworks to ensure its accountability.
  • 🌐 Bruce Schneier advocates for public AI models developed by academia, nonprofit groups, or governments to serve the public interest and provide a counterbalance to corporate-owned AI.
  • 📚 Governments have a key role in creating social trust and must actively regulate AI and its corporate developers to ensure a society where AI serves as a trustworthy service rather than a manipulative friend.

Q & A

  • What are the four systems mentioned in the transcript that enable trust in society?

    -The four systems mentioned for enabling trust in society are innate morals, concern about reputation, the laws we live under, and security technologies.

  • How do laws and security technologies differ from morals and reputation in terms of trust?

    -Laws and security technologies are described as more formal and scalable systems that enable cooperation among strangers and complex societal structures, while morals and reputation are person-to-person, based on human connection and interpersonal trust.

  • What example is given to illustrate how technology has changed trust in a professional context?

    -The transcript mentions Uber as an example, highlighting how technology and rules have made it safer and built trust between drivers and passengers, despite them being strangers.

  • What is the critical point about social trust and its scalability?

    -The critical point about social trust is that it scales better than interpersonal trust, enabling transactions and interactions without the need for personal relationships, such as obtaining loans algorithmically or trusting corporate systems for food safety.

  • How do the transcript's views on AI relate to existential risks and corporate interests?

    -The transcript suggests that fears of AI are often related to its potential for uncontrollable behavior and alignment with capitalism's profit motives. It highlights concerns about AI being used by corporations to maximize profits, potentially at the expense of individual trust and privacy.

  • Why are corporations likened to slow AIs, and what implications does this have?

    -Corporations are likened to slow AIs because they are profit-maximizing machines, suggesting that their actions are driven by profit goals rather than human-like interests or ethics. This comparison implies that future AI technologies controlled by corporations could prioritize corporate interests over individual well-being.

  • What concerns are raised about the relational and intimate nature of future AI systems?

    -The transcript raises concerns that future AI systems will be more relational and intimate, making it easier for them to influence users under the guise of personalized assistance, while potentially hiding corporate agendas and biases.

  • What does the transcript propose as a solution to ensure trustworthy AI?

    -The transcript proposes the development of public AI models, transparency laws, regulations on AI and robotic safety, and restrictions on corporations behind AI to ensure that AI systems are trustworthy, their biases and training understood, and their behavior predictable.

  • What role does government play in establishing social trust in the context of AI, according to the transcript?

    -According to the transcript, government plays a crucial role in establishing social trust by regulating AI and corporations, ensuring transparency, safety, and accountability in AI systems to protect societal interests and individual rights.

  • What distinction is made between 'corporate models' and 'public models' of AI in the transcript?

    -The distinction is that corporate models are owned and operated by private entities for profit, while public models are proposed to be built by the public for the public, ensuring universal access, political accountability, and a foundation for free-market innovation in AI.

Outlines

00:00

🔄 The Dynamics of Trust in Society

The first paragraph discusses the importance of interpersonal and social trust in maintaining societal functions. It introduces the concept that trust is built on mechanisms that encourage people to behave trustworthily, thus enabling a trusting society. The author references his previous work, 'Liars and Outliers,' to explain four systems that enable trust: innate morals, concern for reputation, laws, and security technologies. He highlights the evolution of trust from personal, based on human connections, to systemic, enforced by laws and technology, and how this transition allows for larger, more complex societies. Examples like the taxi industry transformation by Uber illustrate how systemic trust works through constant surveillance and the use of technology to ensure mutual trustworthiness without personal connections.

05:01

🚀 AI and the Illusion of Interpersonal Trust

The second paragraph expands on the dangers of blurring the line between services and friendships in the context of AI development. It warns that AI systems, by virtue of being relational and intimate, might trick users into ascribing humanlike traits to them, making manipulation easier. This intimacy could lead to an overreliance on AI for personal tasks, mistaking these services for friendships due to their conversational nature and deep knowledge of personal preferences and behaviors. The author argues that this confusion benefits corporations by making users more susceptible to manipulation. Moreover, this section touches on the concept of power dynamics, suggesting that reliance on AI might not always be a choice but a necessity, further complicating the trust relationship between humans and AI systems.

10:02

πŸ›‘οΈ Building Trustworthy AI Through Regulation

In the final paragraph, the necessity for trustworthy AI is emphasized, calling for government intervention to ensure AI's transparency, safety, and bias regulation. The author critiques the market's inability to self-regulate towards ethical AI usage, proposing that only through governmental action can social trust be fostered in the age of AI. He advocates for public AI models developed outside the corporate sphere to ensure AI technologies serve the public good transparently and accountably. The closing remarks stress the importance of government in regulating AI and corporations to maintain social trust, acknowledging the challenges but underscoring the necessity for such measures to thrive in a future shaped by AI.

Keywords

💡 Interpersonal Trust

Interpersonal trust refers to the confidence and reliance individuals have in one another on a personal level. In the context of the video, it's described as foundational to society, built on human connections and mutual understanding. Examples include trust in friends, family, and acquaintances, where there's a direct relationship. This form of trust is contrasted with social trust, emphasizing its basis on more intimate, human qualities like respect, integrity, and generosity.

💡 Social Trust

Social trust involves the confidence in systems, institutions, and structures that enable society to function. The video discusses social trust as being essential for larger, more complex societies, facilitated by laws and technology. It enables interactions among strangers and supports transactions and activities without personal connections. Examples from the video include trusting in companies like Uber or in the safety standards of airlines and restaurants, which are governed by overarching rules and regulations rather than personal relationships.

💡 Liars and Outliers

'Liars and Outliers' is mentioned as a book that explores the mechanisms of trust within society. The video references this book to highlight the four systems that enable trust: innate morals, reputation, laws, and security technologies. These systems range from personal to impersonal, with the latter two allowing for societal scale and cooperation among strangers. The book's insights serve as a foundation for discussing the broader themes of trust in relation to technology and AI.

💡 Surveillance Capitalism

Surveillance capitalism is a term used to describe a business model that revolves around the commodification of personal data with the core purpose of profit-making. The video highlights how contemporary internet services, including AI, often operate under this model, collecting vast amounts of personal information to tailor advertisements and manipulate consumer behavior. This practice is criticized for eroding trust, as it treats users not as individuals but as sources of data.

💡 AI as Double Agents

The concept of AI as 'double agents' is introduced to describe how AI systems can appear to serve the user's interests while simultaneously serving the hidden agendas of their corporate creators. This duality can lead to a misleading sense of trustworthiness, as users may not be aware of the underlying motivations driving the AI's behavior. The video uses this analogy to caution against blindly trusting AI systems without understanding their programming, biases, and the economic imperatives they are designed to fulfill.

💡 Category Error

A category error is a logical fallacy where things belonging to a particular category are presented as if they belong to a different category. The video discusses this in the context of confusing interpersonal trust with social trust or mistaking corporate services for personal relationships. This error can lead to misplaced trust, especially when individuals anthropomorphize AI or corporations, attributing them with human-like qualities or intentions they do not possess.

💡 Generative AI

Generative AI refers to AI systems capable of creating content, such as text, images, or music, that is novel and complex. The video touches upon the promise of generative AI as personal digital assistants that can act as advocates, butlers, or agents. However, it also warns of the intimacy and access to personal information these AIs might require, raising concerns about privacy, manipulation, and the erosion of interpersonal trust.

💡 Trustworthy AI

Trustworthy AI is an ideal where AI systems operate in a transparent, predictable, and unbiased manner, with their workings, training, and objectives being clear and understandable to users. The video advocates for the creation of AI that people can rely on to act in their best interests, without hidden agendas or manipulative practices. This involves regulatory measures to ensure AI's safety, transparency, and alignment with human values.

💡 Public AI Models

Public AI models are proposed as an alternative to corporate-controlled AI, built and maintained for the public good, with accountability to public needs and values. The video suggests these models would be developed by academia, nonprofit groups, or governments, ensuring universal access and transparency. The aim is to provide a foundation for AI innovations that benefit society at large, countering the dominance of profit-driven corporate AI models.

💡 Power Dynamics

Power dynamics refer to the ways in which power is distributed and exercised within a society or relationship. The video discusses power in the context of trust, noting that sometimes trust is not a choice but a necessity due to the power held by certain entities, like police or large corporations. This enforced trust can mask the power differential and make it difficult for individuals to recognize or challenge the authority of those entities, emphasizing the need for checks and balances in the form of regulations and public accountability.

Highlights

Interpersonal and social trust are essential to society, functioning through mechanisms that encourage trustworthy behavior.

The book 'Liars and Outliers' discusses four systems for enabling trust: innate morals, concern for reputation, laws, and security technologies.

Morals and reputation are personal and based on human connections, underpinning interpersonal trust.

Laws and security technologies scale better for complex societies, forming the basis of social trust.

Examples like Uber and algorithmic loans show how technology and rules can create trust among strangers.

Corporations and AI are perceived through the lens of social trust, yet we often mistakenly attribute qualities of interpersonal trust to them.

AI systems being relational and intimate will likely exacerbate issues of trust and manipulation.

Generative AI promises personal digital assistants but requires an unprecedented level of intimacy and data.

AI's human-like interfaces are design choices, potentially misleading users into misplacing trust.

The necessity for trustworthy AI governed by transparency, understood biases, and clear goals.

Government's role is critical in enforcing AI transparency, safety, and the trustworthiness of corporations behind AI technologies.

Public AI models built for and by the public could serve as a foundation for trustworthy and accessible AI innovations.

The importance of government intervention in creating social trust through AI regulation.

Challenges in regulating AI reflect on the broader difficulties governments face in managing technology and corporate power.

Concluding with the imperative for government action to ensure AI contributes to social trust and societal well-being.

Transcripts

00:02
[Music]

00:09
So interpersonal trust and social trust are both essential to society, and this is basically how it works: we have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This in turn allows others to be trusting, which enables trust in the society, and that's what keeps society functioning. Now, this system isn't perfect; there are always going to be untrustworthy people, but most of us being trustworthy most of the time is good enough.

00:40
I wrote about this about a decade ago in a book called "Liars and Outliers," and I wrote about four systems for enabling trust: our innate morals, concern about our reputation, the laws we live under, and security technologies. I wrote about how the first two are more informal than the last two, and how the last two scale better: they allow more complex and larger societies, and they're the ones that enable cooperation among strangers.

01:04
What I didn't appreciate is how different the first two and the last two were. Morals and reputation are person-to-person. They're based on human connection, mutual understanding, vulnerability, respect, integrity, generosity, all these human things, and that's what underpins interpersonal trust. Laws and security technologies are systems of trust that force us to act trustworthy, and they're the basis of social trust.

01:30
So taxi driver used to be one of the country's most dangerous professions, and Uber changed that. I don't know my Uber driver, but the rules and the technology let us both be confident that neither one of us will cheat or attack the other: we're both under constant surveillance, and we're competing for star rankings.

01:48
The critical point here is that social trust scales better. You used to need a personal relationship with a banker to get a loan; now it's all done algorithmically, and you have a lot more to choose from. And that scale is vital. In today's society we regularly trust, or not, governments, corporations, brands, organizations, groups. It's not so much that I trusted the pilot the last time I flew somewhere; instead, I trusted Delta Airlines to put well-trained and well-rested pilots in cockpits on schedule. I don't really trust the cooks and the waitstaff at a restaurant but the system of health codes they work under, and I couldn't even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.

02:42
Think of that restaurant again. Imagine it's a fast-food restaurant that employs teenagers. The food is almost certainly safe, probably safer than in high-end restaurants, because the corporate systems of reliability and predictability guide those people's every behavior. And that's the difference: you can ask a friend to deliver a package across town, or you can pay the post office to do the same thing.

03:04
The former is based on interpersonal trust, rooted in morals and reputation: I know my friend and how reliable they are. The second is a service made possible by social trust, and to the extent that it is reliable and predictable, it's primarily based on laws and technologies. Both of those will get my package delivered, but only the second can become the global package delivery system that is FedEx.

03:28
And because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability: social trust. But because we use the same word for both, we regularly confuse them, and when we do that we're making a category error. We do it all the time, with governments, with organizations, and with corporations. We might think of them as friends when they're actually services, and both language and the laws make this an easy category error to make. We imagine they're friends, but they're not.

04:05
Corporations are not capable of having that kind of relationship, and we are about to make that same category error with AI: we're going to think of them as friends when they're not.

04:15
A lot has been written about AI as existential risk, the worry that they will have a goal and will harm humans in the process of achieving it. You've probably read about the paperclip maximizer, kind of a weird science-fiction fear. Ted Chiang writes about it: instead of solving all of humanity's problems, or wandering off proving mathematical theorems, the AI single-mindedly pursues the goal of maximizing production. And Chiang points out that this is every corporation's business plan, and that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes us one step further: he calls corporations "slow AIs," profit-maximizing machines. And near-term AI will largely be controlled by corporations, which will use them towards that profit-maximizing goal.

05:03
They won't be our friends. At best they'll be useful services; more likely they'll spy on us and try to manipulate us. This is nothing new: surveillance is the business model of the internet, and manipulation is the other business model of the internet. We use all of these services as if they are agents working on our behalf, when in fact they are double agents, also secretly working for their corporate owners. We trust them, but they're not trustworthy. They're not our friends; they're services.

05:31
And it's going to be no different with AI, but the results will be much worse, for two reasons. The first is that these AI systems will be more relational. We'll be conversing with them using natural language, and as such we will naturally ascribe humanlike characteristics to them, and this relational nature will make it easier for those double agents to do their work.

05:54
So did your chatbot recommend a particular airline or hotel because it's truly the best deal given your particular set of needs, or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company's position, or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.

06:19
The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant, which is what we're talking about here: acting as an advocate for you, as a butler for you, as your agent to others. And this will require an intimacy greater than your search engine, your email provider, your cloud storage system, or your phone. You're going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf. Taken to its extreme, it'll help you in many ways. It can notice your moods and know what to suggest; it can anticipate your needs and work to satisfy them. It'll be your therapist, your life coach, your relationship counselor. You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it's a robot, it'll look humanoid, or at least like an animal. It will interact with the whole of your existence, just like another person would.

07:27
And the natural language interface is critical here. We are primed to think of others who speak our language as people, and we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a theory of mind about any AI we talk with. Or, more specifically, we tend to assume that something's implementation is the same as its interface; that is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we're people through and through. A government is systemic and bureaucratic on the inside; you're not going to mistake it for a person when you interact with it.

08:15
But this is the category error we make with corporations: we sometimes mistake the organization for its spokesperson. Now, AI has a fully relational interface. It talks like a person, but it has an equally fully systemic implementation, like a corporation, but much, much more so. There are no people in there. The implementation and interface are more divergent than anything we've encountered to date, by a lot.

08:41
And you will want to trust it. It'll use your mannerisms and your cultural references. It'll have a convincing voice, a confident tone, an authoritative manner. Its personality will be optimized to exactly what you like and what you respond to. It will act trustworthy, but it will not be trustworthy. We won't know how they're trained. We won't know their secret instructions. We won't know their biases, either accidental or deliberate. We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

09:18
And I think it's no accident that these corporate AIs have a human-like interface. There's nothing inevitable about that; it's a design choice. They could be designed to be less personal, less human-like, more obviously a service, like a search engine. When ChatGPT types out its answer, that's making you think something is in there typing. The companies want you to make the friend/service category error, and they will exploit your mistaking it for a friend. And you might not have any choice but to use it.

09:52
Because there's something else we want to talk about here when it comes to trust, and that's power. Sometimes we have no choice but to trust someone or something because they are powerful. We're forced to trust the local police. We're forced to trust some corporations, because there are no viable alternatives. Or, to be more precise, we have no choice but to entrust ourselves to them. We will be in the same position with AI: we will have no choice but to entrust ourselves to its decision-making. And the friend/service confusion will help mask this power differential. We'll forget how powerful the corporation behind the AI is, because we'll be fixated on the person we think the AI is.

10:35
Okay, this is a long-winded way of saying that we need trustworthy AI: AI whose behavior is understood, whose training is understood, whose biases are understood, whose goals are understood. And the market will not provide this on its own. Corporations are profit maximizers, and I think the incentives of surveillance capitalism are just too much to resist. It is, in the end, government that provides the underlying mechanisms for the social trust essential to society. Think about contract law, or property law, or personal safety laws, or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical. The more that you can trust that your social interactions are reliable and predictable, the more you can ignore the details.

11:22
And government can do this with AI. I want AI transparency laws: when it's used, how it's used, what biases it has. I want laws regulating AI and robotic safety: when it's permitted to affect the world. I want laws that enforce the trustworthiness of AI, which means the ability to recognize when those laws are being broken, and penalties sufficiently large to incent trustworthy behavior.

11:49
A lot of countries are contemplating AI safety and security laws; the EU is almost there. But I think, largely, they're making a mistake: they try to regulate the AIs and not the humans behind them. AIs are not people. They don't have agency. They're built by and trained by people, mostly corporations. I want AI regulations to place restrictions on those people and those corporations.

12:17
And we need one final thing: public AI models. I want fundamental models built by academia, or nonprofit groups, or government itself, that can be owned and run by individuals.

12:29
In the last question session this came up: the term "public model" is thrown around a lot, and I want to detail what I mean. It's not a corporate model that the public is free to use. It's not a corporate model that the government has licensed. It's not even an open-source model. It's a public model, built by the public for the public, with political accountability, not just market accountability. Openness and transparency, paired with responsiveness to public demands. Available to anyone to build on top of, which means universal access, and a foundation for a free market in AI innovations. This would be a counterbalance to corporate-owned AI.

13:10
So I don't think we can ever make AIs into our friends, but we can make them trustworthy services: agents, and not double agents. But only if government mandates it. We can put limits on surveillance capitalism, but only if government mandates it. And I think it's well within government's power to do this, and more importantly, it is essential for government to do this, because the point of government is to create social trust. To the extent the government does this, it succeeds; to the extent the government doesn't, it fails.

13:44
And I know this is going to be hard. Today's governments have a lot of trouble effectively regulating slow AIs, corporations. Why should we expect them to be able to regulate fast AIs? But they have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability, and that is how we can create the social trust that society needs to thrive in this AI age. So thank you.

14:32
Thank you, Bruce, that's awesome. I didn't get the mute off in time, so you could hear all the applause in the room. Thank you, thank you, thank you. There we go, now the mute's off. Really appreciate you coming in and sharing that with us, Bruce. Thank you so much.


Related Tags

Interpersonal Trust, Social Trust, AI Ethics, Technology Impact, Corporate Surveillance, Government Regulation, Public Accountability, Digital Privacy, Trust Mechanisms, Social Dynamics