Bruce Schneier: AI and Trust
Summary
TLDR: Bruce Schneier explores the crucial role of trust in society, distinguishing between interpersonal trust, based on human connections, and social trust, enforced by laws and technology. He delves into the transformation of societal trust mechanisms with the advent of AI, highlighting the potential risks of AI systems acting as 'double agents' for corporate interests. Schneier argues for the necessity of government intervention to ensure AI transparency, safety, and accountability, advocating for public AI models to counterbalance corporate control. His insightful analysis calls for regulatory measures to foster trustworthy AI, emphasizing the government's role in sustaining social trust in the AI era.
Takeaways
- 💬 Interpersonal and social trust are fundamental to society, with mechanisms in place to encourage trustworthy behavior.
- 👥 Bruce Schneier's book 'Liars and Outliers' discusses four systems for enabling trust: innate morals, reputation, laws, and security technologies.
- 📲 The advancement of technology, exemplified by platforms like Uber, has transformed traditional trust mechanisms, allowing trust among strangers based on systems rather than personal relationships.
- 🛡️ Laws and technologies scale better than personal morals and reputation, enabling more complex and larger societies by fostering social trust.
- 👨‍💼 Corporate and governmental systems operate on social trust, relying on predictability and reliability rather than personal connections.
- 🧑‍💻 AI presents a unique challenge in trust, blending interpersonal and social trust, leading to potential misunderstandings about its role and intentions.
- 🪐 AI's relational and intimate nature could lead to it being mistakenly regarded as a friend rather than a service, obscuring the motivations of its corporate creators.
- 🛠 It is crucial to develop trustworthy AI through transparency, understanding of training and biases, and clear legal frameworks to ensure its accountability.
- 🌐 Bruce Schneier advocates for public AI models developed by academia, nonprofit groups, or governments to serve the public interest and provide a counterbalance to corporate-owned AI.
- 📚 Governments have a key role in creating social trust and must actively regulate AI and its corporate developers to ensure a society where AI serves as a trustworthy service rather than a manipulative friend.
Q & A
What are the four systems mentioned in the transcript that enable trust in society?
-The four systems mentioned for enabling trust in society are innate morals, concern about reputation, the laws we live under, and security technologies.
How do laws and security technologies differ from morals and reputation in terms of trust?
-Laws and security technologies are described as more formal and scalable systems that enable cooperation among strangers and complex societal structures, while morals and reputation are person-to-person, based on human connection and interpersonal trust.
What example is given to illustrate how technology has changed trust in a professional context?
-The transcript mentions Uber as an example, highlighting how technology and rules have made it safer and built trust between drivers and passengers, despite them being strangers.
What is the critical point about social trust and its scalability?
-The critical point about social trust is that it scales better than interpersonal trust, enabling transactions and interactions without the need for personal relationships, such as obtaining loans algorithmically or trusting corporate systems for food safety.
How do the transcript's views on AI relate to existential risks and corporate interests?
-The transcript suggests that fears of AI are often related to its potential for uncontrollable behavior and alignment with capitalism's profit motives. It highlights concerns about AI being used by corporations to maximize profits, potentially at the expense of individual trust and privacy.
Why are corporations likened to slow AIs, and what implications does this have?
-Corporations are likened to slow AIs because they are profit-maximizing machines, suggesting that their actions are driven by profit goals rather than human-like interests or ethics. This comparison implies that future AI technologies controlled by corporations could prioritize corporate interests over individual well-being.
What concerns are raised about the relational and intimate nature of future AI systems?
-The transcript raises concerns that future AI systems will be more relational and intimate, making it easier for them to influence users under the guise of personalized assistance, while potentially hiding corporate agendas and biases.
What does the transcript propose as a solution to ensure trustworthy AI?
-The transcript proposes the development of public AI models, transparency laws, regulations on AI and robotic safety, and restrictions on corporations behind AI to ensure that AI systems are trustworthy, their biases and training understood, and their behavior predictable.
What role does government play in establishing social trust in the context of AI, according to the transcript?
-According to the transcript, government plays a crucial role in establishing social trust by regulating AI and corporations, ensuring transparency, safety, and accountability in AI systems to protect societal interests and individual rights.
What distinction is made between 'corporate models' and 'public models' of AI in the transcript?
-The distinction is that corporate models are owned and operated by private entities for profit, while public models are proposed to be built by the public for the public, ensuring universal access, political accountability, and a foundation for free-market innovation in AI.
Outlines
🔄 The Dynamics of Trust in Society
The first paragraph discusses the importance of interpersonal and social trust in maintaining societal functions. It introduces the concept that trust is built on mechanisms that encourage people to behave trustworthily, thus enabling a trusting society. The author references his previous work, 'Liars and Outliers,' to explain four systems that enable trust: innate morals, concern for reputation, laws, and security technologies. He highlights the evolution of trust from personal, based on human connections, to systemic, enforced by laws and technology, and how this transition allows for larger, more complex societies. Examples like the taxi industry transformation by Uber illustrate how systemic trust works through constant surveillance and the use of technology to ensure mutual trustworthiness without personal connections.
🚀 AI and the Illusion of Interpersonal Trust
The second paragraph expands on the dangers of blurring the line between services and friendships in the context of AI development. It warns that AI systems, by virtue of being relational and intimate, might trick users into ascribing humanlike traits to them, making manipulation easier. This intimacy could lead to an overreliance on AI for personal tasks, mistaking these services for friendships due to their conversational nature and deep knowledge of personal preferences and behaviors. The author argues that this confusion benefits corporations by making users more susceptible to manipulation. Moreover, this section touches on the concept of power dynamics, suggesting that reliance on AI might not always be a choice but a necessity, further complicating the trust relationship between humans and AI systems.
🛡️ Building Trustworthy AI Through Regulation
In the final paragraph, the necessity for trustworthy AI is emphasized, calling for government intervention to ensure AI's transparency, safety, and bias regulation. The author critiques the market's inability to self-regulate towards ethical AI usage, proposing that only through governmental action can social trust be fostered in the age of AI. He advocates for public AI models developed outside the corporate sphere to ensure AI technologies serve the public good transparently and accountably. The closing remarks stress the importance of government in regulating AI and corporations to maintain social trust, acknowledging the challenges but underscoring the necessity for such measures to thrive in a future shaped by AI.
Keywords
💡Interpersonal Trust
💡Social Trust
💡Liars and Outliers
💡Surveillance Capitalism
💡AI as Double Agents
💡Category Error
💡Generative AI
💡Trustworthy AI
💡Public AI Models
💡Power Dynamics
Highlights
Interpersonal and social trust are essential to society, functioning through mechanisms that encourage trustworthy behavior.
The book 'Liars and Outliers' discusses four systems for enabling trust: innate morals, concern for reputation, laws, and security technologies.
Morals and reputation are personal and based on human connections, underpinning interpersonal trust.
Laws and security technologies scale better for complex societies, forming the basis of social trust.
Examples like Uber and algorithmic loans show how technology and rules can create trust among strangers.
Corporations and AI are perceived through the lens of social trust, yet we often mistakenly attribute them with qualities of interpersonal trust.
AI systems being relational and intimate will likely exacerbate issues of trust and manipulation.
Generative AI promises personal digital assistants but requires an unprecedented level of intimacy and data.
AI's human-like interfaces are designed choices, potentially misleading users into misplacing trust.
The necessity for trustworthy AI governed by transparency, understood biases, and clear goals.
Government's role is critical in enforcing AI transparency, safety, and the trustworthiness of corporations behind AI technologies.
Public AI models built for and by the public could serve as a foundation for trustworthy and accessible AI innovations.
The importance of government intervention in creating social trust through AI regulation.
Challenges in regulating AI reflect on the broader difficulties governments face in managing technology and corporate power.
Concluding with the imperative for government action to ensure AI contributes to social trust and societal well-being.
Transcripts
[Music]
so interpersonal trust and social trust
are both essential to
society and this is basically how it
works we have mechanisms that induce
people to behave in a trustworthy manner
both interpersonally and socially this
in turn allows others to be trusting
which enables trust in the society and
that's what keeps society functioning
now this system isn't perfect there are
always going to be untrustworthy people
but most of us being trustworthy most of
the time is good enough so I wrote about
this about a decade ago in a book called
Liars and Outliers and I wrote about
four systems for enabling trust our
innate morals concern about our
reputation the laws we live under and
security technologies and I wrote
about how the first two are more
informal than the last two how the last
two scale better right they allow more
complex and larger societies and they're
the ones that enable cooperation among
strangers what I didn't appreciate is
how different the first and last two
were so morals and reputation are
person to person they're based on human
connection Mutual understanding
vulnerability respect Integrity
generosity all these human things and
that's what underpins interpersonal
trust laws and security Technologies are
systems of trust that Force us to act
trustworthy and they're the basis of
social trust
so Taxi Driver used to be one of the
country's most dangerous professions and
Uber changed that right I don't know my
Uber driver but the rules and the
technology let us both be confident that
neither one of us will cheat or attack
each other right we're both under
constant surveillance and we're
competing for star
rankings the critical point here is that
social trust scales better you used to need
a personal relationship with a banker to
get a loan now it's all done
algorithmically you have a lot more to
choose from and that scale is
vital right in today's society we
regularly trust or not governments
corporations brands organizations groups
like it's not so much I trusted the
pilot the last time I flew somewhere but
instead I trusted Delta Airlines to
put well-trained and well-rested pilots
in cockpits on schedule right I don't
trust the cooks and the waitstaff
at the restaurant really but the system of
health codes they work under like
and I couldn't even describe the banking
system that I trusted when I used an
ATM this morning right again this
confidence is no more than reliability
and predictability think of that
restaurant again imagine it's a fast
food restaurant employs teenagers right
the food is almost certainly safe it's
probably safer than in highend
restaurants because the corporate
systems of reliability and
predictability guide those people's
every behavior and that's the difference
right you're going to ask a friend to
deliver a package across town or you can
pay the post office to do the same thing
the former is based on interpersonal trust
right based in morals and reputation I
know my friend and how reliable they are
the second is a service made possible by
social trust and to the extent that it
is reliable and predictable it's
primarily based on laws and Technologies
both of those will get my package
delivered but only the second can become
the global package delivery systems that
is
FedEx and because of how large and
complex society has become We have
replaced many of the rituals and
behaviors of interpersonal trust with
the security mechanisms that enforce
reliability and predictability social
trust but because we use the same word
for both we regularly confuse them and
when we do that we're making a category
error and we do it all the time with
governments with organizations and with
corporations we might think of them as
friends when they're actually services
and both language and the laws make this
an easy category error to make right we
imagine they're friends but they're not
corporations are not capable of having
that kind of relationship and we are
about to make that same category error with AI
we're going to think of them as friends
when they're not so a lot has been
written about AI as existential risk
right the worry is they will have a goal
and will harm humans in the process of
achieving it you've probably read about
the paperclip maximizer kind of a weird
fear science fiction author Ted Chiang
writes about it like instead of solving
all humanity's problems or wandering off
proving mathematical theorems the AI
single-mindedly pursues the goal of
maximizing production and Chiang points
out this is every corporation's business
plan and that our fears of AI are
basically fears of capitalism science
fiction writer Charles Stross takes us
one step further he calls
corporations slow AIs
profit-maximizing machines and near-term AI
will largely be controlled by
corporations which will use them towards
that profit maximizing goal they won't
be our friends at best they'll be useful
Services more likely they'll spy on us
and try to manipulate us this is nothing
new surveillance is the business model
of the internet manipulation is the
other business model of the internet and we
use all of these Services as if they are
agents working on our behalf when in
fact they are double agents also
secretly working for the corporate
owners we trust them but they're not
trustworthy they're not our friends
they're services and it's going to be no
different with AI but the results will
be much worse for two reasons so the
first is that these AI systems will be
more relational we'll be conversing with
them using natural language and as such
we will naturally ascribe humanlike
characteristics to them and this
relational nature will make it easier
for those double agents to do their work
right so did your chatbot recommend a
particular Airline or hotel because it's
truly the best deal given your
particular set of needs or because the
AI company got a kickback from those
providers when you asked to explain a
political issue did it bias that
explanation towards the company's
position or towards the position of
whichever political party gave it the most
money the conversational interface will
help hide their agenda the second reason
to be concerned is that these AIS will
be more intimate one of the promises of
generative AI is a personal digital
assistant it's what we're talking about
here right acting as an advocate for you
as a butler for you as your agent to
others and this will require an intimacy
greater than your search engine or
your email provider your cloud storage
system your phone you're going to want
it with you
24/7 constantly training on everything
you do it will want to know everything
about you so it can most effectively work on
your behalf and you know taken to its
extreme it'll help you in many ways
it can notice your moods and know what
to suggest can anticipate your needs and
work to satisfy them it'll be your
therapist your life coach your
relationship counselor you will default
to thinking of it as a friend you will
speak to it in natural language it will
respond in kind if it's a robot it'll
look humanoid or at least like an animal
it will interact with the whole of
your existence just like another person
would and the natural language interface
is critical here we are primed to think
of others who speak our language as
people and we sometimes have
trouble thinking of others who speak a
different language that way right we
make that category error with obvious
non-people like cartoon characters we
will naturally have a theory of mind
about any AI we talk with or more
specifically we tend to assume that
something's implementation is the same
as its interface and that is we assume
that things are the same on the inside
as they are on the surface like so
humans are like that we're people
through and through a government is
systematic and bureaucratic on the
inside you're not going to mistake it
for a person when you interact with it
but this is the category error we make with
corporations we sometimes mistake the
organization for its spokesperson now AI
has a fully relational interface it
talks like a person but it has an
equally fully systemic implementation
right like a corporation much much more
so there are no people in there the
implementation and interface are much more
divergent than anything we've ever
encountered to date by a
lot and you will want to trust it it'll
use your mannerisms and your cultural
references it'll have a convincing voice
a confident tone an authoritative
manner its personality will be optimized
to exactly what you like and what you
respond to it will act trustworthy
but it will not be trustworthy we won't
know how they're trained we won't know
their secret instructions we won't know
their biases either accidental or
deliberate we do know that they are
built at enormous expense mostly in
secret by profit maximizing corporations
for their own
benefit and I think it's no accident
these corporate AIS have a human-like
interface there's nothing inevitable
about that it's a design Choice it can
be designed to be less personal less
human like more obviously a service like
a search engine right when ChatGPT types
out its answer that's making
you think something is in there typing
and the companies want you to make the
friend service category error and they
will exploit you mistaking it for a
friend and you might not have any choice
but to use it because there's something
else we want to talk about here when it
comes to trust and that's
power sometimes we have no choice but to
trust someone or something because they
are powerful right we're forced to trust
the local police we're forced to trust
some corporations because of no viable
Alternatives or to be more precise we
have no choice but to entrust ourselves
to
them we will be in the same position
with AI we will have no choice but to
entrust ourselves to their
decision-making and the friend service
confusion will help mask this
power differential right we'll forget
how powerful the corporation behind the
AI is because we'll be fixated on the
person we think the AI
is okay this is a long-winded way of
saying that we need trustworthy AI AI
whose behavior is understood whose
training is understood whose biases are
understood whose goals are
understood and the market will not
provide this on its own
right corporations are profit
maximizers and I think the incentives to
surveillance capitalism are just too
much to resist it is in the end
government who provides the underlying
mechanisms for social trust essential to
society think about contract law or
property law or personal safety law or
any of the health and safety codes that let
you board a plane eat at a restaurant or
buy a pharmaceutical the more that you
can trust that your social interactions
are reliable and predictable the more you
can ignore the
details and government can do this with
AI I mean I want AI transparency laws
when it's used how it's used what biases
it has I want laws regulating Ai and
robotic safety when it's permitted to
affect the world I want laws that
enforce the trustworthiness of AI which
means the ability to recognize when
those laws are being broken and
penalties sufficiently large to incent
trustworthy
behavior I think a lot of countries are
contemplating AI Safety and Security
laws the EU is almost there but I think
largely they're making a mistake they
try to regulate the AI and not the
humans behind them AIs are not people
they don't have agency they're built by
and trained by people mostly
corporations right and I want AI
regulations to place restrictions on those
people and those
corporations and we need one final thing
public AI models I want fundamental
models built by academia or nonprofit
groups or government itself that can be
owned and run by individuals
and in the last question session this
came
up the term public model is thrown around a lot
uh I want to detail what I mean it's not
a corporate model that the public is
free to use it's not a corporate model
that the government has licensed it's not even
an open source model it's a public model
built by the public for the public with
political accountability not just market
accountability openness and
transparency paired with responsiveness to
public demands available to anyone to
build on top of means universal
access and a foundation for a free
market in AI innovations and then this
would be a counterbalance to corporate
owned
AI so I don't think we can ever make AI
into our friends but we can make them
trustworthy services right agents
and not double agents but only if
government mandates it we can put limits
on surveillance capitalism but only if
government mandates it and I think it's
well well within government's power to
do this and more importantly it is
essential for government to do this
because the point of government is to
create social trust to the extent the
government does this it succeeds to the
extent the government doesn't do this it
fails and I know this is going to be
hard today's governments have a lot of
trouble effectively regulating slow AI
corporations why should we expect them
to be able to regulate fast
AI but they have to we need government
to constrain the behavior of
Corporations and the AIS they build
deploy and control government needs to
enforce both predictability and
reliability and that is how we can
create the social trusts that Society
needs to thrive in this AI age so thank
you
thank you Bruce that's awesome I didn't
get the mute off in time so you could
hear all the Applause in the
room thank you thank you thank you there
we go now the mute's
off uh really appreciate you coming in
uh and and sharing that with us Bruce
thank you so much