Maggie Little: Data & Digital Ethics
Summary
TL;DR: In this talk, Senior Research Scholar Maggie Little explores the ethical implications of the data and digital revolution, which she compares to the agricultural revolution of 10,000 years ago. She discusses the massive amounts of data generated through new technologies and the ethical concerns surrounding privacy, surveillance, and data usage. Little also addresses the revolution in data analytics, including AI and machine learning, and the potential for bias and mistrust. She emphasizes the importance of designing, deploying, and governing these technologies responsibly to preserve privacy, advance justice, and maintain public trust.
Takeaways
- 🌐 The data and digital revolution is compared to the monumental shift from hunter-gatherers to agriculture 10,000 years ago, emphasizing its transformative impact on society.
- 📈 The revolution is powered by two main components: the data revolution and advancements in data analytics, particularly artificial intelligence (AI) and machine learning.
- 🔍 The data revolution involves massive amounts of data gathered through new technologies like drones with facial recognition and smart cities, as well as organic data from consumer digital activities.
- 🤖 AI and machine learning allow for the discovery of hidden patterns in data that traditional statistics might miss, offering new insights but also posing ethical challenges.
- 🔒 Privacy rights are a central ethical concern in the data revolution, with questions around the collection, storage, and secondary use of personal data.
- 🚨 Informational risks, such as the potential for data breaches and de-anonymization, are significant ethical issues that must be managed when handling sensitive data.
- 🏥 The potential for AI tools to improve public health, personalized medicine, and medical imaging is vast, but they must be designed and deployed with careful consideration of ethical implications.
- 🚫 The misuse of AI tools, including mission creep and commercialization, can lead to ethical concerns, especially if the tools are used for purposes beyond their intended design.
- 📊 Ethical biases in AI can occur when algorithms unintentionally reinforce existing prejudices or unfairly impact vulnerable populations, highlighting the need for careful algorithm design and validation.
- 🌟 Public trust is crucial for the successful deployment of AI in healthcare and other sectors, and transparency, communication, and democratic accountability are key to maintaining this trust.
Q & A
What is the significance of the data or digital revolution compared to historical societal changes?
-The data or digital revolution is considered as momentous for human society as the agricultural revolution 10,000 years ago, which saw humans shift from being hunter-gatherers to settled agriculture.
What are the two main components of the current revolution in the 21st century?
-The two main components are the data revolution and the revolution in data analytics, both powered by the massive increase in computational power achieved in the last two decades.
How has the way we gather data changed due to new technologies?
-New technologies like drone planes with facial recognition and smart cities with sensors have enabled the gathering of massive amounts of data, which is part of the data revolution.
What is meant by 'organic data' in the context of the data revolution?
-Organic data refers to novel forms of data that are generated from everyday activities, such as social media posts, which can be scraped and analyzed for insights.
What is the role of artificial intelligence and machine learning in the data analytics revolution?
-Artificial intelligence and machine learning play a crucial role by enabling the analysis of large datasets to find patterns that traditional statistics might miss, thus enhancing our ability to extract information from data.
What are the potential benefits of leveraging massive amounts of data in public health?
-Leveraging massive amounts of data can aid in public health by generating hypotheses, tracking disease hot spots, and improving personalized medicine and medical imaging analysis.
What are the ethical concerns regarding the collection and storage of data?
-Ethical concerns include privacy rights, the question of who owns the data, the risk of data being used beyond its original intent, and the potential for misuse or commercialization of data.
Why is it important to consider informational risks when dealing with sensitive data?
-Informational risks are important because they involve the potential for sensitive information to be identified or inferred, even from anonymized data, through data aggregation or breaches, which can lead to privacy violations.
What is the 'mosaic effect' in the context of data ethics?
-The mosaic effect refers to the risk of de-anonymization where combining non-sensitive data from one database with sensitive data from another can lead to the identification of individuals and their sensitive information.
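The mosaic effect can be illustrated with a toy sketch (hypothetical, made-up records, not any dataset from the talk): two tables that have each been "anonymized" on their own, but that share quasi-identifiers such as zip code and birth year, can be joined to re-identify individuals.

```python
import pandas as pd

# Hypothetical, invented records for illustration only.
# A public, non-sensitive table where names are known (e.g. a voter roll).
public = pd.DataFrame({
    "name":       ["alice", "bob", "carol"],
    "zip_code":   ["20007", "20007", "20010"],
    "birth_year": [1985, 1990, 1985],
})

# An "anonymized" health table: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip_code":   ["20007", "20010"],
    "birth_year": [1990, 1985],
    "diagnosis":  ["pneumonia", "asthma"],
})

# Joining on the shared quasi-identifiers de-anonymizes the patients:
# the merge links bob -> pneumonia and carol -> asthma.
linked = health.merge(public, on=["zip_code", "birth_year"])
print(linked[["name", "diagnosis"]])
```

This is why an informational risk audit has to consider not just the columns in one database but everything that database could plausibly be joined against.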
How can the deployment of AI tools in healthcare be ethically problematic?
-The deployment of AI tools can be ethically problematic due to issues like inaccurate predictions, biases in algorithms that disproportionately affect vulnerable populations, and the potential for misuse or mission creep where the tool is used for purposes beyond its original intent.
Why is public trust important when it comes to the use of AI in public health?
-Public trust is crucial because without it, people may be less likely to engage with health services or follow public health recommendations, which can have negative impacts on individual and community health.
Outlines
🌐 The Digital Revolution and Its Ethical Challenges
Maggie Little introduces the concept of the data or digital revolution, comparing its significance to the agricultural revolution of 10,000 years ago. She outlines two main components: the data revolution and the revolution in data analytics, both powered by the massive increase in computational power. The data revolution involves massive data collection, enabled by new technologies like drones and smart cities, as well as the generation of organic data from consumer digital activities. The analytics revolution encompasses advancements in artificial intelligence and machine learning, which allow for the discovery of hidden patterns in data. Little emphasizes the potential benefits of these technologies for public health and personalized medicine but also warns of the ethical perils if they are not used responsibly.
🔒 Data Ethics: Privacy and Informational Risks
This paragraph delves into data ethics, particularly privacy rights and the collection, storage, and use of data. Little discusses the challenges posed by new surveillance methods and the collection of data without consent. She also addresses the issue of secondary data use, where data initially collected for one purpose may be used for another, raising questions about who should give permission for such use. The concept of informational risks is introduced, highlighting the potential for sensitive information to be inferred or de-anonymized when data is combined from different sources. The importance of considering these risks before using data, even for beneficial purposes, is emphasized.
🛡️ Data Governance and Mission Creep
The focus shifts to the risks associated with data governance, including the potential for data breaches and the misuse of data through 'mission creep,' where data initially collected for one purpose is used for other purposes over time. Little illustrates this with the example of health records and phone metadata, which, when combined, can reveal sensitive information about individuals. The paragraph also touches on the risks of commercializing databases and the dangers of regime change, where data collected under one government may be misused by a subsequent, potentially less ethical government. The importance of robust data governance policies is underscored to prevent such misuse.
🤖 AI Ethics: Design, Deployment, and Accountability
The discussion turns to the ethics of AI, specifically in the context of decision support tools. Little highlights the importance of considering accuracy, potential biases, and the broader implications of deploying AI tools. She gives an example of an AI tool developed to predict when pneumonia patients could be safely discharged, which initially showed accuracy issues due to biases in the training data. The paragraph emphasizes the need for clear verification and validation of AI tools beyond their initial training and validation datasets to ensure they perform well in real-world scenarios and do not perpetuate harmful biases.
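The pneumonia lesson can be simulated in a few lines (a hypothetical sketch with invented numbers, not the Pittsburgh data): when a hospital policy routes asthma patients straight to the ICU, the recorded outcomes make asthma look protective, and a pattern learned from that data becomes dangerous at a hospital without the policy.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulated patients

def simulate(fast_track_asthma_to_icu: bool):
    """Return (has_asthma, bad_outcome) arrays under a given ICU policy."""
    asthma = rng.random(N) < 0.3
    risk = 0.20 + 0.15 * asthma          # asthma genuinely raises risk
    if fast_track_asthma_to_icu:
        risk = risk - 0.25 * asthma      # ICU care more than offsets it
    bad = rng.random(N) < risk
    return asthma, bad

# Training hospital: asthma patients are fast-tracked to the ICU, so in
# the recorded DATA asthma correlates with better outcomes.
asthma, bad = simulate(True)
assert bad[asthma].mean() < bad[~asthma].mean()   # looks "safer to discharge"

# External hospital without that policy: the learned pattern is dangerous.
asthma, bad = simulate(False)
assert bad[asthma].mean() > bad[~asthma].mean()   # asthma is in fact riskier
```

The confound lives in the whole data-generating process, which is why validating on held-out data from the same hospital system is not enough and verification in sufficiently general external contexts matters.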
🏥 Bias in AI and Its Ethical Implications
This section explores the ethical concerns related to biased algorithms in AI, particularly those that can disproportionately affect vulnerable populations. Little cites a study that found an algorithm was less likely to refer black patients for special medical care compared to white patients, even when they were equally sick. This example illustrates how AI can inadvertently perpetuate existing biases if not carefully designed and monitored. The paragraph stresses the need for algorithms to be transparent, fair, and free from discriminatory impacts, and it calls for a critical examination of the data and decisions that inform AI systems.
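A minimal simulation (invented numbers, not the actual study data) shows how ranking on a proxy reproduces the disparity: two groups are equally sick on average, but one spends less on care for the same level of need, so an algorithm that refers patients by spending under-refers that group.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # simulated patients

group_b = rng.random(N) < 0.5          # membership in the disadvantaged group
true_need = rng.normal(5.0, 1.0, N)    # both groups equally sick on average

# Group B faces access barriers, so it spends less for the SAME level of need.
spending = true_need * np.where(group_b, 0.6, 1.0)

# The algorithm refers the top 10% by SPENDING (the proxy), not by need.
referred = spending >= np.quantile(spending, 0.90)

rate_a = referred[~group_b].mean()
rate_b = referred[group_b].mean()
assert rate_a > rate_b   # equal need, yet far fewer referrals for group B
```

The remedy is to audit the label itself: check whether the proxy means the same thing across subpopulations before ranking anyone on it.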
🌟 Trust, Transparency, and the Future of AI
In the final paragraph, Little discusses the importance of public trust in AI and the risks to that trust if AI tools are not well understood or are perceived as infallible. She notes that AI's complexity and the difficulty in explaining its recommendations can lead to mistrust, especially if there are concerns about accuracy or fairness. The paragraph concludes by emphasizing the need for transparency, democratic accountability, and robust governance structures when deploying AI tools. Little suggests that these practices are essential to ensure that AI is used responsibly and ethically, preserving trust and benefiting society.
Keywords
💡Data Revolution
💡Digital Revolution
💡Artificial Intelligence (AI)
💡Machine Learning
💡Privacy Rights
💡Data Breaches
💡Informational Risks
💡Ethical Bias
💡Data Governance
💡Public Trust
Highlights
The data or digital revolution is compared to the monumental shift from hunter-gatherer to sedentary agriculture societies around 10,000 years ago.
The revolution is powered by a massive increase in computational power, leading to two distinct revolutions: the data revolution and the revolution in data analytics.
New technologies for data gathering, such as drones with facial recognition and smart city sensors, contribute to the data revolution.
Data is now being generated in massive and novel ways, including consumer digital data from mobile phones and other devices.
Data analytics have evolved with artificial intelligence and machine learning, allowing for the discovery of hidden patterns within data.
AI tools can support decision-making, though they do not replace human judgment and must be carefully validated.
Ethical considerations in data ethics include privacy rights, the ethics of data gathering, and the potential misuse of data.
Informational risks are a key concern, especially when sensitive data is involved or when data can be de-anonymized.
Data breaches and poor data governance policies can lead to the misuse of data and loss of public trust.
The ethics of AI involves considerations of accuracy, potential biases, and the transparency of AI decision-making processes.
AI tools must be designed and deployed with careful attention to privacy preservation, justice, and trust.
The potential for misuse of AI tools, such as mission creep and commercialization, requires robust governance and oversight.
Public trust is crucial for the successful deployment of AI, and misunderstandings about AI can undermine this trust.
The complexity of AI algorithms can make it difficult to explain their recommendations, which is a challenge for transparency and accountability.
Democratic accountability and public communication are essential for the ethical use of AI tools in society.
Good intentions are not sufficient; the design, deployment, and governance of data and AI tools must be ethically responsible.
Transcripts
hi
my name is maggie little i'm a senior
research scholar at the kennedy
institute of ethics
and today i'm going to talk to you about
the ethics of what people sometimes call
the data or digital revolution
so this revolution has been um
analogized as
being as momentous for human society as
10 000 years bc the uh agricultural
revolution when humans moved from being
hunter gatherers to having settled
sedentary agriculture
that massive of a change so
what exactly are the components of this
current revolution in the 21st century
well
let's divide it into two really it's two
revolutions
both of them um powered by the massive
increase in computational power that has
been achieved in the last
two decades so first there's the
data revolution so we've always gathered
lots of data including on humans and
health which is the context we're
especially interested in today
we can think of bench scientists making
studies of molecules gathering data or
clinical researchers gathering data on
humans
if we're interested in social science
research we think of people doing
surveys
okay but the new forms of data
are revolutionary because for one thing
they're
massive um massive in part because
we've got new technologies for gathering
data uh think of drone planes with
facial recognition technology that are
being used in some countries
to do surveillance of who is breaking
quarantine during covid
okay um
or smart cities uh putting sensors in
all of the
light posts to see when warm bodies
go by
okay so we're we have new ways of
gathering massive amounts of data
we also have a treasure trove of data
that's being generated
every day in other contexts not about
science or information
but like the consumer digital revolution
so this
mobile phone is as long as it's on
sending out massive waves
of data it's pinging the cell tower
every few seconds telling it
where it's located so the cell phone
companies have the metadata on this
phone and have
enormous enormous amounts of it that we
could
mine to see what we could learn from it
so
that means that much of the data is also
novel it's sometimes called organic data
so
we're finding out data by scraping
people's twitter posts and seeing
when people get especially angry versus
happy with an
election result and finally the new data
revolution has lots of pooled data
so we have different databases out there
that are now being agglomerated and
made to talk with
each other for instance health records
cross-checked with
people's social media posts
we're going to talk about all of that
well next the revolutionary
question has to do with the revolution
in the way we analyze
data what are the data analytics or data
science methods we have
and here's where we're going to talk
about artificial intelligence and
machine learning
so traditional ways of extracting
information from
raw or structured data was about using
traditional statistics methods which are
very complicated and very
powerful and might even be
often these days executed by a
computer program
so using algorithms which are just fixed
sets
of rules but we now have new kinds of
algorithms called machine learning
algorithms that are basically a special
type of algorithm it's still a
fixed set of rules but they're able
to
find patterns hidden inside of data
that humans would not be able to see or
discover with traditional
statistics or maybe they could do with
traditional statistics but it would be
too expensive to bother doing
it so machine learning gets
trained on data thinks it has found a
way to
isolate patterns and you validate it
and from that you can build an ai tool
which is something like a predictive
analytic tool it might not make the
decision for you but it could be used as
decision support
and those analytics allow us in concert
with the massive data to
find new things we couldn't see before
so as with any revolution enormous
potential for good
we can now leverage enormous amounts of
data
finding wisdom inside to help with
public health it's being used in covid
right now to
generate um hypotheses for directions or
to track
hot spots of the epidemic
um it's also being used in personalized
medicine it's also being used to more
accurately in some cases
read medical imaging
better than a human can do so these
carry enormous potential
for common good but there's also
enormous potential
peril if the technology
is not designed deployed and governed
for its ethically responsible use so how
should we think about
the ethical issues involved here let's
take the two revolutions in
in turn let's start with data ethics
when you've got
this new treasure trove of data how do
you think of
the ethics of using it of
gathering it keeping it
even if you leave it on a shelf and you
don't do any analytics on it yet
just the issue about gathering it and
holding on to it what are those ethics
so the first issue in data ethics is a
very familiar one
question about privacy rights and
one way to put the issue uh
there are privacy rights having to do
with sort of whose data
is it anyway and let's look at two
different cases
so one i mentioned had to do with new
surveillance methods like
drones and smart city sensors here you
might think there are important
questions about whether it's even okay
to
gather data on passersby who can't opt
out
right you can't opt out of a drone going
overhead you can't opt out of
the street lamp collecting information
about you
except by staying in your house so no
escaping if you don't stay in your house
is it okay if it's for good intentions
for massive amounts of information to be
gathered about us or when do we talk
about the limits
of that that's really about the ethics
of surveillance
but there's also fascinating and deeply
important questions about
for data that is already gathered and
we'll assume was gathered in
ethically acceptable ways
like the cell phone tower pings of my
phone my mobile network operator
has those so that they know for one
thing how to bill me how much data am i
pulling down and the like
but it's one thing for the mobile
network operator
to use the data for their own billing
purposes let's say
and another for them to sell it or lease
it or give access to it to somebody who
wants to probe it for different use
who should give the permission for that
is it the mobile network operator
who gets to say go ahead and and
mine that data or is it the people who
own the phones this is an unsettled
question so far
but basically the key ethical issue is
you can't assume that because the data
already exists
it's okay to use it for anything this is
actually a familiar issue in medical
ethics
the ethics of secondary data use
now when we start trying to figure out
the contours of privacy rights
one of the really critical things we
need to pay most attention to is issues
about what are called informational
risks let's take a look
so when the data collected
or that's being proposed to be reused is
data about
sensitive information we know that it
has more
informational risks attached to
the data subjects that is the people that
the data is about
people sometimes say data is just a
number but there's a person behind it
when it's human data
if it's about health records for
instance obviously sensitive information
it's also
important to understand that sometimes a
database has information that isn't
intrinsically sensitive
but if it were joined with another
database that has sensitive information
we could get what sometimes called the
mosaic effect or inferential privacy
risks
that putting the two databases together
would allow me to know
or to identify who's the person
and know their sensitive information
even though in both databases
the subjects have been anonymized so
this is the
risk of de-anonymization and the bigger
the
data and the more it's aggregated with
other data sources the
bigger the risk of de-anonymization
where somebody could
actually look in with just a few
inferences figure out oh it was you
who was at that location in front of
that clinic
or at that protest in a political
context and the like so very
important for
those who want to probe data even for
very good
purposes with the best of intentions to
do a sort of informational risk audit
what are the actual risks of doing this
but then you might think those risks
don't really
exist if we just keep the database sort
of sequestered away but we all know
think about it for just one moment
more that that's not really true
so the
issue about informational risk is
that the information may
escape as it were from the
software in which the database
is encoded so how can that happen well
the one that people talk about most of
all is data breaches right we have to
worry and be stewards of data
and keep high cyber security so that
others
bad actors can't come in and steal the
data and in fact
um famous example from 2015 anthem
blue cross 78.8 million
patient records were stolen so
risks of data breach are very real but
the worry isn't just about the potential
for a breach of an outside actor coming
in
under cover of cyber night and stealing
the data
they're actually very profound risks if
we don't have good data governance
policies ensconced
in policy that
those who are meant to have access to
the database
might end up misusing it so this is
sometimes called
mission creep so imagine you're a
government and you have a
huge database that combines your
population's health records
social media uh phone metadata records
the whole nine yards
okay that is enough information to be
able to de-anonymize or identify people
in it and know all sorts of things about
them
but imagine that your government says
that's okay i only want to use it for
covid
protection efforts that's all i'm going
to do
that's great but there's a huge
tendency once we've pulled together data
sometimes
they're called data oceans it's so
valuable to you that means it's really
valuable for other purposes and it can
be
very easy to think gosh while we've got
it we could also use it for this good
purpose
and that good purpose but you might not
do as rigorous an analysis of
about those informational risks or
whether it's really an appropriate use
and huge temptations for commercializing
the databases you have
especially in resource-poor countries
and then also
think about regime change so it's one
thing for
a current leader of a government to have
that database but they pass it down
right to whoever is in control next and
depending on the government
political climate you live in that could
be really really problematic
okay let's switch over now from data
ethics
to the ethics of doing those new
analytics the ethics of
ai so here there are a few
things that it's very important to keep
in mind
when thinking about designing and
deciding to deploy
an ai tool in a given context and for
our purposes let's just
assume that we're just just talking
about decision support tools
we're not yet talking about the robot
autonomously doing its own thing
okay or someday becoming conscious we're
just talking about right now the kinds
of tools that are out there
things like predictive analytic tools
that say
we um trained machine learning
algorithms on some
super cool rich data that we couldn't
make heads or tails out of
it found ways to sort for instance like
they did this with pictures of
cats and pictures of dogs right machine
learning algorithms
figured out which parameters in the
pixels
after they gave them millions of images
could decide which is a
dog or cat and now they're pretty good
at seeing a novel a new photo saying
that's a dog or that's a cat
okay so you train it up you say
my machine learning algorithm is
giving me a really good result
then i verify it on new data to see if
it still
holds true and then i say now i've got a
predictive analytic tool let's go ahead
and use it
but all of that is a far cry from real
world accuracy
let me give an example so
uh university of pittsburgh developed an
interesting ai tool
using a set of machine learning
algorithms
they wanted to figure out a more
accurate way of
predicting when a pneumonia patient
could safely be discharged rather than
stay inpatient
doctors use their best judgment but can
we do even better than that if we can
see
patterns in massive amounts of data and
train
um an ai tool on it well they
um developed uh an ai tool it was
trained on 750 000 patients from 78
hospitals who had pneumonia and
after doing all of this work the tool
predicted more accurately in that
hospital system
which pneumonia patients could be safely
discharged and when which needed to be
inpatient predicted it better than the
doctors in that hospital
system fantastic
one hitch when they looked at the
results
they found that one of the ai tool's
glitches
it turned out was saying that patients who
had
pneumonia together with asthma
are safer to discharge than patients
with pneumonia
alone now that makes absolutely no
sense any of you who are
in the medical field know that's crazy
asthma is a complicating factor to
pneumonia and vice versa
well they look behind the scenes it
turns out that all of that training data
in that hospital system
was the result of
a specific hospital policy they used
which was
the patients who came in with pneumonia
and asthma
were front-lined to icu intensive care
well that meant those patients had very
good outcomes because it more than
compensated for the extra
risk of having both conditions
so the computer's looking at that data
and the computer says boy
having asthma and pneumonia together is
a marker for somebody that's safe to
discharge
they caught it okay so they don't use it
anymore
but it's an incredibly important ethics
lesson because there are a lot of
for-profit private vendor-driven
ai tools in health now and people
don't ask for clear verification
they don't ask have you
looked at it in contexts that are
sufficiently general outside of your
training and even validation data
and many of those private companies
regard all of that data as proprietary
so
some people have said it's a little bit
like imagining a drug company says i've
got a great new
drug i'm not going to show you any of
the data and there's no fda to review it
but just trust me it's going to work
great so some people have actually
suggested that the fda expand its
ability to regulate what are called
digital health tools
here's a second thing people need to
worry about
sometimes the inaccuracies or
what we might call statistical biases
right self-selection bad representation
pools and the like sometimes
statistical biases are especially bad
because they're ethically biased and let
me explain what i mean
so to a statistician a bias is anything
that tilts away from a fully
accurate
generalization but in ethics a
bias is an unfair
disparate impact that has a special
impact on vulnerable populations
so there's very deep worries and lots of
good work that's
begun to be done about biased algorithms
in the ethical sense
um so let me once again give an example
um so uh
there's a study in 2019
of a particular algorithm by optum which
by the way was an algorithm
that was helping to manage care for 200
million people in the us
okay this was active and out there
they found in a review of the
algorithm's deployment
that it was less likely to refer black
patients for special medical care
relative to their white counterparts so
equally sick
white versus black patient this
algorithm
uh referred the white patient to special
medical care far more than it would the
equally sick black patient
okay first important thing to point out
is
that's more than just a statistical bias
it's also an ethically infused bias we
care more when the errors are
concentrated on
suspect classifications right places of
historical oppression
and um ones dealing with vulnerable
populations which happen to be
both in the case of black americans in
the united
states right now what had happened by
the way
well um it turns out that
the algorithm had used as a proxy for
how much medical need you had
how sick how much need for special
treatment you had
how many dollars you spent on health
care
the algorithm is just a proxy if you
were a patient who over a year
spent a lot on health care that must
have meant you needed more care
if you spent less on health care you
didn't need as much
then it turned out that white patients
spent more on health care
for equal sickness than black patients
did
but of course we know the reason for
that was not because
the black patients actually needed less
care but that they had less
access to medical care or couldn't
afford it to begin with in the aggregate
and in some communities less trust of of
um
of medical care and social reticence um
so i've got a quote here from raghavan
and barocas and a great report from the
brookings institute
reminding us that algorithms by their
nature
don't question the human decisions
underlying a data set
instead they faithfully attempt to
reproduce past decisions which can lead
them to reflect the very sort of human
biases they're intended to replace
so if your data set contains unwitting
but definite bias inside of it it's
going to replicate it and hide it under
the cloak of objectivity
also with the ethics of ai we need to
remember
it's not just worries about accuracy
including inaccuracy that leads to bias
so imagine for a minute
magic wand we're talking about
predictive analytic tools decision
support ai
that's fabulously accurate in
predictions
better than humans and and we've tested
it and
not only do they have high accuracy but
the accuracy is distributed well
across subpopulations right
there are two other things that we need
to keep in mind when thinking about
deploying ai the first one is just like
we saw with data
and gathering and maintaining a
rich
and tempting database um
once you've got a predictive analytic
tool like a dashboard let's say
it's subject to misuse
again mission creep let's use it for
something else temptations of
commercialization regime change
um uh to give an example if you have a
predictive analytic that
talks about the risk for instance of
getting covid or the risk of acquiring
hiv
if even if that's very accurate in fact
especially if it's very accurate
you might worry about who's going to be
able to use that tool to make that
assessment so for instance there are
ai tools under development now to do um
facial pattern recognition for mood
including to screen people
for mental health issues now currently
not accurate enough to
be deployed certainly not in the healthcare
industry but they are being deployed
in employment recruiting and screening
so there are commercial uh
tools being sold now that say we've got
an ai
tool you've got too many people to
interview for the slots you have you
want to get the best ones and we'll do
an analysis of their facial patterns
and tell you who's likely to be a good
co-worker
or to be expensive to hire because of
mental health issues
one worry would be that they're going to
be inaccurate in certain biased ways but
also even
if they're accurate that's sensitive
health information that it's not the
employer's business to know
and finally but very importantly no
discussion of
ai is complete without talking about the
risk
of mistrust so people in public health
know and remind us all the time that
public trust
is hard won and easily lost and that
without public trust
you don't have anything so in health you
won't have people accessing
your hospitals and clinics if there's
mistrust of the covid vaccine you're
going to have people
not getting vaccinated which hurts all
of us so public trust is a
really really incredibly valuable
commodity that needs to be stewarded
carefully
but use of ai poses some real risks to
public trust it doesn't mean it can't be
overcome but they have to be attended to
one thing is that just
it's hard to understand and to
explain to people
what ai is and that it's not some sort
of scary or magical
like crystal ball it's a set of computer
algorithms that are very very
technical and are only as good as the
data you train them on
and often fail so they have to be really
highly validated
if people have misunderstandings
about what ai
is that can really undermine trust so
there are issues about almost a kind of
scientific literacy which is hard enough
for all of us to catch up
with but for large
public health applications we need to be
careful of that
but also ai has a sort of
separate uh challenge that it faces
so i mentioned that ai is uh machine
learning is fundamentally about
um a kind of algorithm that can find
hidden patterns inside of
rich data that humans couldn't see or
that would be too expensive to find out
the patterns that they find often
are patterns that the engineer herself
could not explain why the
algorithm is making the recommendation
it does
because the patterns are so complicated
but imagine what that means in terms of
being able to explain to the public
or to an individual patient why did you
recommend me for this chemo versus that
doc
and you have to say well the ai tool
tells me that's the right idea and it's
based on a lot of training data
what does the ai tool see that you're
not seeing doc i don't know that's the
way ai tools work
so again not that this can't be overcome
that we should never use them
but it would be irresponsible to use
them without
surrounding practices about
communication
transparency and what some people are
calling democratic accountability
um having some governance uh
thoughtfulness around the use of these
tools making sure they're validated
having thoughtfulness around the public
communication around them
and having the kind of oversight to know
when we shouldn't use them
so in summation with the data and
digital revolution
both on the data side and the ai tools
side
good intentions are not enough
how you design them deploy them and
govern them
can have enormous consequences for
whether it's done responsibly
so they need to be designed from the
get-go
for privacy preservation for justice
advancement
and for preserving trust
and you need to ensure robust governance
structures before you start any of this
oversight how would we know if
something's going wrong and as i
mentioned democratic accountability
so that there's some transparency with
the society that you're
trying to help as you use these tools
thanks