The ethical dilemma we face on AI and autonomous tech | Christine Fox | TEDxMidAtlantic
Summary
TL;DR: The video discusses the evolving relationship between technology and policy, emphasizing how rapid advancements in technology are outpacing government regulation. It explores military technology, autonomous systems, the Internet, and the Internet of Things, highlighting both the benefits and unforeseen risks. The speaker raises concerns about vulnerabilities, especially in cybersecurity and autonomous machines, urging a shift in responsibility to companies and consumers. They advocate for proactive measures like corporate social responsibility and 'red teaming' to address potential risks before technologies are released to the public, stressing that we all share the responsibility in shaping our technological future.
Takeaways
- 🤖 Advanced technology, particularly in robotics, can elicit mixed reactions, from excitement to concern about its potential risks.
- ⚔️ The military must comply with the laws of armed conflict, focusing on targeting combatants while protecting civilians, and it strives to improve with the help of technology.
- 🛠️ Drones are remotely piloted rather than autonomous, contrary to public perception, and DoD policy mandates a human in the loop for any lethal action.
- 🚫 Current policy prohibits fully autonomous lethal weapons, as they cannot yet reliably differentiate between combatants and civilians.
- 📉 The private sector is leading in advanced technology development, surpassing government agencies in research and development spending, creating a regulatory gap.
- 🌐 The internet, while transformative and beneficial, has also introduced vulnerabilities like cyber warfare, hacking, and threats to personal data and national security.
- 📱 The Internet of Things (IoT) brings convenience but also risks, as devices like household robots may not be secure against cyber threats, with little regulation on safety standards.
- 🚗 Autonomous vehicles are already in operation, but they too raise ethical and safety concerns, such as decision-making in dangerous situations and potential hackability.
- 🏢 Companies must embrace a form of corporate social responsibility, ensuring their technologies are secure by applying tactics like red teaming to anticipate risks.
- 🧠 Consumers need to become more informed and proactive, demanding safer, more secure technology as part of their buying decisions, shifting some responsibility onto the public.
Q & A
What are the three phases of reactions people typically have when watching the video mentioned at the beginning?
-The three phases are: 1) Initial amazement and admiration for the technology, 2) Discomfort when the robots are kicked, and 3) Growing unease, feeling that the robots are 'creepy' and questioning whether such technology should be developed.
Who currently decides on the use of military technology, particularly in relation to the laws of armed conflict?
-The government and defense department leadership make decisions regarding military technology, guided by policies ensuring compliance with the laws of armed conflict, which require the military to target combatants and protect civilians.
Why is the use of lethal autonomous systems not currently allowed on the battlefield?
-Lethal autonomous systems are not allowed because current technology cannot reliably discriminate between combatants and civilians, which would violate the laws of armed conflict.
What is one reason the speaker believes the policy on autonomous systems is appropriate for now?
-The speaker argues that if current technology cannot correctly interpret simple text (e.g., spell check errors), it cannot be trusted to distinguish between combatants and civilians in complex warfare environments.
What concerns does the speaker raise about the rapid development of technology in the commercial world?
-The speaker is concerned that commercial technology is advancing so quickly that policymakers have fallen behind and may not be able to catch up, creating risks for society due to the lack of regulation and oversight.
How has the role of the internet evolved from its original conception?
-The internet, initially developed by DARPA for communication and information access, has evolved into a space that includes cyber warfare and vulnerabilities like hacking, which were unanticipated at its inception.
What example does the speaker use to illustrate the unforeseen vulnerabilities introduced by the internet?
-The speaker references several examples, including the 2014 Sony hack linked to North Korea, the theft of personal data from the Office of Personnel Management, and the hacking of the U.S. political process.
What is the 'Internet of Things' and what are its potential risks?
-The 'Internet of Things' refers to interconnected devices like smart homes and autonomous systems, which aim to improve daily life but also present risks such as inadequate cybersecurity and vulnerability to hacking.
Why does the speaker believe that corporate social responsibility is necessary in the context of emerging technology?
-The speaker argues that since policymakers may not keep up with rapid technological advancements, companies should take responsibility by adopting measures like red-teaming to identify and mitigate potential risks before releasing new products.
What role do consumers play in shaping the future of technology, according to the speaker?
-Consumers need to be educated and ask hard questions about the security and safety of technology before purchasing, as their choices influence companies to prioritize protection and ethical practices.
Outlines
🤖 The Complexity of Technology and Human Reactions
The speaker introduces the theme of technology’s complexity using a video as an example. Reactions to the video occur in three phases: amazement, discomfort, and concern. The speaker questions who should decide on the development and control of such technologies, specifically referencing military applications. Military policy is clear on protecting civilians and following laws of armed conflict, with precision weapons improving over time. However, concerns persist about the use of autonomous systems and whether they can be trusted to make life-and-death decisions.
📱 The Spellchecker Dilemma and Technology-Policy Evolution
The speaker shifts to a lighter example, discussing how technology like spellcheck can misinterpret intent, drawing a parallel to more serious military contexts where autonomous systems might fail to distinguish between combatants and civilians. The argument is made that technology and policy must evolve together. The speaker points out that commercial tech, not government, now drives innovation. With commercial R&D vastly outpacing defense budgets, the gap between technology's rapid development and policy regulation is growing, which poses significant challenges.
💻 The Internet’s Promise and Unintended Vulnerabilities
The speaker highlights the dual nature of the internet—its great promise of global connectivity and its unforeseen vulnerabilities, such as cyberattacks. The Sony hack is discussed as a key example of how nations and individuals can exploit these weaknesses, resulting in personal and institutional harm. Additionally, the speaker mentions incidents like the breach of personal information from government databases, emphasizing the risks involved in our increasing dependence on technology.
🏠 The Internet of Things: Friend or Foe?
This section focuses on the Internet of Things (IoT) and how it’s designed to make life easier through interconnected devices. The speaker introduces the concept of a household robot and raises concerns about who decides the level of cybersecurity embedded in such devices. They argue that there is no clear policy regulating the security of these commercial technologies, placing the responsibility on consumers and companies. The speaker urges for corporate responsibility in safeguarding against the potential misuse of these advancements.
🚗 Autonomous Vehicles and Ethical Dilemmas
The speaker discusses the ethical challenges of autonomous vehicles, particularly in life-or-death situations where they may have to choose between the safety of passengers and pedestrians. This dilemma mirrors decisions made in combat scenarios involving distinguishing civilians from combatants. Additionally, the issue of vehicle hackability is raised, with real-world examples like the Jeep hack illustrating the risks of new technologies. The speaker calls for urgent attention to these problems, expressing doubt that policymakers alone can keep up with the pace of innovation.
🏭 Corporate Responsibility in the Age of Technology
The speaker argues for a new form of corporate social responsibility, where companies take proactive steps to secure their technologies against potential threats. This might include adopting practices like 'red teaming,' where experts simulate how new technologies could be exploited by adversaries. By thinking ahead and addressing vulnerabilities, companies can create safer products. The speaker emphasizes that consumers also have a role in this process by demanding secure technologies and making informed decisions.
🌍 A Call for Collective Action in Shaping the Future of Technology
In the concluding remarks, the speaker expresses their love for technology but stresses the importance of guiding its development responsibly. They encourage collective action among corporations, consumers, and policymakers to ensure that technological advancements lead to positive outcomes for society. The speaker remains optimistic that by asking the right questions and addressing challenges head-on, the future of technology can be shaped in a way that benefits everyone, both today and for future generations.
Keywords
💡Technology
💡Autonomous Systems
💡Laws of Armed Conflict
💡Cybersecurity
💡Corporate Social Responsibility
💡Remotely Piloted Vehicles (Drones)
💡Internet of Things (IoT)
💡Hackability
💡Human in the Loop
💡Unanticipated Consequences
Highlights
The speaker introduces a video that showcases the complexity surrounding technology today, noting how people's reactions follow three distinct phases.
In military operations, the government, policy makers, and defense department leadership decide on the use of technology, with a focus on protecting civilians while targeting combatants.
Drones, though controversial, are still piloted by humans, with Department of Defense (DoD) policy mandating a human be in the loop for any lethal actions.
Lethal autonomous systems are currently not allowed on the battlefield due to concerns about the technology's ability to distinguish between combatants and civilians.
Precision weapons have significantly improved military capabilities in targeting combatants while protecting civilians.
Commercial entities, not governments, are leading the development of advanced technologies, and policymakers are struggling to keep up with the pace of technological advancement.
The research and development budgets of top defense contractors are dwarfed by those of tech giants like Microsoft, Apple, and Google.
The speaker emphasizes that the policy surrounding military technology and lethal systems is currently appropriate, but acknowledges that it will likely evolve as technology advances.
An app that connects people trained in CPR with individuals experiencing cardiac arrest is an example of how technology is saving lives in simple but impactful ways.
The internet, initially developed by DARPA, has fulfilled its promise of instant global communication but has also introduced unforeseen vulnerabilities, like cyber warfare.
The Sony hack serves as an example of the unanticipated risks associated with technology, demonstrating how vulnerabilities can impact large organizations and governments alike.
Hackers breaching the Office of Personnel Management database and stealing the personal data of millions of Americans showcases the unnerving reality of cyber threats.
As autonomous technologies like self-driving cars and home robots become more prevalent, the speaker raises concerns about their safety, including their susceptibility to hacking.
Companies need to adopt a new form of corporate social responsibility, focusing on the vulnerabilities their technologies introduce, including security measures before release.
The speaker advocates for the use of red teaming, a defense department strategy to imagine how adversaries might exploit new technologies, as a method for addressing vulnerabilities before they arise.
Transcripts
[Music]
[Applause]
see so I wanted to start today with that
video because I think it really captures
the complexity surrounding technology
today now I've looked at that video with
a lot of colleagues and friends and I
find that people's reactions to that
video follow along sort of three phases
so let me describe them and you can ask
if your reactions followed along these
phases phase one wow is that cool right
I mean is that cool those guys are
amazing then they start kicking it
that's kind of
you know that's kind of
mean right I don't know that they should
be doing that I don't think they should
be allowed to kick them like that and
then there's phase three they're kind of
walking up the hill and looking around
and that big guy starts going up the
hill it's kind of H you know they're a
little
creepy I don't know if I'd feel happy
running into them walking in the woods
you know what would they do
I'm not sure we should be building those
things I don't think they should be
allowed to build them so the question
that I want to explore with you today is
who decides
should now I've spent a lifetime
studying military operations as you
heard I was even in the defense
department for a while and I can tell
you the answer to this question for
military operations and technology is
very clear who decides should for
military operations it's the government
its policy its defense department
leadership now one of the things we
expect our military to be able to do is
comply with something called the laws of
armed conflict now the laws of armed
conflict say many important things but
one of the things it says is that the
military should do everything in its
power to Target combatants and protect
civilians in
Conflict now I can tell you the military
tries very very hard to do this I've
studied operations for decades do are
they perfect absolutely not do they make
mistakes yes but they try very hard and
they've gotten better and better and
better at it as technology has improved
one of the things that has improved are
Precision weapons Precision weapons have
made a big difference here in their
ability to focus on combatants and
protect civilians now more recently
we've put Precision weapons on something
called remote piloted Vehicles drones
now I know drones are controversial and
I confess I'm not entirely sure I
understand why they're so controversial
and I wonder if people have the
perception or misperception that they're
actually
autonomous they are remotely piloted the
pilot is not in the aircraft but there
is a pilot on the ground in a distant
location but they have positive control
of that aircraft from takeoff to landing
and certainly for any employment of a
weapon in fact today it is DoD policy
that any employment of lethal capability
have a human in the loop and when it
comes to autonomous systems there's a
special directive governing autonomous
systems that specifically says lethal
autonomous capability is not allowed on
the battlefield today and one of the
reasons for that is a concern that
lethal autonomous systems or technology
is not able to discriminate between
combatants and civilians and so
therefore it would not be in compliance
with the laws of armed conflict now let
me tell you there are a lot of people
who think that this policy is overly
constraining I've heard a lot about it
right it's it's uh holding us back it's
putting men and women in uniform In
Harm's Way needlessly our adversaries
are going to get Advanced uh ahead of us
so it's you know there's a lot of push
back on this directive I've thought a
lot about this directive I personally
think it's just right and I want to ask
you to think along with me using the
example of our friend the spell checker
okay I don't know about you but I often
hit send on a text just after I noticed
that my helpful smartphone changed the
spelling and the meaning of what I
wanted to send I hope that happens to
you and I'm not the only one right okay
so let's ask ourselves if technology
today can't understand the intent of a
few simple lines of text how could we
count on it to discriminate between a
combatant and a civilian in environments
so complex such as Warfare so I
personally think the policy has it right
for now but the technology is evolving
and I also think it will probably evolve
to a point where it can discriminate and
I am confident that the policy will
evolve along with the technology and so
I think this is a nice example of the
relationship between technology and
policy one evolves the other evolves
they stay together and that happens
because technology that I'm talking
about here is under the purview of the
government now there was a time when
advanced technology was developed under
the purview of the government in most
cases but that is not true today today
advanced technology is developed in the
commercial World far more than in
government consider for a moment the
fact that if you add up the research and
development budgets of the top five
defense contractors talking the big ones
Lockheed Martin and Boeing etc. the top
five add them all up that comes to less
than half of Microsoft's research and
development budget in the year less than
half that doesn't even begin to consider
the R&D budgets of Apple and Google and
so many others right so advanced
technology is no longer the purview of
the government it's the purview of the
commercial world and policy makers are
not governing advanced technology in the
commercial world I think that technology
is amazing today and it is going so fast
that policy makers have fallen behind
and I personally am very dubious that
they can catch up so where does that
leave us I think it leaves us with a bit
of a toxic brew
where we have advanced powerful
technologies that are available to
anyone who wants to buy them with few if
any constraints over their development
and
accessibility now technology is making
our world better okay we we do not want
to regulate technology we want
technology to continue going forward
take for example this very simple but
lovely application of technology today a
a nonprofit Foundation has developed
this app it pairs people with CPR skills
with people in need having acute Cardiac
Arrest this simple application of
technology is saving lives today awesome
right that is awesome we don't want to
do anything to stop
that but there are other applications of
technology that we didn't anticipate
that aren't making our world a better
place and that we aren't able to
govern so I want to continue to explore
this a little bit and I want to work
through an example with you and I want
to work through the example of the
internet so I remember when the internet
was first conceived of and introduced
right was developed by DARPA the defense
department but rapidly went out into the
public and the promise was this amazing
notion I still remember how I felt about
it at the time that we would be able to
instantly communicate with people all
over the world and any information you
wanted would be at your fingertips
coming right up on your computer really
really wow that's amazing look what's
happened that promise has been realized
it's amazing right I think the promise
of the internet has really changed our
lives forever there's no going back I
don't know about you but I'm really
cranky when my Wi-Fi is down for just a
little bit right we expect it now it's
part of
life but when the internet was first
introduced I don't think we really
Envision this future world of
cyber or cyber warfare or the fact that
the defense department would one day
need a cyber command whose whole mission
is to operate in cyberspace and defend
it and defend our
networks so that's an unanticipated
consequence of the internet the internet
has introduced a whole set of
vulnerabilities and those
vulnerabilities are also impacting Our
Lives take for example this um this
group of people that worked at Sony so
you're at Sony you come to work you turn
on your computer you expect your
calendar and your email to come up and
instead this is what you see that
happened to every employee at Sony in
November two years ago that was
happening right before Sony was due to
release a parody a movie about the North
Korean leader Kim Jong-un it was a
terrible movie okay terrible it was it
was their biggest concerns at Sony
before this happened was how big a flop
it was going to be okay but North Korea
didn't think it was so funny they didn't
were they didn't understand that it
might be a flop they didn't care and
this is what happened to Sony they were
hacked information was stolen
embarrassing information and they got
this threat and multiple threats for the
next several weeks and it had a very
devastating effect on Sony and the
people who work
there or perhaps consider that you're
one of the 21.9 million people I bet
some of you here are I certainly am who
had their personal identifying
information stolen out of the office of
personnel Management's
database okay stolen by a hacker they
think in
China well what's that hacker going to
do with my data I don't know you know
it's unnerving it's really
unnerving and then take a much more
recent example where we have a nation
hacking into our political
process the very Foundation of our
government a very big deal
indeed so that's where we've been that's
the story of the internet great promise
all achieved even more so but also
simultaneously introduced some
vulnerabilities that we hadn't
envisioned hadn't counted on and hadn't
prepared for all right so let's stop and
ask ourselves going going forward what's
next where are we now and what do we see
going forward where maybe we can
anticipate a little better do a little
bit better job so I want to talk a
little bit about the internet of things
now the Internet of Things is already
here in some ways right you can already
turn the lights on in your home before
you get there so it's nice and friendly
when you arrive turn the heat up so it's
warm and toasty get your coffee done in
the morning from just to click on your
smartphone wonderful things all aim at
making our lives easier making our lives
better internet of things manufacturers
now are preparing to include something
like this little guy into our homes this
little guy is going to be your friendly
household servant he's going to help
wake you up in the morning he's going to
get the kids ready for school he's going
to get breakfast ready he's going to
have your favorite news program on your
radio who wouldn't want him I'm I I want
him right my life is hard I really would
love to have some help right I think we
all feel the Press of time today and we
this is a very appealing concept it
would make our lives
better or would
it who
decides the Cyber protection in our
friend the
robot who decides if you're sitting out
there thinking that there's some
government policy somewhere that's going
to make this guy safe I'm afraid that
you're wrong
there is no policy governing it there is
no sense of what's the appropriate cyber
protection to have in this robot before
it goes to Market there's nobody
deciding when it's ready to go to
market this robot is going to come to
Market and we're going to decide to to
buy it and so who decides we do and the
company that builds
it okay so I am concerned about this and
I wish I had an answer because I really
don't think that policy makers are going
to be able to keep up with this
explosion of
technology and you know this has been
the study of academics for a long time
morality ethics legality of tech new
technologies and the introduction of
technology and I want to suggest to you
today that the time for study is over
these things are here today this
technology is exploding today now you
may think that that robot is way far off
and we have a lot of time to get it
right but the manufacturers are
predicting that they'll be ready for
Market in less than two
years now you may be skeptical I can
tell you I'm skeptical but I'm a
terrible predictor of these things two
years ago I said no way are we going to
have autonomous cars on the road in any
near time right there it's going to take
a long time well if you're driving on
the Beltway today there's a very good
chance that there's an autonomous
vehicle driving somewhere in your midst
right now today so you know autonomous
cars is another really interesting thing
right because autonomous cars are also
going to have to make some difficult
choices what if a car autonomous car
gets into a situation where it has to
choose whether to protect the occupants
of the car or the people outside the car
does the car drive into the pedestrians
or the brick
wall it seems to me that's a little bit
like trying to choose between a
combatant and a
civilian yet the cars are on the road
now and what about their hackability if
you don't think you can hack these cars
I encourage you to Google Jeep
hack many of you have I can hear that
you know it's and that's not even an
autonomous car right he's just driving
down the road His Radio goes crazy his
air conditioning goes crazy but his
transmission is cut and he coasts to a
stop in the midst of traffic a very
dangerous
situation so so these things are with us
now and I think there's a sense of
urgency we have to start really
grappling with this problem and I don't
think policy makers are going to solve
it so I think that maybe it calls for
some new form of corporate social
responsibility I think companies need to
take this on it's a little different
version of it but it you could see how
it might apply where companies think
about the vulnerabilities of their
technology as well as the promise and
the coolness of of it and perhaps we
could get companies to adopt something
we do in the defense department called
red teaming when the defense department
comes up with new plans or new
technologies that we want to introduce
we bring people in who are very
imaginative and expert on on what
adversaries might do and we say okay
have at it give us your all use your
imaginations take our plans apart tell
us how this technology can be turned
against us and they do that and that
gives us a chance to fix our plans and
to change the way we use the technology
so we have a much better chance of
success when we need to use these things
perhaps we could Red Team new
technologies before they go on the
street we could get somebody to use
their imagination to ask how that robot
might end up being used against us
instead of helpful helping us make our
lives better now in order for a new form
of corporate social responsibility to
work there has to be a reward system
right so they have to invest in more
protection it's going to take them more
time cost a little more money so the
reward is that you preferentially buy
the products that come with security
that means consumers need to be educated
and they need to ask the hard questions
before they just buy the device and so
that means the answer to who decides
should is us I really think all of us we
have to decide should corporations need
to decide should and consumers need to
decide should now listen I don't want to
be Debbie Downer I love technology okay
I work at the Applied Physics lab I am
surrounded by cool technology every day
and I love to buy it okay but I want a
world where the technology is taking us
to that great place that makes our lives
better and I want to avoid a world where
we're surprised by unintended events and
vulnerabilities that we just didn't take
a moment to anticipate but I'm confident
that if we all pull together and we
start asking the hard questions we can
drive our world in that good direction
not just for us today but for
generations to come thank
you