How AI defeats humans on the battlefield | BBC News

BBC News
25 Jul 2024 | 21:56

Summary

TL;DR: AI Decoded explores the controversial use of artificial intelligence in military training and warfare. The program discusses the potential of AI to transform combat, with BAE Systems leading efforts to create AI-powered learning systems for military trainees. Ethical concerns about unmanned military drones and the implications of AI in escalating conflicts are highlighted, featuring insights from experts and a deep dive into the technology's impact on the battlefield.

Takeaways

  • 🤖 AI is increasingly being integrated into military training and combat tools, raising ethical and operational concerns.
  • 🚀 The Guardian refers to this as the 'AI Oppenheimer moment', highlighting the growing interest in AI-powered combat systems.
  • 💰 There is a significant influx of funding towards companies and agencies promising smarter, cheaper, and faster warfare through AI.
  • 🇬🇧 UK-based BAE Systems is leading efforts to develop an AI-powered learning system for military trainees, aiming to make them mission-ready sooner.
  • 🎓 AI is being used to enhance flight simulators and other training tools, providing realistic scenarios and improving pilot skills.
  • 🤔 Ethical considerations include the potential for AI to escalate warfare more quickly and the need for human oversight in decision-making.
  • 🛠 AI's role in training is not limited to aircraft; it is also used to simulate various battlefield scenarios and civilian activities.
  • 🔍 AI can analyze trainee performance data, identify areas of struggle, and refine training materials accordingly.
  • 💡 AI-powered adversaries in training scenarios are challenging even experienced pilots, showcasing the potential of AI in combat.
  • 🌐 The debate around AI in warfare extends to international discussions, with organizations like 'Stop Killer Robots' advocating for a ban on lethal autonomous weapons.
  • 🔎 Transparency and accountability in AI use in warfare are critical, especially considering the classified nature of much military intelligence.

Q & A

  • What is the term 'AI Oppenheimer moment' referring to in the context of the script?

    -The 'AI Oppenheimer moment' refers to the increasing interest in combat tools that combine human and machine intelligence, leading to significant financial investment in companies and government agencies that claim to make warfare smarter, cheaper, and faster.

  • What is BAE Systems' goal in relation to AI in the military sector?

    -BAE Systems aims to be the first in their industry to create an AI-powered Learning System designed to make military trainees mission-ready more quickly.

  • What ethical considerations are raised by the use of AI in military applications?

    -Ethical considerations include the potential for AI to escalate warfare more quickly, the possibility of AI making life-and-death decisions without human intervention, and the implications of AI systems engaging in combat without human pilots.

  • How does AI enhance flight simulators for pilot training?

    -AI can record and analyze trainee performance in flight simulators, providing metrics to measure and score each performance. It can also identify areas where trainees struggle, allowing for more targeted and effective training.

  • What is the concept of 'human in the loop' in military applications?

    -'Human in the loop' is a process where a human being is involved in the command and control of military systems, such as drones, ensuring that there is always human judgment at the heart of selecting the course of action.

  • How does AI contribute to the realism of simulated battlefield environments?

    -AI allows simulated environments to behave more realistically, even replicating civilian activity. It can create complex scenarios for training purposes, making the training experience more challenging and closer to real-life conditions.

  • What is the role of AI in aerial combat training against AI-powered adversaries?

    -AI in aerial combat training serves as an adaptive opponent that learns and reacts to the trainee's actions, providing a challenging and unpredictable training environment that can help refine pilot skills.

  • What is the Tempest project and how does it relate to AI in military applications?

    -The Tempest is a joint collaboration between the UK, Italy, and Japan to develop a sixth-generation stealth combat jet. It will feature advanced radar and weapon systems and the ability to command a mini-squadron of drones, acting as a flying command and control center.

  • What is the concern regarding the speed of AI decision-making in warfare?

    -The concern is that the speed of AI decision-making could lead to faster escalation of conflicts, as AI systems may initiate actions or escalate situations without human oversight or control.

  • How does the use of AI in warfare affect the potential for civilian harm?

    -While AI can improve precision in targeting, the increased speed and reduced cost of engaging targets could potentially lead to more strikes and, consequently, a larger impact on civilian populations, even if individual strikes are more precise.

  • What is the role of the Campaign to Stop Killer Robots, and what is their stance on AI in the military?

    -The Campaign to Stop Killer Robots is a coalition of NGOs that seeks to preemptively ban lethal autonomous weapons. They advocate for human control over life-and-death decisions in warfare and are concerned about the potential for AI to escalate conflicts and cause harm to civilians.

Outlines

00:00

🤖 AI in Warfare: Ethical and Practical Considerations

The script opens with a discussion on the integration of artificial intelligence (AI) in military applications, drawing parallels to the historical figure J. Robert Oppenheimer. It highlights the interest in AI-powered tools that could potentially make warfare more efficient but raises ethical concerns about the use of unmanned military drones. The segment introduces Priya Lakhani, CEO of Century Tech, who emphasizes the importance of addressing ethical issues related to AI in warfare. The script also mentions a report by Marc Cieslak on military training and the application of AI by BAE Systems to enhance military readiness.

05:00

🎮 AI-Enhanced Military Training and Simulation

This paragraph delves into the use of AI in military training, specifically in flight simulators. It explains how AI can analyze trainee performance data to identify areas for improvement in training programs. The script also touches on the simulation of various battlefield elements using technology derived from video games, which AI can make more realistic. The narrative includes the perspective of a former RAF pilot, Jim Whitworth, on the realism of simulators and the potential for AI to create complex training scenarios, including aerial combat against AI-powered adversaries.
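The performance-data analysis described above (score each simulator run, then surface the syllabus "pinch points" where trainees struggle) can be sketched in a few lines. This is a hypothetical illustration only: the exercise names, scores, and threshold are invented, and BAE Systems' actual system is not public.

```python
# Hypothetical sketch: given per-trainee scores for each syllabus
# exercise, flag "pinch points" where average performance is low.
from collections import defaultdict

def find_pinch_points(records, threshold=0.6):
    """records: list of (trainee, exercise, score in [0, 1])."""
    totals = defaultdict(lambda: [0.0, 0])  # exercise -> [sum, count]
    for _, exercise, score in records:
        totals[exercise][0] += score
        totals[exercise][1] += 1
    # Return exercises whose mean score falls below the threshold.
    return sorted(
        exercise
        for exercise, (total, count) in totals.items()
        if total / count < threshold
    )

records = [
    ("t1", "circuit", 0.8), ("t2", "circuit", 0.7),
    ("t1", "formation", 0.4), ("t2", "formation", 0.5),
    ("t1", "night-landing", 0.9),
]
print(find_pinch_points(records))  # ['formation']
```

An instructor could then target "formation" with revised courseware, exactly the feedback loop the instructor describes in the report.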

10:03

🛫 The Future of AI in Aerial Combat and Autonomous Systems

The script discusses the development of AI in aerial combat training, where AI adversaries challenge even seasoned pilots with unfamiliar tactics. It also introduces the concept of the 'AI-aided tactics engine' developed by Cranfield University, which is designed to simulate realistic combat scenarios. The segment raises questions about the implications of AI learning from human pilots and potentially being used to control drones in real-world situations, underscoring the moral and ethical risks associated with autonomous weapons.
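The "learning and reacting to the trainee's actions" behaviour can be illustrated with a toy adaptive opponent that tracks the pilot's move frequencies and counters the most common one. Everything here is invented for illustration (the manoeuvre names, the counter table, the discrete-move framing); Cranfield's actual engine operates on continuous flight dynamics, not a lookup table.

```python
# Toy adaptive adversary: counts the pilot's manoeuvres and replies
# with a fixed counter to whichever one the pilot uses most often.
from collections import Counter

COUNTERS = {"climb": "high-yo-yo", "break-left": "lag-roll", "extend": "pursue"}

class AdaptiveAdversary:
    def __init__(self):
        self.seen = Counter()  # pilot manoeuvre -> times observed

    def respond(self, pilot_move):
        self.seen[pilot_move] += 1
        most_common = self.seen.most_common(1)[0][0]
        return COUNTERS[most_common]

foe = AdaptiveAdversary()
print(foe.respond("climb"))       # counters the only move seen so far
print(foe.respond("break-left"))
print(foe.respond("break-left"))  # 'break-left' now dominates
```

The point of the sketch is the asymmetry the pilots describe: a human opponent's doctrine is predictable, while an adversary whose policy shifts with your own behaviour keeps presenting positions "you've not traditionally seen before".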

15:05

🚀 AI and the Evolution of Military Technology

This paragraph explores the future of military technology, focusing on the Tempest project, a collaborative effort between the UK, Italy, and Japan to develop a sixth-generation stealth combat jet. The Tempest is envisioned to have advanced systems, including the ability to command a squadron of drones from a distance, prompting a discussion on the role of humans in the decision-making process regarding warfare and the potential for AI to escalate conflicts more rapidly.

20:07

🛡️ The Debate on AI in Warfare: Human Control and Ethical Implications

The final paragraph of the script presents a discussion on the role of AI in modern warfare, featuring insights from Mikey Kay, a former senior RAF officer, and Dr. Peter Asaro from the Campaign to Stop Killer Robots. The conversation covers the use of AI in precision targeting and the potential for AI to increase the speed and volume of attacks, which could inadvertently lead to more civilian casualties. It also addresses the need for human oversight in the use of lethal autonomous weapons and the challenges of regulating AI in military applications.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is central to the theme as it is being discussed in the context of military applications, such as training pilots and potentially controlling unmanned military drones, which raises ethical considerations.

💡AI Oppenheimer moment

The term 'AI Oppenheimer moment' is a metaphorical reference to the time when scientists realized the potential destructive power of their creation, alluding to the ethical and existential questions posed by the integration of AI in warfare. It is used in the script to highlight the growing interest in and concerns about combat tools that blend human and machine intelligence.

💡Combat tools

Combat tools in the script refer to military equipment and technologies that are being enhanced by AI to improve their efficiency and effectiveness in battle. The influx of money to companies and government agencies developing these tools signifies the trend towards making warfare smarter, cheaper, and faster through AI integration.

💡BAE Systems

BAE Systems is a leading military contractor mentioned in the script that is working on creating an AI-powered Learning System. This company is significant in the narrative as it represents the industry's push towards utilizing AI to make military trainees mission-ready sooner, which is a key example of AI's application in military training.

💡Unmanned military drones

Unmanned military drones are aircraft operated remotely or autonomously without a pilot on board. The script discusses the potential of AI to control these drones, which raises questions about the speed at which warfare could escalate and the ethical implications of decisions being made by AI systems.

💡Ethical considerations

Ethical considerations encompass the moral principles and values that guide decisions and actions, especially in the context of AI's role in warfare. The script highlights the controversy surrounding AI in the battlefield, including the potential for AI to escalate conflicts and the need for human oversight to ensure ethical conduct.

💡Human in the loop

The 'human in the loop' concept refers to the involvement of a human operator in the command and control process of military systems, ensuring that there is human judgment at the heart of decision-making. The script discusses the critical nature of this process from a moral and ethical standpoint, particularly in the context of AI-enhanced training and combat systems.

💡Flight simulators

Flight simulators are training devices that replicate the experience of flying an aircraft. In the script, they are highlighted as integral to a pilot's training, allowing them to gain skills in a safe and cost-effective manner. The integration of AI in these simulators enables the recording and analysis of trainee performance, which can be used to refine training programs.

💡AI-aided tactics engine

The AI-aided tactics engine is a system developed by Cranfield University, as mentioned in the script, which challenges experienced pilots in virtual aerial combat scenarios. This engine represents the advanced use of AI to create realistic and adaptive training environments that can improve pilot skills and readiness for real-world situations.

💡Precision-guided munition

Precision-guided munition refers to a type of weapon system that can maneuver to hit a specific target accurately. In the script, it is discussed as an area where AI is particularly useful, assisting with the identification and selection of potential threats, as well as assessing collateral damage and weapon selection for minimizing harm to civilians and infrastructure.

💡Collateral damage estimate

The collateral damage estimate is an assessment of the potential harm to civilians and infrastructure that might result from a military action. The script mentions how AI can speed up this assessment process, which is critical for making rapid and informed decisions that minimize unintended harm during military operations.

💡Pattern of life

Pattern of life analysis involves the use of surveillance to understand the routine activities and behaviors of people in a given area. In the script, it is mentioned in the context of using drones to observe and assess the environment surrounding a target, which helps in making more informed decisions about weapon selection and the timing of an attack to minimize collateral damage.

💡Rules of engagement

Rules of engagement are the directives that govern when, where, and how military force can be used. The script discusses how these rules are integral to the decision-making process in AI-assisted military operations, ensuring that actions taken are in accordance with legal and ethical standards.

💡Lethal autonomous weapons

Lethal autonomous weapons are systems that can independently select and engage targets without human intervention. The script references the Campaign to Stop Killer Robots, which seeks to preemptively ban such weapons, highlighting the significant moral and ethical risks associated with their use in warfare.

💡Flash crashes

A flash crash refers to a sudden, dramatic drop in the value of a financial market or a particular stock. The script uses this term as an analogy to describe the potential for AI systems in warfare to escalate conflicts quickly if strategic decisions become automated, leading to unintended and potentially catastrophic outcomes.

Highlights

The Guardian dubs the increasing use of AI in combat tools as the 'AI Oppenheimer moment', highlighting the potential for AI to transform warfare.

BAE Systems is developing an AI-powered learning system to expedite military trainee readiness.

AI's role in military training raises ethical concerns, particularly around the potential for unmanned military drones.

The Farnborough Air Show has showcased an increase in unmanned aerial vehicles with military applications.

Human involvement in command and control of military drones is critical for ethical considerations.

AI is being used in flight simulators to train pilots, saving time and money while providing realistic training scenarios.

AI can analyze trainee performance in simulators, identifying areas where training can be improved.

Simulation technology, powered by AI, can replicate complex battlefield environments and civilian activity.

AI-powered adversaries in training simulations challenge experienced pilots, simulating real-world combat scenarios.

AI engines in training are learning and adapting to pilot reactions, providing a dynamic training environment.

The ethical implications of AI in warfare include the risk of escalating conflicts due to automated decision-making.

The Tempest project, a collaboration between the UK, Italy, and Japan, aims to develop a sixth-generation stealth combat jet with AI capabilities.

The debate on AI in warfare includes discussions on the necessity of human involvement in decision-making processes.

AI technology is being used in the battlefield for precision-guided munition processes, assessing potential threats and collateral damage.

The use of AI in warfare could potentially increase the speed and accuracy of military operations, but also raises concerns about civilian harm.

Dr. Peter Asaro from the Campaign to Stop Killer Robots emphasizes the need for human control in lethal autonomous weapons systems.

The potential for AI to escalate conflicts due to increased speed and automation in decision-making is a significant concern.

AI's role in warfare is not limited to training; it is also being integrated into operational capabilities, such as precision targeting and battle damage assessment.

The Campaign to Stop Killer Robots is working towards a treaty to regulate the use of AI in lethal autonomous weapons systems.

Transcripts

play00:00

it is time now for our new weekly

play00:02

segment Aid

play00:05

[Music]

play00:08

decoded welcome to AI decoded it is that

play00:11

time of the week when we look in depth

play00:13

at some of the most eye-catching stories

play00:15

in the world of artificial intelligence

play00:18

now last week we looked at how

play00:20

artificial intelligence could threaten

play00:23

human jobs in the future but what about

play00:25

those on the battlefield well the

play00:28

guardian is calling it AI Oppenheimer

play00:31

moment due to the increasing appetite

play00:33

for combat tools that blend human and

play00:36

machine intelligence this has led to an

play00:39

influx of money to companies and

play00:42

government agencies that promise they

play00:44

can make Warfare smarter cheaper and

play00:47

faster and here in the UK leading

play00:50

military contractor BAE systems are

play00:52

ramping up efforts to become the first

play00:55

in their industry to create an AI

play00:57

powerered Learning System meant to make

play01:00

military trainees Mission ready sooner

play01:03

now our BBC AI correspondent Mark chesak

play01:06

went to meet all those involved we will

play01:08

be showing you his piece in just a

play01:11

moment but with me I'm very pleased to

play01:13

say is our regular AI contributor and

play01:15

presenter Priya laani who's CEO of AI

play01:19

powered education company Century Tech

play01:22

now Priya this is a fascinating area but

play01:26

perhaps one of the most controversial

play01:28

and people have huge concerns about it

play01:31

yeah that's absolutely right because

play01:33

this is using AI to potentially have

play01:35

unmanned military drones what you're

play01:38

going to see in Mark's incredible piece

play01:40

is unmanned military Warcraft

play01:43

potentially and then there's you know

play01:45

all these questions about well hang on

play01:46

you know obviously it's great if there

play01:48

aren't humans being harmed out there on

play01:50

the field but does that mean that

play01:52

actually War could escalate much quicker

play01:54

a decisions then going to be made by

play01:56

these AI systems if both parties have ai

play01:59

system systems what happens then it's

play02:01

sort of a race as to who can escalate

play02:04

further and so there's all sorts of

play02:05

ethical considerations but you're also

play02:07

going to see learning systems and how

play02:09

BAE systems are approaching using AI to

play02:11

improve learning in terms of training uh

play02:14

the military and soldiers so it's a

play02:16

fascinating area and then we'll do a bit

play02:17

of a deep dive into the ethics a little

play02:19

bit later in the program lots to talk

play02:21

about Priya so let's take a look uh as

play02:23

we just talking about uh this uh report

play02:25

by Mark chesak and then stay with us cuz

play02:28

we've got lots to discuss afterwards

play02:34

up down flying or hovering around for 75

play02:40

years the farra air show has showed off

play02:42

aircraft both civilian and

play02:45

Military often inviting Pilots to put

play02:48

their airplanes through their Paces to

play02:51

the Delight of the assembled

play02:54

attendees including plane Buffs even new

play02:57

prime ministers

play03:00

in recent years fber has played host to

play03:02

a lot more of these unmanned air

play03:05

vehicles or drones as they're commonly

play03:09

known drones with military application

play03:11

with fixed wings that behave like an

play03:13

airplane or rotors capable of hovering

play03:16

like a helicopter are in abundance but

play03:20

all have something in common a human

play03:23

being involved in the command and

play03:25

control of these aircraft at some stage

play03:28

it's a process that's called human in

play03:30

the loop it's it's critical from a a

play03:34

moral and ethical point of view to

play03:35

ensure that there is a human judgment

play03:38

that is always at the heart of selection

play03:42

of the course of

play03:43

action military application of AI is

play03:47

extremely controversial images of Killer

play03:50

Robots and the idea of AI run a mck are

play03:53

frequent additions to stories in the

play03:56

Press about the risks the technology

play03:58

poses nevertheless militaries around the

play04:01

world are already using artificial

play04:05

intelligence one area where it's

play04:07

particularly useful is training Pilots

play04:10

to fly

play04:11

aircraft like

play04:16

these flight simulators are an integral

play04:19

part of a pilot's training they save

play04:21

time and money allowing prospective

play04:23

Pilots to gain valuable skills from the

play04:26

comfort and safety of terra firma

play04:29

formerly with the RAF Jim Whitworth is a

play04:32

pilot instructor experienced in flying

play04:35

military jets like the hawk and tornado

play04:38

soon as you see that I want you to just

play04:39

pull the stick back set an attitude as

play04:41

we discussed this simulator rig is for a

play04:44

hawk jet the Royal Air Force's preferred

play04:47

trainer what sort of feedback have you

play04:50

given to the team developing this um in

play04:53

terms of its realism uh so really it's

play04:56

about the feedback from the controls I

play04:58

would like it to feel as much like a

play05:00

hawk as possible where does the AI come

play05:02

into the mix we can record everything a

play05:04

traine does in this environment in this

play05:07

simulator we can give some metrics with

play05:10

which to measure the performance and

play05:11

then score each performance and then as

play05:14

we start to build up data on each

play05:16

trainee artificial intelligence can then

play05:18

start to analyze that data for us and

play05:20

show us where our pinch points in the

play05:21

syllabus are and by that I mean where

play05:24

each traine is struggling where perhaps

play05:25

we might want to refine a piece of

play05:27

training either coare material all

play05:29

technique from the instructor to try and

play05:32

make that training as successful as

play05:33

possible greatest advantage of learning

play05:36

to fly like this is that when I need to

play05:39

get back down on the

play05:41

ground I can hit a few Keys take the

play05:44

headset off and I'm good to

play05:47

go synthetic training isn't exclusive to

play05:51

aircraft nearly every element of the

play05:53

battlefield and its surrounding

play05:55

environment can be simulated the

play05:57

software powering these tools has

play05:59

evolved from the same Tech as video

play06:02

games the addition of AI allows the

play06:04

environments to behave in a much more

play06:06

realistic way even replicating civilian

play06:10

activity how does AI help in

play06:14

simulation it's really difficult to

play06:16

replicate real life scenarios it's very

play06:18

difficult to get enough space to do the

play06:20

training in it's very difficult to get

play06:21

enough assets available particularly if

play06:23

they're on operations we can make them

play06:25

incredibly complicated scenarios and the

play06:27

AI can then create the the complexity

play06:30

that they need to train against when it

play06:32

comes to Aerial combat training new AI

play06:35

powered adversaries are proving to be a

play06:37

challenge even for experienced Pilots

play06:41

definitely puts you through your Paces

play06:43

it puts you in positions that you've not

play06:45

traditionally seen before it fights a

play06:47

different doctrine that we've not

play06:48

necessarily trained against so I think

play06:51

it's going to become become the Future

play06:53

Okay Pi's headset on and put it through

play06:55

his

play06:56

PES Pierce Dudley used to fly the F's

play06:59

most advanced fighter the typhoon he's

play07:03

about to fly a virtual version of the

play07:05

same jet in aerial combat against a

play07:08

System created by developers from

play07:10

crownfield University it's called the AI

play07:13

aided tactics engine yeah so if your

play07:17

opponent is also a human being there's

play07:19

something at stake for both of you your

play07:21

lives are at stake but if your opponent

play07:23

in the real world isn't a human being

play07:26

does that change things for for you as a

play07:29

human pilot the AI is learning as and is

play07:32

adapting to your reactions so therefore

play07:36

it becomes quite difficult to train

play07:37

against if you're tra if you're fighting

play07:39

against other other real world uh air

play07:42

crew you potentially know the training

play07:44

that they've been through you know

play07:45

almost what to expect whereas against

play07:47

this you just don't know what to expect

play07:50

with it the AI engine has come out on

play07:53

top now it's my turn to take on an AI

play07:57

Top Gun

play08:02

where did he go I lost

play08:04

him got to get some altitude out

play08:07

maneuvered at every turn the AI made

play08:09

quick work and this novice pilot is just

play08:15

too

play08:16

elusive Pilots aren't just learning from

play08:19

the AI in turn it's learning from them

play08:22

too it's refining skills which one day

play08:25

may be used to Pilot drones in real

play08:27

world situations

play08:29

a scenario that for many presents a

play08:32

significant moral and ethical risk yeah

play08:35

that risk um associated with technology

play08:38

is a is a a critical area um it's not

play08:41

new um every technology that's been

play08:43

deployed in defense has a risk

play08:45

associated with it and there's a very

play08:47

wellestablished um moral ethical and

play08:50

legal framework around how we evaluate

play08:53

the risk of any new capability alongside

play08:56

the operational capability and the

play08:57

imperative to use it but what what

play08:59

happens if an adversary doesn't play by

play09:01

the rules if they don't play by the The

play09:04

Rules of Engagement or they they don't

play09:06

play by the same ethical Frameworks we

play09:09

don't assume that our adversaries will

play09:10

play by the same rules that we do um but

play09:13

because we understand the technology we

play09:16

understand how you would go about

play09:19

deploying autonomy outside of that

play09:21

framework and when we understand the

play09:23

technology and approaches they would use

play09:25

we can understand the techniques we

play09:26

would use to counter that to defeat that

play09:28

threat

play09:30

this is a glimpse of the future it's

play09:32

called Tempest a joint collaboration

play09:35

between the UK Italy and

play09:38

Japan this proposed sixth generation

play09:41

stealth combat jet will have advanced

play09:43

radar and weapon systems as well as

play09:46

flying with its own mini Squadron of

play09:49

drones The Tempest acting as a flying

play09:52

command and control center at a distance

play09:55

while the drones perform missions

play09:57

semi-autonomously

play10:00

which begs the question how long will

play10:02

the human being remain in the

play10:09

loop I told you it was interesting that

play10:11

was Mark chesak reporting there now

play10:13

coming up we will delve deeper into the

play10:15

issues surrounding AI on the battlefield

play10:18

we will be speaking to Mikey Kay a

play10:20

former senior RAF officer in the British

play10:22

Military and Dr Peter azaro from The

play10:25

Campaign to stop Killer Robots a

play10:28

coalition of non governmental

play10:29

organizations who seek to preemptively

play10:32

ban lethal autonomous weapons join us

play10:35

for all that on aiid decoded after this

play10:39

short break around the world and across

play10:41

the UK this is BBC News welcome back to

play10:44

AI decoded now we just had a glimpse of

play10:48

how defense manufacturers are using AI

play10:51

powered military training tools to train

play10:53

the next generation of fighter pilots

play10:56

but are we in danger of handing too much

play10:59

autonomy to these relatively untested

play11:02

systems we're joining us now are Mikey k

play11:06

a former senior RF military pilot and Dr

play11:10

Peter azaro from the organization stop

play11:12

Killer Robots who's also a professor at

play11:14

the new school in New York where his

play11:17

research focuses on artificial

play11:19

intelligence and Robotics thank you so

play11:22

much both of you for joining us here on

play11:24

AI decoded and Mikey perhaps if I can

play11:27

start with you and ask just to give us a

play11:30

quick rundown of your understanding of

play11:33

how AI technology is being used on the

play11:35

battlefield at the

play11:37

moment uh I think a really good example

play11:40

is a process called the kill chain, which is a procedural approach to the precision-guided munition process. What that does, basically, is assist with the identification and selection of a potential threat. Then the approach looks at what's called the CDE, the collateral damage estimate. It will look in the vicinity of the target (let's say, for example, two Islamic State snipers on the second storey of a 30-storey building) and assess what components within a certain radius of that target could form some form of collateral, whether that is endangering human life or endangering infrastructure. At the same time, you've got a significant amount of intelligence going into this process, whether that's human intelligence, which is intelligence you get from informants, or imagery intelligence, or electronic intelligence: listening in, tapping into phones, listening to radio frequencies, or looking at imagery from Predators. Then it will go into weapon selection, and it will basically look at what type of weapon, whether it be a bomb from a platform like a fast jet or an artillery shell precision-guided from a tank, is the most appropriate in order to minimise that collateral; various governments will have various different tolerance policies on that. And then it will also bring into all of that what the rules of engagement are, in terms of being able to prosecute that target. So AI, across all of those areas, including the battle damage assessment, which is effectively taking a photograph after the bomb hits, can inform all of those components of what is commonly called, in military parlance, the kill chain.

OK, let's go to Dr Peter Asaro. Your organisation is called Stop Killer Robots, which perhaps gives people an idea of where you're coming from in this. The film that we have just seen is about how AI is being used to train pilots. Do you have a problem with AI being used in that sort of capacity, or do you want AI not to be used at all when it comes to battlefield training and effectiveness?

Yeah, so I think there are a lot of valid applications of artificial intelligence across many different domains, from medicine to healthcare, and even in the military, for logistics and training and things like that. We're really focused on autonomy in weapons systems, and on ensuring that humans are ultimately making the decision to use lethal force and determining what is a valid and lawful target in armed conflict. We've been working at the UN for more than a decade trying to get a treaty there. But there are, of course, many different kinds of applications, and there's been a lot of debate around exactly how to define these systems. As we just heard from the video and the previous speaker, there are a lot of different ways to integrate these into the complex operations of the military, which involve a lot of data, a lot of computers, and a lot of people making decisions at different levels of command and control. So it is challenging to find ways to really regulate how that happens and ensure that humans remain in control.

So, Mike, I have a question for you, because presumably one of the things the military is trying to achieve here is less civilian harm. We know from the UN that the civilian casualty ratio is about 9 to 1, so nine civilians to one combatant. But making precision targeting theoretically more possible doesn't necessarily mean that mitigating risks to civilians becomes more probable, because when it comes to using artificial intelligence, it's about speed. If both parties have artificially intelligent trained weapons or drones and they're using this technology, speed is key in the process. We saw, for example, with Lavender, the AI system used by the IDF, that sources alleged they actually increased the number of civilians they were permitted to kill when targeting a potential low-risk militant, to 15 to 20 civilians, and they would drop a bomb on an entire house and flatten it to try and achieve their goals. So what do you think about AI making war actually more destructive in this sense, and not helping us when it comes to reducing civilian harm?

Well, Priya, what you're talking about there is the collateral damage estimate, and the collateral damage estimate varies from government to government. I think it's quite obvious, if you look at the tolerance policy of the IDF, that it's significantly different, from experience, from what the tolerance policy was of, say, the UK when it was operating precision-guided munition strikes in Iraq or Afghanistan. I was part of that kill chain process in Baghdad over three tours, so I'm incredibly familiar with that, and I'm incredibly familiar with the collateral damage estimate and what the rules of engagement are. Where AI can improve this, and you're absolutely right when you talk about speed, is that speed is of the essence. If you do have an imminent threat to life or to infrastructure, neutralising that threat through speed and accuracy is where AI can help drive improvements. Take the collateral damage estimate, for example: AI will be able to speed up the assessment of what the potential collateral damage is. And when we're talking about collateral damage, I'm talking about a school potentially within the radius of impact of a certain weapon, or a bus passing by at a certain time of the day. That's where what's called 'pattern of life' comes in, which is effectively drones overhead of the target, looking at what the pattern of life is of the various components surrounding it. So speed is critical. The selection of the weapon, and the speed at which the weapon can be selected from the rules of engagement and from the collateral damage estimate, is critical. So AI, for me, will speed up and make that process more accurate. But ultimately, the very top tier has to be the government's tolerance policy on what it's willing to accept in terms of loss of life.
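The "pattern of life" assessment Kay describes (observing, over days, what civilian activity surrounds a target, such as school hours or a bus passing at a certain time) amounts to a time-of-day tally over drone observations. A minimal hypothetical sketch, with invented observation data:

```python
# Hypothetical pattern-of-life tally: count, per hour of day, how often
# civilian activity was observed near the target, then rank strike windows.
# The observation data and the risk rule are invented for illustration.
from collections import Counter

# (hour_of_day, civilian_activity_seen) pairs from days of observation
observations = [
    (8, True), (9, True), (12, True), (15, True),    # school hours, bus passing
    (2, False), (3, False), (4, False), (23, False)  # quiet overnight hours
]

activity = Counter(hour for hour, seen in observations if seen)
sightings = Counter(hour for hour, _ in observations)

def risk(hour: int) -> float:
    """Fraction of observations at this hour with civilian activity nearby."""
    n = sightings[hour]
    return activity[hour] / n if n else 1.0  # unobserved hours assumed risky

safest = min(sightings, key=risk)            # hour with least observed activity
print(safest, risk(safest))
```

A real pattern-of-life analysis would of course weigh far more than hour-of-day counts, but the principle is the same: the surrounding activity, not just the target, drives the timing decision.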

And at the moment, we've talked a lot about the human in the loop. The slowest, and therefore arguably the weakest, link here would be the human in the loop. Is there then a risk that the human will be cut out of the process?

Well, you're talking about speed, and potentially the human could become the slightly slower component of that. But then a critical component to think about is accuracy and ethics, and AI isn't there yet. Will it ever get there? I'm not sure. There are those that argue it will, and there are those that argue you will always need a human in the loop to give that overlay of what the ethics are and what the rules of engagement are. The scenarios are very different: prosecuting different targets in different environments with different platforms and different weapons. I gave the example of two Islamic State snipers on the second floor of a 30-storey building. The human has the ability to select a weapon, through technology, through machine learning, but also to put, for example, a steel tip on top of that weapon, called a penetrator, so it can go through 28 floors to the second floor with a delayed fuse on it and just take out what's on the second floor without destroying anything else. So it is a massively complex procedure, which AI will be learning how to do. But my advice, and certainly the way I would approach this, is that a human in the loop right now is imperative in order to minimise that collateral and minimise potential mistakes. And mistakes do, sadly, happen quite a lot.

Yeah, and we haven't talked about the transparency of that either, in the sense that a lot of this is classified intelligence, you know, with defence contractors. Peter, I've got a very quick question for you, as we're running out of time, but war games show that the use of machines tends to result in conflict escalating quicker than it would otherwise. What are your thoughts about that?

Well, as you said, with the speed, decision-making happens in shorter and shorter time frames. The real difficulty is when more and more strategic decision-making, decisions to engage a target or to initiate an operation, becomes automated. Then you would actually have humans that are not in control of the overall planning: the decisions to go to war, the decisions to escalate a conflict, could all just sort of happen automatically. We've seen this already with online trading and the flash crashes that have occurred in stock markets, where different algorithms interact with each other and lead to a stock market crash, and they have to turn off the whole system. We don't want this happening with autonomous systems in warfare. But to the question you asked before about precision weapons: what we know is that this is automation, and automation increases speed. It also reduces cost. By reducing the cost of bombing each individual target, it means you can afford to bomb a lot more targets. So if you were only killing a certain percentage of civilians with each strike, but now you can strike many, many more things, you can actually wind up having a much larger impact on the civilian population, even though you've increased precision. So it's not automatic that these systems will improve warfare or reduce the impact on civilians.
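Asaro's closing point is simple arithmetic: if automation halves the expected civilian harm per strike but cuts costs enough to multiply the number of strikes tenfold, total harm rises fivefold. A toy calculation, with numbers invented purely to show the effect:

```python
# Toy numbers, invented for illustration: automation halves expected
# civilian harm per strike, but cheaper strikes mean many more strikes.
manual_strikes, manual_harm_per_strike = 100, 2.0
automated_strikes, automated_harm_per_strike = 1000, 1.0  # 2x more precise

manual_total = manual_strikes * manual_harm_per_strike          # 200.0
automated_total = automated_strikes * automated_harm_per_strike # 1000.0

# More precise per strike, yet five times the total expected harm.
print(automated_total > manual_total, automated_total / manual_total)
```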

Dr Asaro, I'm going to have to stop you there; I'm sure we could talk about this all evening, it's an absolutely fascinating subject. We really appreciate your time. Dr Peter Asaro, Mikey Kay, thank you, and here in the studio, Priya, thank you so much for joining us. That's it, we are out of time. AI Decoded will be taking a well-deserved break for the month of August, but don't worry, we will be back in full force at the beginning of September, so do please join us then.


Related Tags
AI Ethics, Military Training, Autonomous Weapons, Human in the Loop, Collateral Damage, AI Warfare, Drone Technology, Ethical Framework, Precision Munitions, AI Decision Making