
Lethal Autonomous Robots

Ron Arkin and his robots

Roboticist and robot ethics expert Ron Arkin explains what the rise of lethal autonomous systems will mean for the way wars are fought.

Initial discussions in the UN in recent years have focused on the need to retain some form of human control, as well as considering the potential risks and benefits. Ron Arkin has worked extensively on the military applications of robots. In his work, he investigates exactly how scientists can reduce human inhumanity to others. 

Listen to Ron in his talk at UNSW Sydney, where he scrutinises the big ethical questions: Should robots be soldiers? Can they make war safer for civilians?

Listen on iTunes >

 



This podcast was recorded live at the event Lethal Autonomous Robots and the Plight of the Non-Combatant, at the Michael Crouch Innovation Centre.

Transcript

Ann Mossop: Welcome to the UNSW Centre for Ideas. 

Rob Brooks: Welcome, we are having a very interesting and, I'm sure, very provocative discussion tonight. It's wonderful that Professor Ron Arkin is joining us. He's Regents' Professor at Georgia Tech; in fact, I believe he was the first, if not still the only, Regents' Professor in Computing at Georgia Tech, which highlights the standing that he has within the community. He is a very well respected roboticist, and also someone who has thought at great length about the ethics of building autonomous systems. Many of you have seen that I've spoken on this topic, maybe at UNSOMNIA, maybe at some other events, and probably you understand how I feel about it. There is one argument that I respect and think is very much worth engaging with, which is the one that I'm sure Ron is going to share with us today: the argument for the necessity of autonomous weapons. I think it is the most interesting, most engaging, most important idea that you should consider when you're trying to decide a position on this. So it's wonderful that we have probably the leading proponent of this idea to come and talk to us tonight. Thank you, Ron.

Ron Arkin: It's an honour to be here, thank you for the opportunity to share my ideas with you. I would already start with one correction: I am not a proponent of any type of lethal weapon system. Okay? That's the first thing; I'm often labelled as that. But I do not favour killing human beings in any way, shape, or form. And you'll see where my argument comes from, to get a better understanding of where my concerns lie, while human beings continue to kill each other, as they have since time immemorial, in warfare. So what I'm trying to say here is the following: we should not have wars. I'm not arguing that we should have wars; we should not have wars. And for those of you who may be pacifists out there, I encourage you to try and find a way to make warfare stop. Kant said that you could make peace even among devils. Unfortunately, all of human history kind of points to the fact that we are not very successful in preventing warfare, in the present day, and in the future.

So there are two underlying assumptions in my research. One is that warfare will continue. And then the question becomes, if it is to continue, what role should robotics play in it, if any? That's where I'm coming from. So don't think I'm arguing for warfare, far from it. Quite the opposite. And the real question is, why did I get interested in this? I've been a roboticist for almost 40 years now. I've been working in robotics doing curiosity-driven research, as most roboticists are, and scientists for that matter, trying to see, could we do something in this particular field? Well, eventually, the community that I was a part of started succeeding. We were actually able to make robots that did useful things. And they started, maybe escape is too strong a word, but they started moving out of the laboratory and into the real world. And in the United States, one of the major funding organisations is the military. I should tell you, I don't do any classified research, I don't have a security clearance, and I'm free to talk to you about anything that I do. But I've worked for the military in a research capacity, for the Office of Naval Research, for DARPA, for the Army Research Laboratories, as well as for Sony on AIBO, and for Samsung on robotics, a variety of different things during my career. I've often said I'll play in anybody's field if they would just support the research. But I don't do classified work, nor do I have lethal autonomous robots in my laboratory. That's not something that we do.

But how I got interested was that the concern arose from the fact that the research was moving out. And then in 2004, I was invited to a meeting in San Remo, Italy, organised by Professor Veruggio, one of the founders of the robot ethics community, where there was a collection of philosophers and social scientists and military people. We didn't know what we were getting into. And I was sitting in the audience listening to people from Geneva telling us the right calibre of bullet to kill people with. I didn't know there was a right calibre of bullet to kill people, the ethical calibre, which happens to be slow, fat bullets, as opposed to small, high-speed bullets, which has to do with the size of the exit wound and the ability to recover from a wound. I didn't know this stuff; I'm hearing it, listening. The Vatican representative was there. He talked about the appropriate role of humanoid robots, which should be to enhance the human condition, not to replace the human condition. The Pugwash institution was there as well, too (organisation, I guess they're referred to as), which grew out of the Russell-Einstein Manifesto, and they pointed out all the dangers associated with scientists running amok, perhaps, I guess, is one way to put it. So this got me deeply concerned, among other things, and it made me feel I had to start to do something.

So I started working with our predominant society. By the way, the biggest robotics conference in the world was just held in Brisbane; the International Conference on Robotics and Automation just concluded last week. Very successful, you guys should be proud of your country. It made a great impression on the over 3,000 roboticists that came here. Maybe you heard some press, there was a lot of press as well, too. And I got engaged: I was one of the founding Co-Chairs of the Technical Committee on Robot Ethics, and I still am a liaison to the Society on Social Implications of Technology, served on their Board of Governors, and a variety of other things. I'm also part of the IEEE Global Initiative on the Ethics of Autonomous and Intelligent Systems, I think that's the current name for it, and serve as the Affective Computing Chair as well. I steered clear of the reframing lethal autonomous systems folks in that particular case. But other, more personal things came about, including one video which I saw at a navy workshop where we were looking into the future, and I'll talk a little bit more about that later. I won't show the video because it's a bit gruesome, but I will share some of those thoughts with you.

So why is the military interested in lethal autonomous weapons? Why do they want them in the first place? Well, a variety of reasons. One is, wouldn't it be good to have one soldier be able to do the work of three or four? In the United States we used to have a draft; I don't know if you ever had one of those in Australia. If you did, it's not a very pleasant thing to be subject to. Also, an all-volunteer force, as opposed to one that is not, affects morale, among other things, and the proficiency of the fighting force. So you can have fewer soldiers carry out at least the same, or perhaps more, tasks. There's also the notion of being able to conduct combat over very large areas. You see this in the Predators and the Reapers, where we're fighting with people back at Creech and Nellis Air Force Bases in Nevada, all over the world, for better or for worse, maybe mostly for worse, I'll leave that up to you: Yemen and all over the Middle East, Afghanistan, Syria, a variety of different places where these systems are deployed. And it also matters for the individual warfighter. It deals with the notion of a warfighter being able to extend their own ability to see into the battlespace, to go around the corner in an urban setting, for example, which is extremely dangerous: if you stick your head out, you can get killed. Also to act, if there is a lethal component to that particular system, and engage a target. The bottom line is reducing friendly casualties. The military wants to keep their people healthy, as best they can, in warfare scenarios. And the one thing to keep in mind is that the military — at least our military — is always thinking one war after next. It's not even the next war they're thinking about, it's the war after the next war, and how technology can be developed to support that case.

And the notion of AI and robotics actually being concerned about the plight of the noncombatant trapped in the battlespace was really not articulated in any meaningful way in the early days. I'm thinking it's beginning to rise into the consciousness of several militaries, and we'll see perhaps some examples of that. But these systems are out there. Most of these are what we'll call semi-autonomous, or under direct human control before engaging a particular target, or requiring a human to certify a target. But they still have the capability, in many cases, to engage the target without additional human intervention. There are all sorts of different things: Israel is a major purveyor of the Harpy system and other fire-and-forget systems, which you can let go and which pick a target. A classic example of one system that's been like this for decades is the CAPTOR sea mine, which launches torpedoes at a particular signature. It's so old it's been mothballed; it's not even in the inventory, although someone was telling me it was just resurrected again more recently. I don't know if that has to do with what's going on in the South China Sea, or what have you. But they don't ask anybody: a human has to send them out there, but they don't ask for confirmation. You can't do that undersea because of the nature of communications; the bandwidth required to get certification just isn't available there. But Russia and China and many, many other nations are engaged in the development of a whole series of different types of systems with the ability to project kinetic force, if you will, which is lethal force.

But you can get your own as well, too. There's Dota, which has a combat robot (this is off the internet), a lethal combat robot, with your choice of four different weapons that you can put on it. And what's interesting is it has autonomous detection and manual autonomous firing with safety. I'm not sure what that means. But you'd probably have to get your government's permission before you tried to buy this, before you imported it as well, too. But these systems are there. They're intended for governments, but many of these systems have been bought by Middle East countries, the UAE and the like, for example. Now, this is an older slide, but it shows basically what the US has been doing in the context of unmanned systems. 'Force bearing' means that they carry weapons. You know, we dance around this terminology. I don't like the term killer robots, because that invokes pathos; I know the NGOs in the campaign against killer robots do that, but we need a reasoned argument. 'Force bearing' goes to the extreme in the other direction as well, too. So we need to find… I think the UN, which has used the term lethal autonomous weapons systems for these things, or LAWS, is probably at the right level. But there are many different things. Some of these are research-y, and since 2009 there's a whole lot more, as well, too.

One of the things that I used to do when these roadmaps for the future of the US military came out is I would search them for the word 'ethic', just to see if it appeared anywhere. And I finally got a hit in 2009. One of the things they said was that the decision to fire will not likely be fully automated until legal, rules of engagement, and safety concerns have all been thoroughly examined and resolved. And when I started pointing that out, they removed it from the next year's version, because it implies, obviously, that they are going to engage these kinds of systems at some point. If you look, this also extends out to 2034; who makes a plan for 35 years? I mean, that's not rational. But the point is that it shows the long-term commitment, and this is a pillar of the US Pentagon in terms of research: the notion of autonomy for these kinds of systems. But the air force was not to be outdone; they went 48 years into the future. And they didn't have a roadmap, obviously, they had a flight plan. But they said the same sorts of things: it's contingent upon political and military leaders resolving legal and ethical questions, and these decisions must take place in the near term, rather than allowing the development to take its own path, apart from this critical guidance. It is important that a position was taken on these sorts of things. And it's been extended further. More recently, the Joint Operating Environment document said they will make autonomous decisions to deliver lethal force, and it invoked a new argument which I hadn't seen previously: that adversaries are more likely to deploy autonomous lethal decision-making capabilities, and that is the rationale for the development of these systems. They're going to do it, so we'd better do it, is part of the argument here. This argues perhaps for regulation of these kinds of systems, which I am a believer in, as well.

Now, I was at the initial meeting of the International Committee for Robot Arms Control; I forget exactly what year that was. I served as an advisor, an expert; I was not a part of it. I argued at that time for a moratorium, and I still argue for a moratorium rather than a pre-emptive ban. I don't even know what a pre-emptive ban is, and maybe we'll have that discussion later, because I don't know what we're banning. But 30-plus organisations, I think, is the current count, the most visible one probably being Human Rights Watch, are calling for a pre-emptive ban on lethal autonomous weapons, and what was the number of nations you said today? 26 nations have stated that they are supportive of that at this point in time. At the United Nations Convention on Certain Conventional Weapons, I had the honour of speaking to them in the first year they had discussions on this, and this has been happening, initially annually and now twice a year. Whether it yields fruit in terms of either a pre-emptive ban or regulations, time will tell. I am a firm believer that these systems need to be regulated, and you'll see why. We cannot let them go unregulated out into the future.

The United States Department of Defense in 2012 actually came out with what I call a moratorium of sorts; a quasi-moratorium is the language that I use. They basically said, for these particular classes of systems, which were largely those dealing with human targets, as opposed to ships and other types of targets: we are not going to deploy or develop these for the next 10 years, and we'll revisit this in five years. Well, that time has passed, and I don't know what the results are. If they extended it… they have extended it. Okay, thank you. You know more about the US than I do. They have extended that as well, too, which is a good idea. I used to think it would be hard to override it, if you read the language in it, because it requires very high-up people in the Department of Defense to certify that. I have reason to believe it might not be quite as hard to do that as some might think, based on some of my discussions. And the Human Rights Council initially called for a moratorium; this was the gentleman from South Africa, Christof Heyns, who initially argued for a moratorium and then switched over, based on deontological, or human rights, arguments, to a pre-emptive ban.

So my question is, let's look at what happens in war. And let's look at what happens not only to combatants, which you sign up for. Okay, to me, this isn't about winning wars. To me, if we're going to kill each other in the battlespace, we have to find ways to better protect non-combatants, and I believe technology must play a role in that. And we are not addressing that particular problem. Look at all the people that have been slaughtered in Syria, for example; in every single war you can find innocent people being killed. And let's talk about why these things happen. It's not just the US; every war has these. These are old things, you can go back since time immemorial, before there were laws. Even recently, we had Bales, who was actually coming up for review, trying to get off the hook for what he did at that time, as well, too. But notice, they argue here that it puts your own service members at risk, it can hurt morale. This is not taking into account the loss of innocent life, these people that are trapped in the battlespace. And to me, that is the primary motivator for the research and my arguments here: how can we better protect non-combatants? There are many other reasons as well, as was mentioned there, but can we use technology to effectively reduce the negative outcomes, the really negative outcomes, that occur in warfare with respect to non-combatants being trapped in this particular space? So this, basically, is the basis for my argument: I find it utterly unacceptable what is happening to non-combatants currently in the battlespace. It's utterly and wholly unacceptable, as well, too. And I believe technology can play a role in changing that. If we design these systems in specific ways, I believe that they can outperform human warfighters with respect to adherence to international humanitarian law, or, as I used to say until I got beaten up for it, perhaps behave more humanely in the battlefield than human warfighters can. And I guess I'm still saying it, even though I get beaten up for it from time to time. But the point is, this technology has to be where the killing occurs. It can't be somewhere else. And so for those who argue for pre-emptive bans on this technology, I would argue: maybe there are certain cases where we can do better than human beings? And I would further argue, okay, then tell me how you're going to better protect non-combatants. That's important to me. Okay? Give me alternatives. Show me better ways. Don't let what is currently going on continue, the continuous slaughter of innocents in the battlespace.

So, as I stated from the beginning, I'm not averse to a ban. I need to be convinced of what it is that we're banning, why we're banning it, and why the current state is better than if we had these other approaches. And I'll talk more about why I think this approach can do better. I also argued, as I said earlier, for a moratorium. I believe that if we just slow down and put the brakes on things, as opposed to just not doing anything, maybe we could find certain circumstances where these systems can actually help non-combatants. I often use the argument that these are the next generation of precision-guided munitions. Precision-guided munitions have been able to protect non-combatant lives in the battlespace, and Human Rights Watch argues for that: they say if you have them in your armamentarium, you must use them if you are engaged in urban warfare. I would contend that these may similarly have value in that particular situation. But most importantly, don't make decisions based on unfounded fears. I could have a debate, and we may, as well. I don't know if you've seen the movie Slaughterbots, which was shown at the United Nations? Filled with pure pathos, science fiction, some would argue, though as Stuart Russell says at the end, this technology exists. But the point is we need to have rational discussions of these sorts of things. So let's not be subject to the hype of Hollywood and other science fiction scenarios, and let's investigate this rationally. And if it leads us to a complete, pre-emptive ban (like I said, I still need to know what that means, and I'll talk about why we have difficulty with this in a bit), then so be it. But let's do it rationally. Ethics is based on reason. It's not based on fear.

This is the second assumption: I say lethal autonomy is inevitable. Noel Sharkey wrote a paper in response that said it's evitable; I had not heard that word used before. But the reason I say it is, it's already out there, it already exists. There are many examples of these things which, to a roboticist, satisfy the requirements of something autonomous: it has been delegated a decision and it goes out and makes it. It's given a target signature, and you have given it the authority to engage that particular target. This is done for cruise missiles, and for the Phalanx system; they call it auto mode on the Phalanx system on the Aegis-class cruisers. When a sea-skimming supersonic missile is approaching that ship, it's not like World War Two, where you could call up the commander and ask what we should do. You have to make the decision in advance to be able to blow that thing out of the sky, because by the time you've reached the commander, your ship is gone. This decision making is being forced towards the tip of the spear more and more by the speed of technology, changing the ways in which we fight warfare. And many refer to this advent of technologies as a new revolution in military affairs. It will change warfare the same way the battleship changed it, the way gunpowder changed it, the way the longbow changed it, the way aircraft carriers changed it. All these things changed the ways in which we fight war. And robotics is doing that now, as we speak. Lethal systems may play a role in that. People make mistakes as well, too. But notice, again, you can prevent this assumption potentially coming true through prohibition or intervention. I don't know what you're going to do in terms of grandfathering existing systems; that may be hard if they fit the definition of autonomy. And I like to talk about what should be banned in the context of specific systems: this specific system should be banned. If you tell me the Terminator should be banned, I sign up. Okay? I'm ready. I don't want the Terminator. I don't think anybody wants a Terminator. That's a fully autonomous, almost sentient system. And the point is, otherwise we get into a definitional discourse about things like autonomy. Philosophers attribute moral agency or free will to autonomy; it's their term, and we roboticists absconded with it, they had it first. But no roboticist thinks in those terms. Most of us aren't worried about the singularity, when machine intelligence exceeds human intelligence and these systems become truly responsible for their own actions, despite what the European Union is doing for liability reasons right now, and that's another story. Forgive me for digressing. But roboticists are concerned with where a decision is being made. And the other notion that's being bandied about in the UN, with limited success in terms of definition, is meaningful human control. I think everybody agrees we want meaningful human control, but we can't really describe what that means. Is it done by authorisation of every single trigger pull? Is it done at the level of a command structure, where I authorise the system to go and take that building using whatever force is necessary, instead of calling in an F-16 and dropping a bomb on it and killing everybody inside? What's the right answer? These are choices that we as a society, and as nations, have to make.

So I ask this question; it's a rhetorical question, so you don't have to answer it, and please don't make me have to answer it either. Should soldiers be robots? What do we do when we train soldiers? We teach them things that are not natural for them. Killing is one of them. They have to be trained. I have not served in the military, but I've worked with the military for many years, and I respect and honour those who do their military duty in our nation. And I would further say, I believe it's a moral imperative for scientists like me to provide the best technology that we have for our young men and women in the battlespace, technology that is consistent with international humanitarian law. That's an important distinction as well, too. And then the other question is, should robots be soldiers? Could they be better soldiers than human beings? And I am not talking about replacing soldiers one for one with robots. Get that Terminator vision out of your mind. No military is pursuing that. These are adjuncts working alongside soldiers, embedded as what they call organic assets, doing specific, narrow missions, specific tasks that human warfighters undertake at great risk to themselves, but, perhaps more importantly, tasks that lead to really bad decisions by soldiers in the battlespace. This is where I believe this technology can play a role, in these narrow sets of circumstances, not in everything. There are times, for example, when they're dealing with the Predator and the Reaper, where you have a bunch of people standing around a screen, maybe even a lawyer (we'll talk about that later), making this decision; that's far better than a robot in any current state could ever do. But when bullets are flying, and you have to make decisions really quickly, you may get careless, you may have unaimed fire, or, God forbid, you may commit an atrocity in those cases, as well, too. And unfortunately, a percentage of warfighters do this.

This was a bold move by the United States in 2006. It was a survey that had the people coming back from Kuwait assessed both on mental health and on their ethical behaviour. This provides a benchmark for how well or badly people have performed in the battlespace with respect to ethical compliance. There are findings like this: 45% of soldiers and 60% of marines did not agree that they would report a fellow soldier if he had injured or killed an innocent non-combatant. That's a war crime right there; you have an obligation under international law to report those things. Other things as well, too: less than half would report a team member for unethical behaviour. 17% of soldiers and marines agreed or strongly agreed that all non-combatants should be treated as insurgents. 17% you might think is small, but think how many warfighters are out there. We can kill anybody that's not on our side. These are staggering numbers. This just blew me away when I first read this particular report. It's available online; you can check it out if you're curious about it. There are other, less scientific, studies as well, too. Among soldiers in the Vietnam War, I think it was 33% of those in heavy combat, 33% in medium combat, and 17% in light combat who had either seen or abetted an atrocity. That's not good. So this argues to me that we can do better. That's the same argument that is being forwarded for self-driving cars. People get angry and frustrated on the road and road rage manifests itself. You drunk drive (drink drive, I guess it is here, forgive me the language, we drunk drive in the US, you drink drive here), you're texting, you're doing all these things. And what happens? How many deaths do we tolerate on the highways? Incredible numbers, and then one Tesla kills somebody, and it's the end of the world. It's bad, yes, it is bad. But those systems won't be perfect, and these robotic systems won't be perfect. But what if they substantially outperform human warfighters? Not everywhere, but in certain narrow, bounded circumstances. These are some of the reasons why these atrocities occur. And the bottom line, well, that's the bottom line, but a different bottom line is that people are human. We have these tendencies; we're putting human beings into situations where human beings never evolved to function. I have spent my entire career studying animals in their natural environment, ethology, as the basis for robotic systems. It was frogs and toads; I'm studying the slow loris and the tree sloth right now. We studied human beings, pet ownership, for AIBO. And we studied animals, human animals, in a natural setting, which is warfare as well, too. That was some of the most depressing reading and studying I have done. It led me to the firm conclusion, my own personal conviction, that we can do better, in certain narrow, constrained circumstances. So we have to perhaps give that a try. And this is what I argued to my colleagues as well, too. I say, don't we have a responsibility not to turn a blind eye and say, let's not do anything, or let's ban this from occurring because bad things would occur? What can we do to do better? How can we improve on this inhumanity? And where is it worse than on the battlefield? It's horrible. So I believe that ethical military robots can and should be applied towards achieving this end, in narrow circumstances. This is the regulation that I talk about, and the moratorium until we can define that as well, too.

I talked about this… keep in mind, again, these systems are not only deciding when to fire, they're deciding when not to fire; they're making decisions. I gave it this target: you go kill that guy. And the system says, I can't do that, and it has to explain why. But that brings up the 2001 scenario, "I can't do that, Dave. Sorry, Dave", which could be potentially troublesome from a variety of different perspectives. They will also change warfare, using different tactics, different strategies, hopefully assuming far more risk on behalf of non-combatants than any human soldier would in their right mind. They don't have any inherent right to self-defence, as human warfighters do. Now, that doesn't mean they'll necessarily be used appropriately, and I will provide lots of counter-arguments, which we will no doubt go over as we get further downstream. But I had a project — and I could talk about the evolution of that project, but I'm not going to here — from 2006 to 2009, from the Army Research Labs — again, unclassified — with the underlying hypothesis that we may be able to do better under these sets of circumstances. But keep in mind, again, they will kill civilians, they will make mistakes, no doubt, just as the Tesla is killing people as well, too. These systems will not be perfect. But I tend to be a consequentialist, for those ethicists out there; I am concerned with outcomes. And if we can save more lives, to me that is an important agenda to pursue. There are human rights arguments, which we will talk about later as well, I'm sure. So there you have that. So, I was interested in providing these capabilities for a system where a robot could refuse to engage a target — the military were not necessarily happy with that, but they paid me to study it nonetheless — and also to tell on human warfighters that may be committing ethical infractions, as appropriate. The goal was not to encode all the existing laws of war, that's impossible to do, nor to give them human-level moral reasoning capabilities, that's impossible to do anytime in the near to mid term, if ever, but rather to incorporate those constraints for narrow, bounded situations where bounded morality applies. We only incorporate those particular aspects which are relevant for those particular scenarios. And if it exceeds and goes outside that scope, you just don't fire. It's as simple as that.

 

So, when I started this work, this was kind of the tongue-in-cheek picture that The Economist, which wrote a nice article on this particular research, put in: where do we plug in the ethics upgrade for these kinds of systems? How do we ensure that when in use by human warfighters they comply? And there are lots of advances. We were laughing earlier about all the deep learning work that's going on in this particular space, which is important. It is the AI du jour, I guess, is the best way to describe it, and it's producing significant advances. Most of these things are very narrow problems, by the way; you see the rock-paper-scissors robot there as well, too. Don't ever play rock-paper-scissors with a robot. It's a Japanese robot, and it will beat you every time, because it sees what you're going to do before you do it. The videos are pretty impressive. And also, there's a colleague of mine at Yale who made a rock-paper-scissors robot that cheated as well, too. It did it occasionally, and people attributed more intelligence to that particular robot than to the one that didn't cheat. But that's actually related to some philosophical arguments that, along with higher-level intentionality, you get the ability to deceive. Dan Dennett, for example, argued that particular case. So where do I say these systems could potentially be used? In circumstances such as these: again, very narrow missions, specific types of robots. I mean, you have the PackBots for IEDs right now; that's a narrow mission. It's not a lethal mission, but it is a particular mission. Building-clearing operations, in circumstances like Ideta and other places: it's extremely dangerous, and human warfighters can be angry in certain cases, or frustrated, or fearful, and make more mistakes in these kinds of situations. Counter-snipers. Also in the DMZ; hopefully, well, I don't know what's happening with that, I'm speechless, I guess, on that. But we studied the DMZ. I'll try and be optimistic about what could happen there as well. But also in interstate warfare. You know, it used to be people would say that all wars are going to be counterinsurgencies from now on. Well, Mattis, I had the honour of hearing him talk at a joint centre for unmanned systems, and he argued that that's exactly what they were saying after the Boer War, in the late 1890s, I guess it was. And there were a few wars in the 20th century which were not counterinsurgencies, in that case. Keep in mind, again, the war after next: we don't know what that will be. And also, don't use it where there are likely to be significant numbers of civilians. There will always be civilians, but you can do things, and actually international humanitarian law requires you to do certain things, to minimise that and allow civilians to escape before you move through the area. And use them alongside soldiers, not as replacements. This is important for two reasons. One is that this provides a capability to enhance IHL compliance. But also, it's important to maintain a human presence in the battlefield, from my perspective, because of the need to understand the horror of warfare; we need that. Now, this is me saying this, it's not necessarily all the militaries that are saying this, but I believe that that's crucially important.

Now, these are the reasons why I believe it can succeed: the ability to act conservatively; they can assume far more risk on behalf of non-combatants; they will be able to see better than human beings can; and they can leave the emotions that cause human beings to mess up, or to commit crimes, out of their design. Although I'm not going to talk about it here, we actually integrated guilt into the ethical adapter of our architecture, to deal with proactive management of events that ended up badly; there are particular psychological problems there that I won't go into. As the battlefield becomes progressively more internetted, there's more information flowing from many more sources than any human being can functionally put together quickly enough. They are also able to monitor the performance of human warfighters as well, too, and by their mere presence, knowing that you're being watched, you will perhaps be less likely to engage in criminal acts. Now, this is my current list of objections. It started much shorter, but after I gave this talk for a while it kept growing and growing and growing; it hasn't grown much in a while now. The notion of responsibility: I have what I believe are good answers to all of these; it's never the robot, and if you want to know why, ask me during the Q&A. Jus ad bellum, which is adventurism, the immoral "if we've got them, we're going to use them; we paid for them, so we're going to go off to war." Well, that's an argument that applies to any form of asymmetric advantage, including cyber warfare and other aspects as well, too. The answer to that is to stop all research, military research, and I don't think that's going to happen anytime soon because of the advantage that has been conferred on those that have done it. Some just say it's too hard. It is hard to discriminate, but there are ways around that, I believe, that can be used. The effect on squad cohesion, a bunch of these things as well, too. One of the biggest ones I'm worried about is mission creep. Because our soldiers are really smart: you give them a piece of equipment, and they learn how to use it in ways for which it was not intended, and as such, they may stray into grey areas.

These are the kinds of things that have to be integrated, derived from just war theory dating back to St. Augustine and the Middle Ages, going on to be enshrined in the Geneva Conventions and the Hague Conventions. You have to have a reason to kill. You can't do it in an inhumane way; you must use only the force that's appropriate, and no more. And you have to be able to tell combatants from non-combatants. These led to the codification of the laws of war. We use a variant of action-based machine ethics, which is a school of thinking in the machine ethics community, dealing with a form of deontic logic in its most elaborate case, where the system must have an obligation, given to it by a human, to engage a target, and it must have a set of prohibitions, all of which must be satisfied before the target is engaged, using a logical mechanism to accomplish that. This is the approach that we took. We developed this architecture; there's a whole book on the subject, I'll show it if you're interested, on the mathematics and the underpinnings of that as well, too. It's a bit technical, but others have been working in this space as well, around the world, trying to get a better understanding of how we can make robots more ethical in general, but in this particular case, with respect to military systems. There are four components: the ethical governor is the one which gets the most visibility, but the others deal with moral emotions, such as the ethical adapter, and the responsibility advisor addresses some of the issues associated with responsibility. We do put in components for overrides, which in our minds allows for meaningful human control, maybe not in everybody's mind. That's why I always argue we need to talk about specific systems, and then consider whether they fit the criteria. And there's a series of scenarios; let me show this first one. Whether you agree with that decision or not, it's easy to implement. You use kill zones and kill boxes, and you say, using GPS, if you try to shoot something in here and you pull the trigger, it doesn't fire. It's as simple as that. Part of the whole goal is to introduce friction into the human decision-making process, so that we don't do things faster than we absolutely need to. Transpose that — and by the way, the rules of engagement might not have been exactly that, rules of engagement are classified — transpose that to the Taliban praying in a mosque, or Taliban in a school, culturally protected property under the Geneva Conventions. You can't engage that particular target unless it's being abused: if they started shooting from it, or they had stored weapons in it, then it would be a different scenario. But they were engaged in a funeral.
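[Editor's illustration] To make the mechanism described above concrete, here is a minimal sketch in Python of the kind of check Arkin is describing: an engagement requires a human-delegated obligation, and every prohibition (including a GPS kill-box constraint) must be satisfied or the trigger is simply withheld. This is not code from Arkin's actual architecture; the names, fields and conditions are hypothetical simplifications.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Target:
    lat: float
    lon: float
    is_wounded: bool = False          # hors de combat: must not be engaged
    in_protected_site: bool = False   # e.g. mosque, school, hospital

@dataclass
class Engagement:
    target: Target
    human_obligation: bool            # a human has explicitly delegated this engagement

@dataclass
class KillBox:
    # GPS-bounded region inside which engagements may be permitted at all
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, t: Target) -> bool:
        return (self.lat_min <= t.lat <= self.lat_max and
                self.lon_min <= t.lon <= self.lon_max)

def ethical_governor(e: Engagement, kill_box: KillBox) -> bool:
    """Permit fire only if an obligation exists AND every prohibition is satisfied.
    Anything unmet, or anything outside the bounded scope, means: do not fire."""
    if not e.human_obligation:                      # no human-given obligation -> no fire
        return False
    prohibitions: List[Callable[[], bool]] = [
        lambda: kill_box.contains(e.target),        # outside the kill box -> trigger is blocked
        lambda: not e.target.in_protected_site,     # culturally protected property -> no fire
        lambda: not e.target.is_wounded,            # wounded, non-combatant status -> no fire
    ]
    return all(check() for check in prohibitions)

# Example: a target inside the kill box but in a protected site is not engaged.
box = KillBox(lat_min=-35.0, lat_max=-34.0, lon_min=150.0, lon_max=151.0)
t = Target(lat=-34.5, lon=150.5, in_protected_site=True)
print(ethical_governor(Engagement(target=t, human_obligation=True), box))  # False

The point of structuring the sketch this way is that every condition falling outside the narrowly bounded scope defaults to withholding fire, mirroring the principle that if a situation exceeds the system's bounded morality, it simply does not fire.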

Other scenarios include this one, which I'll try to go through relatively quickly, to get to the conclusions and leave time for discussion. This one dealt with a video of war porn, or a brag video, where there were three, I'll call them combatants, who were planting improvised explosive devices by the side of the road. They were exhibiting hostile intent in this case, which makes them legitimate targets, although I always say I am not a lawyer; I learned the acronym for that, it's IANAL, I am not a lawyer. In any case, the first one involved an Apache, which is a helicopter equipped with a chain gun that fires small cannon shells. It's very devastating to human flesh, so I don't show the video of this. The first target was engaged and neutralised, which is the politically, militarily correct term to use. The second target was frozen like a deer in the headlights. He still didn't know what was going on; the Apache at standoff distance is relatively quiet from the front. And that individual was neutralised. The third guy climbed under the truck, and the discussion went like this. There were two voices; it's hard to tell who they were, one is the gunner and the other was either the pilot or someone off the craft. "Want me to take the other truck out?" Actually, I should say he fired by the side of the truck after that; remember, these are cannon shells as well, too. Then: "Roger, wait for movement by the truck." He saw the guy getting out from under the truck, staggering maybe five metres, and collapsing on the side of the road. "Movement right there." "Roger, he is wounded." That person has been declared wounded by that other soldier, which renders him equal to a non-combatant. Parse out that language: it means you can't engage him, he deserves the status of a non-combatant, and technically you're supposed to call for help for that individual at that point in time, the same as you would do for your own troops, according to international humanitarian law. But the other voice says, without hesitation, "Hit him." The first voice was still targeting the truck, which was probably legitimate war materiel in that particular case, and could take it out; the other voice says, "Hit the truck, and him. Go forward of it and hit him." The crosshairs are moved, the other guy is engaged, and the wounded man has been killed. And he says "Roger" after that, as well, too. Imagine walking up and blowing the guy's brains out with a pistol on the ground, if you were told to kill this guy, as opposed to from standoff distance. These are summary executions, in this particular case, and international humanitarian law deserves better. I've presented this many times in front of JAG lawyers, the military lawyers who deal with the rules of engagement. And all with the exception of one, and I mean large numbers, all with the exception of one, have agreed when I asked them: should my robot take that shot? They have said no. The scary thing was, and under Chatham House rules I can't tell you more, that the individual who did agree was fairly high up in the echelons of the JAG lawyers. So I'm not a lawyer, I can't make these particular decisions. But someone, if the systems are going to do this, needs to make them.

The other example is the Samsung Techwin system, which was fielded in the DMZ, which is still in an active state of war. There is a military demarcation line which you can cross and get shot; it's as simple as that. There are warnings on the fences, and everyone knows: stay out of here if you're a civilian. It's appropriately demarcated. This system can see, I believe, five kilometres at night and three in the daytime. The machine gun can't shoot that far. And it can engage these targets without additional human intervention. They field tested it there, then pulled it back from the border, and they're not using it anymore. They were looking at a mobile version at one point, as well. So it can track: if you see the bounding box there, there's a little dot on the head of that individual as well, too. And if you give it autonomous capability, it can take that guy out; you don't have to have a human being give a command to do that. This technology exists, to be able to do it. And that's part of the argument that I make. And there are other scenarios at which these systems may potentially excel. If you're interested, there are many videos on our website where you can see that. The most important thing is, there are many, many, many unanswered questions about how to do this, and the research is not necessarily being done at this particular point. All kinds of things have to be looked at before these systems should be fielded, hence my reason for the moratorium, until we get better answers about whether we can do this or not. My bigger fear is that we will find ourselves in warfare and then rush these things out into the battlespace before they have been effectively tested. Other countries: Israel was looking at ethical algorithms, according to a classified report that Wired obtained a few years back, so they could avoid pesky war crimes tribunals, at least according to them, as well, too. I think that's a bit extreme.

And finally, if you just don't like the thought of robots killing people, there's hope in international humanitarian law. It's called the Martens Clause, which basically says that we can ban or restrict this based on the requirements of the public conscience. This has never been used, although it's been included from the very beginning of international humanitarian law. And part of the problem is, just as we've had trouble defining autonomy and meaningful human control, defining what the dictates of the public conscience are, and how we measure that, perhaps through surveys; this is being explored by the campaigns as well, too, as a possible means leading to a potential ban. So all these things give rise to the belief that we can potentially use these systems in ways that will protect civilians, but still, there is great risk associated with their usage. I hope we can have meaningful discussions about the restrictions. By the way, we get dual use out of this: the ethical mediators came out of the military work. Sometimes you get benefits like the Internet and other things as well, too, which came out of military research, DARPA research, not Al Gore and the like. And the bottom-line summary, again: the status quo is unacceptable. We have a lot to do before we can actually make any claims about these systems being safe to deploy. We have to be proactive about these systems, whether it's arguing for bans, arguing for moratoria, or arguing for regulations, and you guys have to make these decisions. I am not prescriptive; I'm not going to tell you what the right answer is, unlike some of my colleagues in the field, that we should ban, or we should do this, or we should let it go laissez-faire. You need to make those choices, and you need to express them to the powers that be, whatever they may be. So speak up and get engaged in the discussion. And I still believe, until shown otherwise, that we need to do something about non-combatant lives, and we need to do it sooner rather than later. So if you're interested, there are plenty of publications, my book on the subject and the like, as well, too. And thank you for listening, and take care.

Speakers

Toby Walsh

Toby Walsh is Chief Scientist of UNSW.AI, UNSW Sydney's new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being "banned indefinitely" from Russia. He is a Fellow of the Australian Academy of Science and was named on the international "Who's Who in AI" list of influencers. He has written four books on AI for a general audience, the most recent being Faking It! Artificial Intelligence in a Human World.
