A Few Thoughts on “Killer Robots”

I’ve been following the discussion over on IEEE Automaton about the letter from the AI conference in Buenos Aires (for, against, rebuttal, etc.), and since this is the kind of thing that normally gets me very engaged (or enraged), I was surprised to find myself ambivalent on the subject.

The original letter ends, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”  This gives us several propositions to consider:

  • “Starting a military AI arms race is a bad idea”

I think we can agree that this is true.

  • “… a military AI arms race … should be prevented”

This is also something we can agree on.  But if we look at the underlying assumptions behind the complete statement, we get a few more propositions that are less easy to respond to.

  • A military AI arms race can be prevented by a ban on the development of specific types of weapons

I disagree with this.  It assumes that the only research that would support a military AI arms race is research into specific types of weapons, namely weaponized autonomous robots.  It seems eminently plausible to me that research into many technologies other than offensive autonomous weapons would contribute to a military AI arms race: improved mind-machine interfaces for non-autonomous robots and other systems, or more intelligent surveillance systems, for example.  A military AI arms race would also produce many terrifying things other than autonomous weapons (anything involving public utilities or transportation networks, for instance), and the underlying technologies developed in such a race are still likely to be developed for other applications.  It is entirely plausible that you could conduct a military AI arms race without ever reaching for a weapon.

The only aspect of the problem that requires a weapon is the part where the robot learns how to control the aiming and initiation of that weapon, and that isn’t an artificial intelligence research problem; it’s a low-level controller problem that is already being solved for many types of systems.  The artificial intelligence question that contributes to a military AI arms race is “when should I fire?”, and you don’t need a gun to study that one.  You don’t even need a problem space that includes guns.  You just need an analogous non-military problem, and you can work on that technology without ever violating a ban on autonomous offensive weapons.
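To make that concrete, here is a minimal sketch of what I mean by studying the “when should I act?” decision on an analogous non-military problem.  The scenario, names, and cost numbers below are hypothetical, invented purely for illustration: a crop-spraying robot deciding whether to act on an uncertain detection exercises exactly the decision logic an armed robot would need, with no weapon anywhere in the loop.

```python
# Minimal sketch (hypothetical example): studying the "when should I act?"
# decision on a non-military problem.  A precision-agriculture robot decides
# whether to spray a detected weed, given a classifier confidence and the
# estimated costs of acting on the wrong target.  Swap the actuator and the
# cost table and the same logic answers "when should I fire?"; the decision
# research never needs to touch a weapon.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "weed" or "crop"
    confidence: float   # classifier confidence in [0, 1]


# Hypothetical costs of a wrong action versus a missed action.
COST_FALSE_POSITIVE = 10.0   # spraying a crop plant by mistake
COST_FALSE_NEGATIVE = 1.0    # letting a weed survive this pass


def should_act(det: Detection) -> bool:
    """Act only when the expected cost of acting beats the cost of waiting."""
    if det.label != "weed":
        return False
    expected_cost_act = (1.0 - det.confidence) * COST_FALSE_POSITIVE
    expected_cost_wait = det.confidence * COST_FALSE_NEGATIVE
    return expected_cost_act < expected_cost_wait


if __name__ == "__main__":
    for conf in (0.5, 0.8, 0.95):
        print(conf, should_act(Detection("weed", conf)))
```

Everything that matters for an arms race lives in that comparison (how good the perception is, how the costs are weighed, how much confidence counts as “enough”), and none of it requires the actuator to be a gun.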

  • “offensive autonomous weapons” are at a sufficiently early stage of their development that banning them is practical

I’m not at all sure this is true, since there are numerous examples of armed military robots, and even individual non-experts who have already assembled terrifying combinations of robots and weaponry.  The step from this RC quadcopter to an autonomous Lego Mindstorms weapon requires skills and technology available to high school students.

  • “offensive autonomous weapons” are a sufficiently concrete subject that they can meaningfully be banned

Bear in mind that this letter is coming from the AI community, not the robotics community.  They are not suggesting a ban on the technology they are most concerned with (the decision-making and possibly the perceptual elements of the software); instead, they are suggesting a ban on other people’s work (specific combinations of software and hardware).

  • “beyond meaningful human control” is a reasonable way of specifying the particularly troublesome weapons that we ought to ban

We don’t even have measures for how much human control is associated with “autonomous”.  How on Earth would we ever figure out whether a given autonomous robot is actually “beyond” “meaningful” human control?

The discussions so far seem to have missed a key distinction that we need to make in order to talk sensibly about this topic.  We are discussing at least three separate questions.

  1. Should the international community be doing anything to prevent its constituent militaries from exploring this as an option for future combat between countries?
  2. Should there be restrictions or laws in place in individual countries governing what individuals choose to do with equipment that they legally purchase or create?
  3. Are we discussing the development of specific technology, or are we discussing applications of technology that has been developed for other purposes?

The original letter is explicitly aimed at the first question – its authors argue that this application of technology is both sufficiently dangerous and sufficiently unpredictable that the international community should treat it like certain types of mines and ban its development entirely.  But the letter asks us to ban technology development, not specific applications of technology, and that is a core problem.

It is appropriate to discuss a ban on specific applications of autonomous robots, in the same way we have bans on specific applications of other technologies, both at the national and international level.  But we should not conflate that with banning development of any technology that might eventually be used for those applications, and we should not assume that banning those applications will have any more effect than the “severe stigmatization” mentioned in the “for” article.  The whole purpose of robotics as a technology is its flexibility and adaptability and the ease with which we can alter its uses and goals.

Not putting a ban in place until we understand the potential benefits we are giving up, as suggested in Dr. Arkin’s article, is an eminently reasonable proposition, but his response ignores a core aspect of question 1.  It addresses only cases where the situations involving autonomous robots are in some way comparable to situations involving the use of soldiers.  That comparability is unlikely, because two of the main reasons autonomy is used instead of automation or teleoperation are (1) to increase the safety of the system as it performs a task in an unpredictable environment (automation is ineffective when obstacle positions are unknown; teleoperation is ineffective when communications latency is high or bandwidth is low and the task is urgent) and (2) to allow systems to perform tasks that were not previously possible.
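As a rough sketch of that trade-off (the thresholds below are made up for illustration, not drawn from any real system), the choice between automation, teleoperation, and autonomy comes down to how predictable the environment is and whether the communications link is usable relative to the urgency of the task:

```python
# Rough sketch with made-up thresholds: why a designer ends up choosing autonomy.
# Automation needs a predictable environment, teleoperation needs a comms link
# that is usable relative to how urgent the task is, and autonomy is what is
# left when neither condition holds.

def pick_control_mode(environment_predictable: bool,
                      comms_latency_s: float,
                      comms_bandwidth_kbps: float,
                      task_deadline_s: float) -> str:
    """Return "automation", "teleoperation", or "autonomy" for a given task."""
    if environment_predictable:
        return "automation"        # e.g. a fixed factory cell with known obstacles
    # A human can only stay in the loop if the link is fast enough for the deadline.
    link_usable = (comms_latency_s < 0.25 * task_deadline_s
                   and comms_bandwidth_kbps > 500)
    if link_usable:
        return "teleoperation"
    return "autonomy"              # unpredictable world and an unusable link


# An urgent task over a slow, thin link ends up autonomous.
print(pick_control_mode(False, comms_latency_s=2.0,
                        comms_bandwidth_kbps=200, task_deadline_s=5.0))
```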

We cannot assume that the robots will be performing the same kinds of missions as human soldiers, and we cannot assume that the kinds of missions they perform will reduce casualties compared to humans if those missions weren’t possible before.  Instead, it is just as likely that there will be additional casualties as the robots fail to figure out who the target is, and as the target humans become adept at disguising themselves as non-combatants.  I agree that we do not yet know what those trade-offs are, and we will make much better decisions about what needs to be banned once we know more, but I disagree that the only way to discover those trade-offs is by continuing research into explicitly dangerous robots.

We can explore perception and ethical robot operation without bringing in battlefield robots – there are plenty of other areas in which these would be beneficial capabilities (child care, search and rescue of stranded hikers, elder care, crowd control, even disrupting a bar fight).  Banning the application doesn’t prevent us from estimating what the benefits of the banned approach might be.

But even if estimating the benefits were only possible with continued development, holding off on a ban because of the effect a ban would have on our risk assessment of that technology seems like a bad idea.  Banning a technology because it might eventually be used to develop something bad is a worse one.

Which takes us to question 2.  Robotics isn’t like chemistry, where bans on specific substances have sanitized chemistry sets until they have very little of interest left in them and have criminalized human curiosity (and I worry that this is happening in robotics), or like nuclear weapons, where one specific component of the weapon (the radioactive material) is tightly regulated.

Evan Ackerman makes an excellent point early in his article against banning killer robots.  He points out that the barrier to entry for this particular technology is so low that international bans cannot be a valid answer, any more than international bans on land mines prevent individuals from creating improvised explosive devices and putting them on roads to blow up their enemies’ trucks.

But I would take that argument a step further – it’s not just that the barrier to entry is unreasonably low.  It’s that the only element we could ban that is uniquely associated with autonomous killer robots is the weapon itself, and banning the weapons is insufficient to prevent the development of autonomous killer robots (note: edited on 2/5/16 to clarify argument).  I’m not arguing that an autonomous robot with a gun isn’t a fundamentally scarier and potentially more dangerous entity than a gun by itself (or even a gun in the hands of a human).  I’m saying that the thing that makes the robot scary is the gun, but banning the gun won’t prevent scary robots from existing.  Robots with knives could be scary, but right now they’re chopping vegetables.  Robots with legs, even when they stop falling over, even if they can run, are still at the mercy of the software that defines their goals.

Autonomous robots don’t have a physical property that is the difference between dangerous and safe. The only concrete difference between an “autonomous killer robot” and a perfectly normal, safe-for-everyday-purposes autonomous robot is a sign error in the goal definition.  A perfectly safe consumer good (an autonomous car) becomes an autonomous killer robot when the acceleration associated with the “don’t hit pedestrians” goal goes from negative to positive.
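To illustrate the sign-error point, here is a deliberately toy sketch (hypothetical numbers and function names, not code from any real autopilot): a one-dimensional vehicle chooses an acceleration by minimizing a weighted objective, and the only difference between the safe version and the dangerous one is the sign of a single weight.

```python
# Toy illustration, not real autopilot code: a one-dimensional "car" picks an
# acceleration by minimizing a weighted objective.  The only difference between
# the safe robot and the "killer robot" is the sign of one weight.

def choose_acceleration(dist_to_pedestrian: float,
                        pedestrian_weight: float) -> float:
    """Pick the candidate acceleration (m/s^2) with the lowest objective score."""
    candidates = [-3.0, -1.0, 0.0, 1.0, 3.0]

    def objective(a: float) -> float:
        progress_term = -a  # prefer making forward progress
        # Penalize (or, with a negative weight, reward) accelerating toward a
        # pedestrian; the closer the pedestrian, the stronger the term.
        pedestrian_term = pedestrian_weight * a / max(dist_to_pedestrian, 0.1)
        return progress_term + pedestrian_term

    return min(candidates, key=objective)


# Safe goal: a positive weight encodes "don't hit pedestrians", so the car brakes.
print(choose_acceleration(dist_to_pedestrian=2.0, pedestrian_weight=+10.0))   # -3.0
# Flip one sign and the identical sensors, planner, and hardware now accelerate
# toward the pedestrian.
print(choose_acceleration(dist_to_pedestrian=2.0, pedestrian_weight=-10.0))   # 3.0
```

Nothing physical changes between the two cases, which is why regulating the hardware doesn’t separate the safe robot from the dangerous one.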

To address the third question: there are only two things we can regulate on the consumer end.  One is the weapons themselves, which will not prevent people from making killer robots.  The other is the software the robots run; even excluding learning and adaptation, we do not have tools that would let us determine definitively whether a given robot is capable of intentionally harming a human – in almost all cases, the robot wouldn’t know either.

Even assuming that we could overcome the deep attachment that the public and industry have to the many peaceable uses of robots, there is no way to ban an autonomous robot that can kill without also banning robots.  An autonomous car running you over is just as fatal as a robot soldier shooting you with a gun.

The difference between an “offensive autonomous weapon” and an autonomous non-killer robot is that the non-killer robot doesn’t have a goal that says “kill”.