Enough Inappropriate Ethical Analysis Already!

I’ve reached my threshold for uninformed articles about the supposed ethical conundrums that roboticists should be taking into account when they design autonomous cars.  At the end, I’ll be adding more questions to the list of potential research topics, but first, I want to address why these articles are so infuriating.

Grrrr.  Begin rant.

The articles don’t seem to have been vetted at all.  Exhortations to follow Asimov’s three laws, or (worse yet) to implement additional laws and address philosophical dilemmas about which group of people to kill in some avoidable concatenation of tragic events – all of them incorporate assumptions about how robots are built and designed that are (at least currently) untrue.

These situations all involve an unrealistic view of a robot’s ability to perceive its environment.  Anyone who designs the software that robots use to perform tasks knows what a big problem perception is.  Sometimes advances in sensing make a big difference, but often improved sensors simply result in more data available to the robot, not in an improved understanding of its surroundings.  And all of these ethical and philosophical dilemmas fundamentally rest on perception.  The decision about which group of people to kill is only an ethical dilemma if, first, it is possible for the car to find itself in that situation, and if, second, it is actually possible for the car to differentiate between the two groups in some way.

Asimov’s Laws (often taken as a starting point) rest on the assumption that the robot can project the ramifications of its actions and determine whether an action it takes is likely to result in harm to a human or to itself.  The whole reason we have autonomous systems instead of automated systems is that the real world is chaotic and unpredictable.  We are just barely able to identify humans in well-defined environments with adequate lighting; predicting that a human will fall over if pushed is at least within the realm of possibility (assuming we ignore the complexities of the surface they’re standing on, the composition of their shoes, an at-best-inaccurate estimate of their weight, their physical strength, and any other objects that may be in the vicinity).  Determining whether a human is far enough away from an explosion to bound the probability that they will be injured would require a good understanding of the exact placement of the bomb; the masses, materials, and robustness to stress of every object in the environment; the properties of the human, including their clothing; and the precise composition of the bomb.  On top of this, the robot needs to compute the probable impact of any actions it takes, which implies awareness of its own composition and any time constraints.  Humans, with all our millennia of successful life-or-death decisions and extremely effective sensory post-processing, are incapable of deciding what a “correct” set of rules would look like – if we knew what the “correct” set was and could apply it consistently, we wouldn’t need judges and juries.  Even if we could implement such rules, how would we know which rules to implement?

We can’t even build a robot that perceives the world well enough to comply with Asimov’s Laws, let alone verify that compliance.  We might, theoretically, eventually be able to build a robot that could face the trolley problem (see the background reading below) in a meaningful way, if both the sensors and the perceptual software were mature enough, but why would we?

Fundamentally, the job of a car is to get its occupants safely from one place to another while leaving its environment in substantially the same condition it was when it started.  Sensors for these systems need to operate very, very quickly, which means efficient sensing and minimal processing.

In order to make sensing efficient, as a designer, you derive the smallest set of things you need to identify as important elements of the environment.  For a car, this means you pay attention to objects that are in the road and objects whose movements indicate that they are likely to end up in the road before you’re past them.  You also pay attention to signs and features that are there to help you decide what to do:  stop signs, yield signs, traffic lights, turn signals, brake lights and reverse lights, and so forth.

But the net result of this is that telling the difference between a human, an old human, a child, a dog, or a cow becomes simply a matter of answering “how fast is it moving,” “is it going to be in the road,” and “am I moving slowly enough to avoid it”.  There is no reason to give a car the ability to tell the difference between one object and another, except to determine whether the road is permanently blocked or whether the object is likely to move out of the way.  In no case should the car be expected to hit anything.

A human on a bicycle is faster than a toddler, but a toddler is more maneuverable and less predictable.  The simplest solution is simply to assume that all moving objects have the potential to swerve around as unpredictably as a toddler and as fast as someone on a bicycle, and to behave accordingly.  There is no incentive for a car manufacturer to build in the kind of awareness of the environment that would enable a car to distinguish between an old man and a young boy beyond recognizing that they are both capable of movement.  Just as human drivers are sublimely unaware of whether the people they drive past are saints or murderers, an autonomous car would be unlikely to know or care about the age of the humans in its vicinity.
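To make the point concrete, the classification-free policy described above can be sketched in a few lines: every detected object is assumed capable of a cyclist’s speed and a toddler’s unpredictability, and the only question is whether the car can stop in time.  All names, thresholds, and physics here are illustrative assumptions, not any real autonomous-driving stack:

```python
# A deliberately conservative, classification-free obstacle policy:
# every tracked object is assumed to be able to head for our path at
# a fast cyclist's speed. All numbers below are illustrative.

from dataclasses import dataclass

WORST_CASE_SPEED = 8.0   # m/s, roughly a fast cyclist (assumed bound)

@dataclass
class TrackedObject:
    distance_m: float    # current distance from the vehicle's path
    moving: bool         # any detected motion at all

def safe_speed(objects: list[TrackedObject],
               time_horizon_s: float = 2.0,
               max_speed: float = 13.0) -> float:
    """Pick a speed at which the car can stop before any object could
    plausibly reach its path, without classifying the object at all."""
    speed = max_speed
    for obj in objects:
        # Worst case: the object closes this much distance toward us.
        reachable = WORST_CASE_SPEED * time_horizon_s if obj.moving else 0.0
        margin = obj.distance_m - reachable
        if margin <= 0:
            return 0.0   # could end up in our path: stop
        # Cap speed so we can brake within the remaining margin
        # (crude constant-deceleration bound, a = 6 m/s^2 assumed).
        speed = min(speed, (2 * 6.0 * margin) ** 0.5)
    return speed
```

Nothing in this sketch cares whether the object is an old man, a young boy, or a cow – only how far away it is and whether it moves.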

These ethical dilemmas aren’t actual dilemmas the robots are likely to face.  Instead of providing commentary that will help policy-makers understand the actual problems associated with robots, the writers use arguments based on incorrect assumptions to conclude that roboticists should be implementing ethical constraints that are fundamentally impossible to enunciate in a form that the robots will be able to understand.

And in the meantime, roboticists have actual ethical dilemmas that need to be faced.

What the media can do about it:

Before you run a story about scary things a robot might do and how people who design robots should really be thinking about this stuff, call a robotics professor at your local university.  There will be at least one.  Go to the university’s website, search for “robot”, and look for the professor associated with the results.  Call them up and ask if they can look the story over and tell you whether it makes sense.  They’re probably busy, but if they don’t have time, they may have a grad student who can help you.  You might even consider crediting whoever helps you vet the article (e.g. “reviewed by X at University Y” in a footnote).

What we should be worrying about:

We have bigger ethical problems than made-up stories about hypotheticals that, in most cases, aren’t even likely to be possible.

For instance, we know just how difficult it is to figure out whether a given robot is going to behave the way we want.  Most of the time, we haven’t even figured out what we want (see the “correct” rules problem, above).  We build it to do something that seems obvious (“don’t hit things”), and then we run the robot somewhere new and discover that we didn’t just want it to not hit things, we wanted it to also avoid things it could fall off of (“avoid stairs”) and to avoid the fringe on a rug because it gets caught in the wheels (“avoid fringe”) and to somehow get back to its charger before it runs out of power so we don’t have to rescue it from the middle of the floor.  Even when we think we have finally done enough experiments and figured out what we really want it to do, how sure are we that it is ready to operate in someone else’s house?  If we have built a robot to help take care of someone elderly who has difficulty opening pill bottles and keeping track of their medication, how certain do we have to be that we have covered the typical or even rare use cases?
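The build-it, break-it, patch-it loop described above tends to produce behavior that is really just a growing pile of veto-style constraints, each one added after a failure in the field.  A minimal sketch, where every name and state key is hypothetical:

```python
# Behavior as an accumulating list of veto rules, each discovered the
# hard way. All names and state keys here are illustrative.

from typing import Callable

State = dict  # e.g. {"obstacle_ahead": True, "at_drop_off": False, ...}

# Each constraint returns True if the current state is acceptable.
constraints: list[tuple[str, Callable[[State], bool]]] = [
    ("don't hit things", lambda s: not s.get("obstacle_ahead", False)),
]

def action_allowed(state: State) -> bool:
    """Proceed only if every constraint discovered so far is satisfied."""
    return all(ok(state) for _, ok in constraints)

# After the robot falls down the stairs, we bolt on another rule:
constraints.append(("avoid stairs",
                    lambda s: not s.get("at_drop_off", False)))
# After the rug incident:
constraints.append(("avoid fringe",
                    lambda s: not s.get("fringe_nearby", False)))
```

The worrying part is that the list is only ever as complete as our last surprise – there is no rule in it for the failure we haven’t seen yet.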

Where is the ethical boundary around providing the elderly with telepresence robots when they become too infirm to participate in their children’s and grandchildren’s lives?  Are we encouraging families to sacrifice physical contact for virtual participation?  Will they be more connected to their lives, or less?  (I’m guessing it will depend largely on the person.)

What are the ethics of robot-assisted childcare?  If we are talking about robots that allow institutionalized children with communicative and movement disorders to finally interact with their peers through robot avatars, surely that is a good use of robots.  Allowing sick children to attend school through a robot avatar, probably a good thing (although an argument could be made for allowing sick children to rest and recuperate instead, catching up once they’re better, or for customizing each child’s school experience so they learn at a pace that fits their circumstances, whatever those may be).

But what about perfectly healthy children whose parents create a worldview where the safest thing for their child is to stay home and only ever attend school virtually?  Already children rarely play in their front yards, or wander through their own neighborhoods unsupervised – both things that were common when I was a child.  When I was 7, I was expected to walk 2 blocks to school along my residential street, whether my neighbor (also 7) was there to keep me company or not.  By the time I was 10, I was expected to walk or bike to and from school on my own and encouraged to go to the park and play or take my bike to the nearest playground.  By the time I was 14, I was allowed to bike anywhere I could get to.  And this was in an era when being able to call home meant getting to a pay phone and using quarters.  I never actually needed those quarters in my pocket, and mostly I didn’t even bother carrying them, since if I couldn’t walk, I wouldn’t be able to use them, and if I could walk, I could always get where I needed to go eventually.

When my son was in elementary school, almost none of the kids walked to and from school.  I walked an 11-year-old home because her parent hadn’t come to pick her up and she was too uneasy to walk home alone because of the potential for predatory strangers.  It was 6 blocks, through a leafy, reasonably prosperous, well-kept residential neighborhood with almost no traffic and no history of predatory strangers.

If we build a robot that a sick or disruptive child can use to attend school without endangering his or her classmates, there is nothing stopping a parent from using that robot to prevent the classmates from endangering their child, and it seems to be becoming more and more likely that some parents will choose to do so.

What kinds of constraints is it ethical to implement in the robot?  What are the ethics of socializing children through actual physical interaction versus socializing them entirely through virtual means, especially when (if the robot is designed for use by disruptive children) the robot disallows certain kinds of behavior?  And potentially even more worrying, how are we warping these children’s ability to explore new ways of thinking by constraining them in this way?  Are children who interact with others through an artificially constrained platform less likely to innovate as adults, or are they likely to have inbuilt blind spots where certain kinds of actions simply don’t register as options?

What about the ethics of medical robots?  A few years ago, a company put up a billboard advertisement asking “Would you swallow a robot?” whose subhead said that they didn’t make the hardware, they made the software that made it possible.  At a conference workshop, about half those present (primarily robotics researchers) agreed that they would swallow a robot given appropriate circumstances, such as a doctor’s recommendation and proper certification procedures.  When it was specified that the robot included on-board software, that number dropped to about ten percent, even with certification-authority and medical-authority safeguards.

And this doesn’t even get into the ethics of military robots.

Some (many?) of these questions are questions for ethicists and policy-makers, but as roboticists we could at least be trying to address the elements that relate to the design of our systems.

Should we be building in hooks to support specific types of safety caging on our systems, so that when the policy-makers decide the ethical boundaries of acceptable action, our tools are ready to accept them?  Should roboticists be researching the ethical options that are even available to robots?  Should we be contemplating what additional sensors our systems should have to enable them to be sufficiently aware of their context to act ethically?

And, bringing it back to the hypothetical ethical decisions that cars are unlikely to ever have to make, should we be developing a language that enables us to describe the areas in which it is possible for a given system to operate ethically, and the areas in which it is, fundamentally, just a machine, doing exactly what we told it to do?


Background reading:

The Trolley Problem:

http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/ –   “Programmers still will need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world.”

http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/ – “one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible”


http://www.forbes.com/sites/timworstall/2014/06/18/when-should-your-driverless-car-from-google-be-allowed-to-kill-you/ – “It’s seems clear and obvious that the technology is going to get sorted pretty soon….The basic Trolly Problem is easy enough, kill fewer people by preference.”

http://www.nature.com/news/intelligent-robots-must-uphold-human-rights-1.17167 – “We should extend Asimov’s Three Laws of Robotics to support work on AIonAI interaction. I suggest a fourth law: all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood.”

And a couple that actually seem to be taking a more sensible approach to the problem:


http://robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/, and the slightly more disturbing followup:  http://phys.org/news/2014-08-ethics-driverless-cars.html

Situations in real life where children actually get hit:

Bystanders apparently get run over by potentially negligent parents (link), and occasionally a child gets hit by a police car responding to an emergency (link) or run over while chasing an ice cream truck on a bicycle (link), and sometimes children get run over because they’re close to a car when it starts moving and the driver doesn’t know they’re there (link, link, link), especially in a parking lot (link).  Crosswalks are apparently dangerous (link), as are apartments (link).

This is close to the runaway toddler scenario (link), in a parking lot with a distracted driver and an environment with poor visibility, but an autonomous vehicle wouldn’t be distracted and would be going slowly because of the uncertainty in its surroundings.  Instances of toddlers running abruptly into traffic on roads, rather than in parking lots, seem to be vanishingly infrequent.

Furthermore, parents today who simply let their children walk home from the park alone can end up on probation with child protective services (link).  Culturally, we’ve been shifting away from responsibility and towards safety for a long time.
