Author Archives: Signe Redfield

Gaps

What is the biggest gap in robotics research?

There are two major gaps that we should at least investigate as part of the process of defining robotics as a discipline.

1.     We don’t have positive laws.  Our history is still largely trial and error and scavenging from other disciplines, rather than the development of foundational design principles or laws governing the creation or operation of robots in general.

We do have recognizable (tongue-in-cheek) negative laws, roughly equivalent to Murphy’s Law.

The First Law of Robotics:  You’re wrong (you set the parameters wrong, the user doesn’t need the robot to do whatever it was you thought they wanted, your model of the environment was wrong, etc.);

The Second Law of Robotics:  On a good day, half the robots in your lab are actually working (alternate version:  Your robot is working a little over (or under) half the time.  This has actually been demonstrated in two longitudinal studies);

The Third Law of Robotics:  All demos are faked or staged in some way (if you want to demonstrate a particular behavior or capability, you need to stage the demo so that that specific behavior or capability is required);

The Fourth Law of Robotics:  The likelihood of something going wrong with the demo is proportional to the importance of the people you’re putting it on for;

Corollary to the Fourth Law:  The thing that goes wrong with your demo will be unrelated to what you are trying to demonstrate, but will completely prevent you from demonstrating the important part of your work.

2.     We have trouble defining the space in which we work.  For example, autonomy means to operate without outside control, but the human intervention axis is demonstrably not equivalent to the autonomy axis.

Since one of the most common categorization tools for levels of autonomy does not make this distinction clear, a simple example is provided.

We can define four extreme types of robots:

  • Clockwork Automaton:  robots that do the same thing, every time you start them, without any feedback from sensors (or even communications) about their environment.  They are incapable of changing their actions, but require no human intervention once activated.  A wind-up toy is the simplest example of these, but extremely complex mechanisms have been built.
  • Remote Control:  robots that need to be given explicit direction from a human operator at all times; without continuous input from the operator, they will simply stop moving.
  • Mixed Initiative: robots that interact with their human operators to determine the best course of action – these are autonomous systems that can make suggestions to their operators and sometimes act on their operators’ suggestions and sometimes act on the basis of their own decisions.
  • Autonomous: robots that can perform tasks robustly in unconstrained environments  based on local sensing without any human intervention.  These include everything from a Roomba(TM) to Chappie.

On the human intervention axis, clockwork and autonomous robots lie at the “less” end of the scale, while remote control and mixed-initiative systems both involve significant human interaction.  On the autonomy axis, clockwork and remote control robots lie at the “none” end of the scale while autonomous and mixed-initiative systems demonstrate significant autonomy.

[Figure: Autonomy vs. Interaction]
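To make the independence of the two axes concrete, here is a minimal sketch (with entirely made-up scores – only their relative ordering is meant to mean anything) that places the four extreme robot types on separate autonomy and human-intervention scales:

```python
# A minimal sketch placing the four extreme robot types on the two axes
# discussed above. The numeric scores are arbitrary illustrations; only
# their relative ordering matters.

ROBOT_TYPES = {
    #                      (autonomy, human_intervention)
    "clockwork_automaton": (0.0, 0.1),  # no feedback, but runs unattended
    "remote_control":      (0.0, 1.0),  # a human closes every control loop
    "mixed_initiative":    (0.8, 0.6),  # shares decisions with an operator
    "autonomous":          (1.0, 0.1),  # local sensing, no intervention
}

def describe(name):
    autonomy, intervention = ROBOT_TYPES[name]
    return f"{name}: autonomy={autonomy:.1f}, human intervention={intervention:.1f}"

if __name__ == "__main__":
    for name in ROBOT_TYPES:
        print(describe(name))
```

Clockwork and autonomous machines share the same (minimal) intervention score while sitting at opposite ends of the autonomy scale – exactly the distinction the usual levels-of-autonomy categorizations blur.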

Mixed-initiative robots require less interaction than remote control robots, but that interaction demands more expertise and a deeper understanding of the problem and the environment.  The purpose of the human in a remote control system is to provide the low-level feedback loop in the controller, linking the sensory inputs to the motor outputs.  The purpose of the human in a mixed-initiative system is to work with the robot to jointly develop a solution to a complex problem and to provide the additional context not available to the robot.  No robot is entirely free of human interaction.  Even clockwork automata and fully autonomous robots have at least one interaction with a human – neither can start until a human activates a mechanism or sends a command.

But the core problem associated with both gaps is that the space represented by this diagram involves many, many different kinds of robots, performing many, many different kinds of jobs.

In some ways, this is like mathematics — both robotics and mathematics involve the study of many apparently disparate approaches to understanding the world, unified by a philosophical approach to problem solving, and both result in tools that are useful to many disciplines outside their own.

In mathematics, we have logic and proofs, algebra, geometry, and calculus, set theory and  probability.  In robotics, we have physical platform designs, perception, actuation, and manipulation, planning and decision making.

In mathematics, we have a philosophical approach to understanding the world that revolves around the definition of number and the definition of object.  In robotics, we have a philosophical approach to understanding the world that revolves around the actions required to achieve a goal.

In mathematics we have interest rates and statistical analysis supporting the financial industry, we have Fourier transforms describing electrical and acoustic waveforms, enabling us to listen to recorded music and communicate with people in space and around the world, and we have boolean logic, providing the underpinning for all the binary manipulations occurring in every computer we make.

In robotics, we have vacuum cleaners in people’s homes, we have robots that package, robots that assemble, and robots that deliver, and we have robots that swim and fly and tunnel, gathering information about hurricanes and pipes and volcanoes.

What mathematics has that robotics doesn’t have is clearly spelled out foundations.  In mathematics, we know that before you can do mathematics rather than arithmetic, before you can really think like a mathematician, you need a bunch of fundamental tools.  Before you can really understand calculus, it’s important to understand algebra, and before you understand algebra, it’s important to understand the basics:  addition, subtraction, multiplication, division, fractions, and graphs.  The mathematics community largely agrees on what those fundamental tools are, just like the electrical engineering community largely agrees that Ohm’s Law is a fundamental tool in understanding electricity.

So I suppose I should change my answer:  the largest gap in robotics as a discipline is the lack of agreement on what constitutes the fundamental tools of our trade.

Another Brick From The (Research) Wall

In every research project, you eventually run into a wall.  The wall can take many specific forms, but in general it represents a point at which it seems like your idea can’t work.  There are at least four possible responses.

First, you can walk away.  You can abandon your approach to the problem, backtrack until you find an alternate path, and follow that path to its wall.

Second, you can keep battering at it until it gives way.  This only works if the wall isn’t a function of fundamental mechanisms of physics or mathematics.  Problems that are a function of processing power or available memory can be susceptible to this approach — if you wait long enough or have enough money, someone will build a computer capable of implementing your idea.  Sometimes the brute force approach is sufficient.

Third, you can revolve the problem in your mind, looking at it from all sides.  You can investigate other, apparently unrelated fields, you can attend talks on tangentially related work, and then you can come back to the problem and revolve it in your mind some more.  You can go talk to the people who will be using it, to see if there are any underlying assumptions you’ve missed, or something you’ve misunderstood.  And eventually you’ll find a path through the brambles around the side of the wall and come out on the other side.  You haven’t gained any understanding of the wall itself, but you have managed to find a new way to think about the problem, and you are likely to end up with an effective solution.  This is what happened in robotics in the 1980s.

Until that point, the software side of robotics was focused primarily on artificial intelligence approaches.  When researchers tried to make their robots operate in even a relatively simple environment, they hit a wall in terms of the required processing power, because their interpretation of the problem assumed a high degree of object recognition and understanding of the environment.  Brooks’ key insight was that it is not necessary to understand the world in order to function within it.  The robot doesn’t need to differentiate between tables and chairs in order to avoid them.  It only needs to differentiate between floor and not-floor.  He sidestepped the wall (capturing and interpreting the complexity of the environment) and found a path through the brambles (abstracting the complexity of the world into only the things the robot needs to know).
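As a rough illustration of that insight (this is not Brooks’ actual architecture, just the flavor of it), a reactive controller can pick its next motion from raw range readings alone, without ever naming the objects it avoids.  The sensor function here is a hypothetical stand-in for whatever sensing the robot actually has:

```python
# A minimal reactive sketch of the "floor vs. not-floor" idea: no object
# recognition, just raw ranges mapped directly to motion commands.

def read_range_sensors():
    """Hypothetical stand-in: returns (left, front, right) distances in meters."""
    return (1.2, 0.3, 2.0)

def avoid_step(ranges, safe_distance=0.5):
    """Choose a motion command; anything closer than safe_distance is 'not-floor'."""
    left, front, right = ranges
    if front > safe_distance:
        return "forward"  # clear floor ahead, keep going
    # Something is there; we never ask whether it is a table or a chair.
    return "turn_left" if left > right else "turn_right"

if __name__ == "__main__":
    print(avoid_step(read_range_sensors()))
```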

Fourth, you can disassemble the wall, brick by brick, until you understand it.  This is the most theoretically sound approach, and often results in the cleanest solutions.

After the Wright brothers side-stepped their wall by discovering that wings needed to be rounded instead of sharp on the leading edge, engineers disassembled the leading-edge-shape wall until they thoroughly understood it, leading to modern aerodynamics.

The ViCoRob lab at the University of Girona ran into the wall of environmental factors when they attempted to stitch together imagery of the same undersea environment at different times.  Instead of hitting the problem with more and more computer time, or sidestepping the problem by mandating specific vehicle behaviors, they disassembled the wall into many individual image processing elements, each dealing with a specific confounding factor.  They compensated for brightness variation due to light placement, they compensated for particulates in the water, they compensated for chromatic variation as a function of distance, they compensated for bright and dark spots caused by light refracting through ripples on the surface, they compensated for vehicle motion estimate errors, and they compensated for scaling factors between images.  They took apart the wall, understanding it as they went, and were able to create stunningly beautiful images of large swaths of sea floor.  This approach is generally expensive and time-consuming.
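The structure of that kind of solution (not ViCoRob’s actual algorithms, which are far more sophisticated) is a chain of independent compensation steps, one per confounding factor.  The two correction functions below are simplistic placeholders meant only to show the shape of the pipeline:

```python
# A structural sketch of the brick-by-brick approach: each confounding factor
# gets its own compensation step, and the steps are chained in sequence.
import numpy as np

def compensate_brightness(img):
    """Flatten large-scale illumination differences by normalizing to the mean."""
    return img / (img.mean() + 1e-6)

def compensate_color_attenuation(img, gains=(1.0, 1.1, 1.4)):
    """Rebalance channels to counter wavelength-dependent attenuation in water."""
    return img * np.asarray(gains)

PIPELINE = [compensate_brightness, compensate_color_attenuation]

def preprocess(img):
    for step in PIPELINE:  # one "brick" of the wall at a time
        img = step(img)
    return np.clip(img, 0.0, None)

if __name__ == "__main__":
    fake_frame = np.random.rand(480, 640, 3)  # stand-in for a sea-floor image
    print(preprocess(fake_frame).shape)
```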

What to do when you hit a wall is obviously going to be a function of many factors — how much time you have, how complex the problem is, how much money and hardware you have, and how you intuitively approach problems.  Some people will naturally gravitate towards each of these approaches, and an approach you’re good at is likely to be more effective for you than an approach you’re not good at.  It is, however, worthwhile to attain at least limited skill in each approach, if only so that you are aware of alternatives when your preferred approach fails.

Terminology

Electrical engineers have nice definitions for the foundations of their field:

We can define voltage as a difference in electric potential, current as the rate of flow of charge, and resistance as the opposition to that flow, and relate them all through Ohm’s Law (V = IR), and so forth.

But in robotics every term is subject to change without notice.  There is no common definition of robot, or autonomy, or intelligence.  The only common element is that researchers are still, after three or four decades, able to argue about what the definitions ought to be.

Thankfully, the discussions have died down from religious wars into cocktail party conversation, and the heated debates have largely transitioned from the definition of “robot” to the definition of “humanoid” or even whether topic X is “finished” or not.

It’s unclear whether the cessation of hostilities is due to weariness, to progress, or to maturity.

We could just be tired of going around in circles, from “it has to do useful work” to “entertainment is a job” to “but it’s a toy, not a robot”.

We might have reached a common core definition, so the arguments are in the details around the parameters of the definition.  Instead of arguing about the definition of robot as “physically instantiated self-contained machine that performs a task for a human” we’re arguing about the definition of “task” and the definition of “self-contained”.

Or we might have concluded that what’s important is not that there be a single definition, but that individuals with something to say have the ability to say it clearly.  Instead of arguing when someone uses a non-standard definition, we simply accept it for the duration of the talk, or the paper, or the conversation, and try to understand the meat of what they’re trying to say.

The trouble is that whichever of these is the true answer, we are no closer to having an introductory textbook in robotics with a single definition that we can all agree is close enough to correct for it to be taught to beginning students.

The Core of Robotics

What is the core of Robotics as a discipline?

You could argue that robotics is an engineering discipline, because we work on the design, building and use of robots.

You could also argue that robotics ought to be more a science than an engineering discipline alone, since we provide new kinds of tools and models for other disciplines in much the same way mathematics does (with the admittedly large difference that our tools exist as physical objects).

Or you could argue that mathematics and robotics lie on either side of engineering, with mathematics providing the inputs to engineering and science and robotics coming out the other end as the ultimate synthesis of engineering and scientific fields.

Or you could even argue that robotics sits to the side of traditional classifications – roboticists tend to have significant experience in a wide range of fields not limited to engineering.  Those fields can include electrical engineering, mechanical engineering, and computer science, along with some degree of familiarity with the fields in which their robots are expected to operate, and deep expertise in individual pockets of each of those fields, depending on the details of the robots and the problems in question.  When roboticists move from problem to problem within robotics, they can view all the breadth of their expertise as potential opportunities to develop new pockets of deep expertise in some new area within that breadth, rather than as something outside their expertise to be hired out to someone else.

An individual might start working on a project requiring their expertise in visual sensors for ground robots in urban environments, and then discover a need to apply that skill to working on visual sensors in aerial environments.  Their next project might involve machine learning to support improved perception, which could in turn lead to work on machine learning for decision making.  As part of that work, this individual might move to the design or improvement of specific behaviors the robot is deciding between, from there to low level control of some actuator on the system to improve a specific behavior, and finally to a mechanical redesign of the arm, leg or propeller.

That individual will have worked from electrical engineering through computer science, back into a different branch of electrical engineering, and on to mechanical engineering, but in context the path is a seamless trajectory through robotics.

And every element of that path is more fundamentally part of robotics than part of the discipline in which it is currently based, because every element of that path is rooted in the core of robotics as a field:  how active machines interact with the world.  Robotics is not just about designing the robot; it is about understanding the environment in which the robot is expected to operate.  It is not just about building the robot, it is about ensuring it is capable of the desired range of behaviors.  It is not just about using the robot, it is about understanding and improving the relationship between the robot and the user.

We worry about signal processing not as an abstract tool in the electrical engineering toolbox that can be used to clean up signals for human consumption, but because it enables the robot to obtain better information about its environment.

We worry about machine learning because the world is a large and unpredictable place, and we can’t always constrain the environments our robots need to operate in to the environments we can fully model and understand (and we can’t fully model and understand any but the simplest environments right now).

We worry about the individual behaviors because we build robots to do things, and they should be able to do those things well.

We worry about low-level control because it determines how effectively the robot can act in its environment, and we design new tools and legs because we want the robot to be capable of doing new things.

Every step in that path is about the robot performing a task in a complex and potentially unknown environment.  That is the core of robotics.

A Few Thoughts on “Killer Robots”

I’ve been following the discussion over on IEEE Automaton about the letter from the AI conference in Brazil (for, against, rebuttal, etc.), and since this is the kind of thing that normally gets me very engaged (or enraged), I was surprised to find myself ambivalent on the subject.

The original letter ends, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”  But this gives us several propositions to consider:

  • “Starting a military AI arms race is a bad idea”

I think we can agree that this is true.

  • “… a military AI arms race … should be prevented”

This is also something we can agree on.  But if we look at the underlying assumptions behind the complete statement, we get a few more propositions that are less easy to respond to.

  • A military AI arms race can be prevented by a ban on the development of specific types of weapons

I disagree with this.  It seems to be saying that the only research that would support a military AI arms race is research into specific types of weapons, specifically weaponized autonomous robots.  It seems eminently plausible to me that research into many technologies other than offensive autonomous weapons would contribute to a military AI arms race:  research into improved mind-machine interfaces for non-autonomous robots and other systems; research into more intelligent surveillance systems.  A military AI arms race would result in many terrifying things other than autonomous weapons.  (For example, anything involving public utilities or transportation networks.)  The underlying technologies that would be developed in this arms race are still likely to be developed for other applications.  It is entirely plausible that you could conduct a military AI arms race without ever reaching for a weapon.

The only aspect of the problem that requires a weapon is the part where the robot learns how to control the aiming and initiation of that weapon, and that isn’t a problem for artificial intelligence research, that’s a low level controller problem that’s already being solved for many types of systems.  The artificial intelligence question that contributes to a military AI arms race is “when should I fire”, and you don’t need a gun to study that one.  You don’t even need a problem space that includes guns.  You just need an analogous non-military problem and you can work on that technology without ever violating a ban on autonomous offensive weapons.

  • “offensive autonomous weapons” are at a sufficiently early stage of their development that banning them is practical

I’m not at all sure this is true, since there are numerous examples of armed military robots, and even individual non-experts that have already assembled terrifying combinations of robots and weaponry.  The step from this RC quadcopter to an autonomous Lego Mindstorms weapon requires skills and technology available to high school students.

  • “offensive autonomous weapons” are a sufficiently concrete subject that they can meaningfully be banned

Bear in mind that this letter is coming from the AI community, not the robotics community.  They are not suggesting a ban on the technology they are most concerned with (the decision-making and possibly the perceptual elements of the software), but are instead suggesting a ban on other people’s work (specific combinations of software and hardware).

  • “beyond meaningful human control” is a reasonable way of specifying the particularly troublesome weapons that we ought to ban

We don’t even have measures for how much human control is associated with “autonomous”.  How on Earth would we ever figure out whether a given autonomous robot is actually “beyond” “meaningful” human control?

The discussions so far seem to have missed a key distinction that we need to make in order to talk sensibly about this topic.  We are discussing at least three separate questions.

  1. Should the international community be doing anything to prevent their constituent militaries from exploring this as an option for future combat between countries?
  2. Should there be restrictions or laws in place in individual countries governing what individuals choose to do with equipment that they legally purchase or create?
  3. Are we discussing the development of specific technology, or are we discussing applications of technology that has been developed for other purposes?

The original letter is explicitly aimed at the first question – they say that this application of technology is both sufficiently dangerous and sufficiently unpredictable that the international community should be treating it like certain types of mines and banning its development entirely.  But it asks us to ban technology development, not specific applications of technology, and that is a core problem.

It is appropriate to discuss a ban on specific applications of autonomous robots, in the same way we have bans on specific applications of other technologies, both at the national and international level.  But we should not conflate that with banning development of any technology that might eventually be used for those applications, and we should not assume that banning those applications will have any more effect than the “severe stigmatization” mentioned in the “for” article.  The whole purpose of robotics as a technology is its flexibility and adaptability and the ease with which we can alter its uses and goals.

Not putting a ban in place until we understand the potential benefits we are giving up, as suggested in Dr. Arkin’s article, is an eminently reasonable proposition, but his response ignores a core aspect of question 1.  It addresses only cases where the situations involving the autonomous robots are in some way comparable to the situations involving the use of soldiers.  This is unlikely, as two of the main reasons autonomy is used instead of automation or teleoperation are (1) to increase the safety of the system as it performs the task in an unpredictable environment (automation is ineffective when the obstacle positions are unknown; teleoperation is ineffective when communications latency is high or bandwidth is low and the task is urgent) and (2) to allow systems to perform tasks that were not previously possible.

We cannot assume that the robots will be performing the same kinds of missions as human soldiers, and we cannot assume that the kinds of missions they perform will reduce casualties compared to humans if those missions weren’t possible before.  Instead, it is just as likely that there will be additional casualties as the robots fail to figure out who the target is, and as the target humans become adept at disguising themselves as non-combatants.  I agree that we do not yet know what those trade-offs are, and we will make much better decisions about what needs to be banned once we know more, but I disagree that the only way to discover those trade-offs is by continuing research into explicitly dangerous robots.

We can explore perception and ethical robot operation without bringing in battlefield robots – there are plenty of other areas in which these would be beneficial capabilities (child care, search and rescue of stranded hikers, elder care, crowd control, even disrupting a bar fight).  Banning the application doesn’t prevent us from estimating what the benefits of the banned approach might be.

But even if estimating the benefits were only possible with continued development, preventing a ban on a technology because of the effect on our risk assessment of that technology seems like a bad idea.  Banning a technology because it might eventually be used to develop something bad is a worse one.

Which takes us to question 2.  Robotics isn’t like chemistry, where bans on specific things have created sanitized chemistry sets until they have very little of interest left in them and have criminalized human curiosity (and I worry that this is happening in robotics), or like nuclear weapons, where one specific component of the weapon (in this case, the radioactive material) is tightly regulated.

Evan Ackerman makes an excellent point early in his article against banning killer robots.  He points out that the barrier to entry for this particular technology is so low that international bans cannot be a valid answer, any more than international bans on land mines prevent individuals from creating improvised explosive devices and putting them on roads to blow up their enemies’ trucks.

But I would take that argument a step further – it’s not just that the barrier to entry is unreasonably low.  It’s that the only element we could ban that would be associated uniquely with autonomous killer robots is the weapons, and that banning the weapons is insufficient to prevent the development of autonomous killer robots (note: edited on 2/5/16 to clarify argument).  I’m not arguing that an autonomous robot with a gun isn’t a fundamentally scarier and potentially a more dangerous entity than a gun by itself (or even a gun in the hands of a human).  I’m saying that the thing that makes the robot scary is the gun, but that banning the gun won’t prevent scary robots from existing.  Robots with knives could be scary, but right now they’re chopping vegetables.  Robots with legs, even when they stop falling over, even if they can run, are still at the mercy of the software that defines their goals.

Autonomous robots don’t have a physical property that is the difference between dangerous and safe. The only concrete difference between an “autonomous killer robot” and a perfectly normal, safe-for-everyday-purposes autonomous robot is a sign error in the goal definition.  A perfectly safe consumer good (an autonomous car) becomes an autonomous killer robot when the acceleration associated with the “don’t hit pedestrians” goal goes from negative to positive.
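The “sign error” point can be made concrete with a toy cost function (hypothetical names and numbers throughout): the same planner, the same sensors, the same candidate actions – only the sign of one weight changes.

```python
# A toy illustration of the "sign error" point. Lower cost is better; a
# positive pedestrian_weight rewards keeping clear of people, a negative
# one rewards closing the distance.

def action_cost(distance_to_pedestrian, progress, pedestrian_weight=10.0):
    return -progress - pedestrian_weight * distance_to_pedestrian

def choose_action(candidates, pedestrian_weight=10.0):
    # candidates: (name, predicted distance to pedestrian, predicted progress)
    return min(candidates,
               key=lambda c: action_cost(c[1], c[2], pedestrian_weight))[0]

if __name__ == "__main__":
    options = [("brake", 5.0, 0.0), ("swerve", 3.0, 0.5), ("accelerate", 0.5, 1.0)]
    print(choose_action(options, pedestrian_weight=10.0))   # safe: "brake"
    print(choose_action(options, pedestrian_weight=-10.0))  # killer: "accelerate"
```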

To address the third question:  there are only two things we can regulate on the consumer end:  the weapons themselves (which is not going to prevent people from making killer robots) and the software they run (even excluding learning and adaptation, we do not have tools that would enable us to determine definitively whether a given robot was capable of harming a human intentionally – in almost all cases, the robot wouldn’t know either).

Even assuming that we could overcome the deep attachment that the public and industry has for the many peaceable uses of robots, there is no way to ban an autonomous robot that can kill without also banning robots.  An autonomous car running you over is just as fatal as a robot soldier shooting you with a gun.

The difference between an “offensive autonomous weapon”  and an autonomous non-killer robot is that the non-killer robot doesn’t have a goal that says “kill”.

Are “Animal Robots” Robots At All?

IEEE Spectrum had an article from a Chinese lab on “animal robots”. When I was back in grad school, we worked on a proposal for a similar project. We were going to connect moth antennae to a robot and use them as sensors to drive the robot around. Various people have worked on similar ideas over the years, and they’re a little disturbing but generally very informative.

This paper had a diametrically opposed approach. Instead of replacing the animal’s actuators with a robotic interface, they mounted a controller on the back of a rat and connected it to the rat’s motor neurons. Instead of creating a robot with biological sensors, they created a remote control animal.

I am profoundly uncomfortable with this, especially as the research community moves away from insects and towards mammals. While I have no problem with doing horrible things to mosquitoes (largely in retaliation for the severe discomfort they have caused me over the years), rats are intelligent and interesting creatures. The thought of an intelligent animal being forced into actions via the equivalent of a muscular tic is repugnant.

However, that is not the larger question we need to address. This is:

Is a biological animal with a controller attached to its motor neurons a robot?

Setting aside my discomfort (to the best of my ability), I think that these should not count as robots. In essence, a robot is a machine that does a task for a human. We do not consider police horses or working dogs to be robots, because they are not machines. Just because we have traded reins for a computer and a bridle for wires into the brain and as a result have reduced their ability to act independently does not mean that the animal has suddenly become a robot.

We have a word for organisms that have been merged with mechanisms until it is difficult to tell where the animal ends and the machines begin: we call them “cyborgs”.

Cyborg research is most closely related to robotics, so I expect that researchers developing cyborgs are going to be part of the robotics community for some time to come. But we shouldn’t expect them to stay part of our community forever, and we certainly shouldn’t accept this hijacking of our terminology. We have just managed to pry the word “robot” out of the hands of the computer science community (yes, robots have to have bodies); I’d hate to have to have that argument all over again…

Enough Inappropriate Ethical Analysis Already!

I’ve reached my threshold for uninformed articles about the supposed ethical conundrums that roboticists should be taking into account when they design autonomous cars.  At the end, I’ll be adding more questions to the list of potential research topics, but first, I want to address why these articles are so infuriating.

Grrrr.  Begin rant.

The articles don’t seem to have been vetted at all.  Exhortations to follow Asimov’s three laws, or (worse yet) implement additional laws and address philosophical dilemmas about which group of people to kill in some avoidable concatenation of tragic events – they all incorporate assumptions about how robots are built and designed that are (at least currently) untrue.

These situations all involve an unrealistic view of a robot’s ability to perceive its environment.  Anyone who designs the software that robots use to perform tasks knows what a big problem perception is.  Sometimes advances in sensing make a big difference, but often improved sensors simply result in more data available to the robot, not in an improved understanding of its surroundings.  And all of these ethical and philosophical dilemmas fundamentally rest on perception.  The decision about which group of people to kill is only an ethical dilemma if, first, it is possible for the car to find itself in that situation, and if, second, it is actually possible for the car to differentiate between the two groups in some way.

Asimov’s Laws (often taken as an example of where to start) rest on the assumption that the robot can project the ramifications of its actions and determine whether an action it takes is likely to result in harm to a human or to itself.  The whole reason we have autonomous systems instead of automated systems is that the real world is chaotic and unpredictable.  We are just barely able to identify humans in well-defined environments with adequate lighting; predicting that a human will fall over if pushed is at least within the realm of possibility (assuming we ignore the complexities of the surface they’re standing on, the composition of their shoes, an at best inaccurate estimate of their weight, their physical strength, and any other objects that may be in the vicinity).  Determining whether a human is far enough away from an explosion to bound the probability that they will be injured would require a good understanding of the exact placement of the bomb, the masses, materials, and robustness to stress of every object in the environment, the properties of the human, including their clothing, and the precise composition of the bomb.  On top of this, the robot needs to compute the probable impact of any actions it takes, which implies awareness of its own composition and any time constraints.  Humans, with all our millennia of successful life-or-death decisions and extremely effective sensory post-processing, are incapable of deciding what a “correct” set of rules would look like – if we knew what the “correct” set was and could apply it consistently, we wouldn’t need judges and juries.  Even if we could implement them, how would we know what to implement?

We can’t even build a robot that would be able to perceive the world sufficiently to allow us to develop a robot that complied with Asimov’s Laws.  We might, theoretically, eventually be able to build a robot that could face the trolley problem (see the background reading below) in a meaningful way, if both the sensors and the perceptual software were mature enough, but why would we?

Fundamentally, the job of a car is to get its occupants safely from one place to another while leaving its environment in substantially the same condition it was when it started.  Sensors for these systems need to operate very, very quickly, which means efficient sensing and minimal processing.

In order to make sensing efficient, as a designer, you derive the smallest set of things you need to identify as important elements of the environment.  For a car, this means you pay attention to objects that are in the road and objects whose movements indicate that they are likely to end up in the road before you’re past them.  You also pay attention to signs and features that are there to help you decide what to do:  stop signs, yield signs, traffic lights, turn signals, brake lights and reverse lights, and so forth.

But the net result of this is that telling the difference between a human, an old human, a child, a dog, or a cow becomes simply a matter of answering “how fast is it moving,” “is it going to be in the road,” and “am I moving slowly enough to avoid it”.  There is no reason to give a car the ability to tell the difference between one object and another, except to determine whether the road is permanently blocked or whether the object is likely to move out of the way.  In no case should the car be expected to hit anything.

A human on a bicycle is faster than a toddler, but a toddler is more maneuverable and less predictable.  The simplest solution is simply to assume that all moving objects have the potential to unpredictably swerve around like a toddler as fast as someone on a bicycle and behave accordingly.  There is no incentive driving a car manufacturer to build in the kind of awareness of the environment that would enable a car to distinguish between an old man and a young boy beyond recognizing that they are both capable of movement.  Just as human drivers are sublimely unaware of whether the people they drive past are saints or murderers, an autonomous car would be unlikely to know or care about the age of the humans in its vicinity.
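A sketch of that worst-case policy, with made-up numbers, might look like this: the car never asks what the moving object is, only whether a mover with a toddler’s unpredictability and a bicycle’s speed could reach its path before it can stop.

```python
# A sketch of the "worst-case mover" policy described above. The car never
# classifies the object; it just assumes the worst plausible motion.
# All numbers are invented for illustration.

WORST_CASE_SPEED = 7.0  # m/s, roughly "as fast as someone on a bicycle"

def must_slow_down(obj_distance_m, own_speed_mps, time_horizon_s=2.0,
                   braking_margin_m=2.0):
    """Slow down if a worst-case mover could reach our path before we can stop."""
    mover_reach = WORST_CASE_SPEED * time_horizon_s
    our_travel = own_speed_mps * time_horizon_s
    return obj_distance_m < mover_reach + our_travel + braking_margin_m

if __name__ == "__main__":
    # A moving blob 25 m away, car doing 10 m/s: slow down.
    print(must_slow_down(obj_distance_m=25.0, own_speed_mps=10.0))  # True
    # The same blob 60 m away: keep going.
    print(must_slow_down(obj_distance_m=60.0, own_speed_mps=10.0))  # False
```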

These ethical dilemmas aren’t actual dilemmas the robots are likely to face.  Instead of providing commentary that will help policy-makers understand the actual problems associated with robots, the writers use arguments based on incorrect assumptions to conclude that roboticists should be implementing ethical constraints that are fundamentally impossible to enunciate in a form that the robots will be able to understand.

And in the meantime, roboticists have actual ethical dilemmas that need to be faced.

What the media can do about it:

Before you run a story about scary things a robot might do and how people who design robots should really be thinking about this stuff, just call a robotics professor at your local university.  There will be at least one.  Just go to the university’s main page, search for “robot”, and look for the professor associated with the articles that come up.  Call them up and ask if they can look it over and tell you whether the article makes sense.  They’re probably busy, but if you ask and they don’t have time, they may have a grad student who can help you.  You might even consider offering to put the name of whoever helps in the paper for helping you vet the article (e.g. “reviewed by X at University Y” in a footnote).

What we should be worrying about:

We have bigger ethical problems than made-up stories about hypotheticals that, in most cases, aren’t even likely to be possible.

For instance, we know just how difficult it is to figure out whether a given robot is going to behave the way we want.  Most of the time, we haven’t even figured out what we want (see the “correct” rules problem, above).  We build it to do something that seems obvious (“don’t hit things”), and then we run the robot somewhere new and discover that we didn’t just want it to not hit things, we wanted it to also avoid things it could fall off of (“avoid stairs”) and to avoid the fringe on a rug because it gets caught in the wheels (“avoid fringe”) and to somehow get back to its charger before it runs out of power so we don’t have to rescue it from the middle of the floor.  Even when we think we have finally done enough experiments and figured out what we really want it to do, how sure are we that it is ready to operate in someone else’s house?  If we have built a robot to help take care of someone elderly who has difficulty opening pill bottles and keeping track of their medication, how certain do we have to be that we have covered the typical or even rare use cases?
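In code, that process of discovery looks less like a carefully derived specification and more like an ever-growing list of rules, each appended after a surprise in some new environment (the rule names here are just the examples above):

```python
# A sketch of how the "what we want" list grows in practice: each field
# trial tends to append a requirement we didn't know we had.

rules = ["don't hit things"]          # the obvious first specification

def discovered_in_testing(rule):
    rules.append(rule)

discovered_in_testing("avoid stairs")        # found when it fell down a step
discovered_in_testing("avoid fringe")        # found when a rug jammed the wheels
discovered_in_testing("return to charger")   # found when it died mid-floor

print(rules)
```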

Where is the ethical boundary around providing the elderly with telepresence robots when they become too infirm to participate in their children’s and grandchildren’s lives?  Are we encouraging families to sacrifice physical contact for virtual participation?  Will they be more connected to their lives, or less?  (I’m guessing it will depend largely on the person.)

What are the ethics of robot-assisted childcare?  If we are talking about robots that allow institutionalized children with communicative and movement disorders to finally interact with their peers through robot avatars, surely that is a good use of robots.  Allowing sick children to attend school through a robot avatar, probably a good thing (although an argument could be made for allowing sick children to rest and recuperate instead, catching up once they’re better, or for customizing each child’s school experience so they learn at a pace that fits their circumstances, whatever those may be).

But what about perfectly healthy children whose parents create a worldview where the safest thing for their child is to stay home and only ever attend school virtually?  Already children rarely play in their front yards, or wander through their own neighborhoods unsupervised – both things that were common when I was a child.  When I was 7, I was expected to walk 2 blocks to school along my residential street, whether my neighbor (also 7) was there to keep me company or not.  By the time I was 10, I was expected to walk or bike to and from school on my own and encouraged to go to the park and play or take my bike to the nearest playground.   By the time I was 14, I was allowed to bike anywhere I could get to. And this was in an era where being able to call home meant getting to a pay phone and using quarters.  I never actually needed those quarters in my pocket, and mostly I didn’t even bother carrying them, since if I couldn’t walk, I wouldn’t be able to use them, and if I could walk, I could always get where I needed to go eventually.

When my son was in elementary school, almost none of the kids walked to and from school.  I walked an 11-year-old home because her parent hadn’t come to pick her up and she was too uneasy to walk home alone because of the potential for predatory strangers.  It was 6 blocks, through a leafy, reasonably prosperous, well kept up residential neighborhood with almost no traffic and no history of predatory strangers.

If we build a robot that a sick or disruptive child can use to attend school without endangering his or her classmates, there is nothing stopping a parent from using that robot to prevent the classmates from endangering their child, and it seems to be becoming more and more likely that some parents will choose to do so.

What kinds of constraints is it ethical to implement in the robot?  What are the ethics of socializing children through actual physical interaction versus socializing them entirely through virtual means, especially when (if the robot is designed for use by disruptive children) the robot disallows certain kinds of behavior?  And potentially even more worrying, how are we warping these children’s ability to explore new ways of thinking by constraining them in this way?  Are children who interact with others through an artificially constrained platform less likely to innovate as adults, or are they likely to have inbuilt blind spots where certain kinds of actions simply don’t register as options?

What about the ethics of medical robots?  A few years ago, a company put up a billboard advertisement asking “Would you swallow a robot?” whose subhead said that they didn’t make the hardware, they made the software that made it possible.  At a conference workshop, about half those present (primarily robotics researchers) agreed that they would swallow a robot given appropriate circumstances, such as a doctor’s recommendation, and proper certification procedures.  When it was specified that the robot included on-board software, that number dropped to about ten percent, even with certification authority and medical authority safeguards.

And this doesn’t even get into the ethics of military robots.

Some (many?) of these questions are questions for ethicists and policy-makers, but as roboticists we could at least be trying to address the elements that relate to the design of our systems.

Should we be building in hooks to support specific types of safety caging on our systems, so that when the policy-makers decide the ethical boundaries of acceptable action, our tools are ready to accept them?  Should roboticists be researching the ethical options that are even available to robots?  Should we be contemplating what additional sensors our systems should have to enable them to be sufficiently aware of their context to act ethically?

And, bringing it back to the hypothetical ethical decisions that cars are unlikely to ever have to make, should we be developing a language that enables us to describe the areas in which it is possible for a given system to operate ethically, and the areas in which it is, fundamentally, just a machine, doing exactly what we told it to do?

 

Background reading:

The Trolley Problem:

http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/ –   “Programmers still will need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world.”

http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/ – “one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible”

http://www.popsci.com/blog-network/zero-moment/mathematics-murder-should-robot-sacrifice-your-life-save-two

http://www.forbes.com/sites/timworstall/2014/06/18/when-should-your-driverless-car-from-google-be-allowed-to-kill-you/ – “It’s seems clear and obvious that the technology is going to get sorted pretty soon….The basic Trolly Problem is easy enough, kill fewer people by preference.”

http://www.nature.com/news/intelligent-robots-must-uphold-human-rights-1.17167 – “We should extend Asimov’s Three Laws of Robotics to support work on AIonAI interaction. I suggest a fourth law: all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood.”

And a couple that actually seem to be taking a more sensible approach to the problem:

http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

http://robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/, and the slightly more disturbing followup:  http://phys.org/news/2014-08-ethics-driverless-cars.html

Situations in real life where children actually get hit:

Bystanders apparently get run over by potentially negligent parents (link), and occasionally a child gets hit by a police car responding to an emergency (link) or run over while chasing an ice cream truck on a bicycle (link), and sometimes children get run over because they’re close to a car when it starts moving and the driver doesn’t know they’re there (link, link, link), especially in a parking lot (link).  Crosswalks are apparently dangerous (link), as are apartments (link).

This is close to the runaway toddler scenario (link), in a parking lot with a distracted driver and an environment with poor visibility, but an autonomous vehicle wouldn’t be distracted and would be going slowly because of the uncertainty in its surroundings.  Instances of toddlers running abruptly into traffic on roads, rather than in parking lots, seem to be vanishingly infrequent.

Furthermore, parents today who simply let their children walk home from the park alone can end up on probation with child protective services (link).  Culturally, we’ve been shifting away from responsibility and towards safety for a long time.

Robotics Also Needs Historians

What would enable faster progress in robotics research?

Part 2:  Historians

One of the other things that would help us progress faster is a better understanding of what ideas have already been tried.

Trite, I know, and readily apparent, and something that applies across most (all?) scientific domains.  We’re not an established field like Biology or Chemistry, and we’re effectively still at the very beginning of defining the laws and principles that will be our foundations.  If we don’t know which approaches have been tried and have failed, and don’t have some sense of why, we won’t be able to define those laws and principles.

There are many researchers without a good sense of the different branches of robotics.  Because we need breadth across disciplines simply to function in robotics, it is easy to lose sight of the fact that we actually need more breadth, rather than less.

Each researcher is focusing on a small part of the whole.  On graph based planning tools, or case based reasoning, or SLAM, or the vacation snapshot problem.

Each researcher’s small part is in a small part of the overall robotics ecosystem.  In the world of path planning algorithms, or the world of long endurance underwater robots, or the world of air robots for agriculture, or the world of industrial robot arms or the world of androids designed to look exactly like humans.

And each of those small parts is in turn a small part of the larger robotics research effort underway around the globe.  Just within the aerial robotics community there are separate subgroups that communicate with each other very little.  The fixed wing agricultural robotics community has little to say to the researchers working on indoor operations with small quadcopters.  The groups contributing to the automation of commercial and military fixed wing aircraft often don’t interact with the medium size rotary wing community.

We have breadth across the disciplines of electrical engineering, mechanical engineering, and computer science, and we recognize that every robot is the result of a group of people each contributing components from their own expertise.  But we often fail to recognize that the field of robotics itself is even broader and more disparate than the disciplines that contribute to it.

And beyond this breadth of current research, there is the depth of historical perspective.

The problem is that the history of a given tendril of robotics may be well understood by its practitioners, but as these vines thicken and mature, they begin to run into problems that were addressed and solved (or at least partially solved) in other tendrils.

For example, the industrial robotics research community broke off from the research community in the 1970s, with the first push to automate factories. In the late 1980s, the research community split as one set of more theoretically-minded researchers focused on developing artificial intelligence and the other, more concerned with practical implementations, focused on developing systems that would work in the physical world, while the industrial robotics community still saw themselves as the core of the robotics community.  In the intervening thirty years, the industrial robotics community has achieved significant development and penetration into industrial operations, but has largely failed to recognize the huge strides made in the less rigorous autonomous mobile robotics community, and the similarly huge strides made in the more rigorous artificial intelligence-based robotics community.  Often, instead of taking advantage of this progress, they are instead rediscovering algorithms originally developed in the late 1980s in the reactive robotics community.  Very few researchers know the history of work done outside their own specialties, and their own systems suffer as a result.

Better awareness of the history of robotics as a whole would prevent researchers from duplicating approaches and algorithms without realizing it and allow us to benefit from all the work done in the past.  We need courses, or textbooks, or webisodes – something that will allow newcomers and the experienced alike to learn the history of work that might apply.  We need historians to curate that information and make it available, to analyze it and understand how the pieces fall together, in order to define the principles behind the robots that work.

Robotics Needs Librarians

What would enable faster progress in robotics research?

Part 1:  Librarians.

There are several answers to this question, but one of the most important things robotics needs is the ability to find existing solutions, and one key element of that is providing cataloging tools.

Every robot is built up of bits and pieces of other people’s algorithms.

The researchers focused on a given problem know what the state of the art in that area is.  But each robot is an assemblage of components – no matter which piece a given researcher is focused on, the rest of the robot must be present in order for that piece to be tested.  Unfortunately, no one researcher has time to be an expert in every component of a robot.

If a new SLAM loop closure algorithm needs to be demonstrated and tested on a robot, it will probably utilize a low level controller provided by the manufacturer and a sensory perception algorithm inherited from a previous student.   The designer of the loop closure algorithm is unlikely to have time to ensure that every component of his or her robot is using the optimal algorithm for that task, tuned properly for both the robot and the environment.

If the researcher is focused on multi-vehicle cooperation, the obstacle avoidance algorithm is likely to be whatever came with the robot, or whatever might have been installed on the robot by the last graduate student who used it, or the algorithm developed by the last researcher in that lab who worked on that problem, rather than the algorithm most suited for that robot operating in that environment.

Of course, we also need tools that let us reuse algorithms across robots – progress is being made in that area – and we need tools that tune algorithm parameters so that, for any given combination of algorithm, robot, and environment, the algorithm provides the best performance possible.  But right now, we don’t even know what algorithms are available and which ones are most suited to which applications without extensive research and a good dollop of luck.

Luck oughtn’t to be necessary, but unless you’re an expert in a given area, it’s not always abundantly clear that you have even managed to figure out the correct keywords to let you find the algorithm you’re looking for.

If we go to the ROS archive (http://ros.org) and search for “SLAM”, we find over 20 packages submitted by 12 people or groups, while “navigation” brings up 11 metapackages and 51 packages, none of which use “SLAM” in their text.  There is no way for the non-expert to tell which algorithm he or she should use.  “avoid” brings up 12 packages, most of which don’t even implement obstacle avoidance but merely have the word “avoid” used in their descriptive text.  Neither “survey” nor “coverage” brings up any results, even though many algorithms exist outside of ROS that address those problems.

There’s a reasonably comprehensive list of MOOS modules available at http://oceanai.mit.edu/moos-ivp/pmwiki/pmwiki.php?n=Site.Modules, but the organization is done by hand and the modules primarily address marine vehicle applications.

The options for compiled code with a reasonable level of certainty that it will work are limited. Even finding algorithms through technical papers is distressingly hit or miss.  If you happen to choose the wrong keywords, it’s entirely possible you’ll fail to find the most relevant papers.  Non-native English speaking researchers are at a significant disadvantage, and sometimes form clusters of researchers working on problems in parallel with the native English speakers, since they’re using a different set of keywords to describe the same thing.  So if you don’t know precisely what keywords to use, you’ll probably find a community of researchers, using the keywords you think are intuitive, with a variety of solutions addressing your problem.  You have no way of knowing whether this community represents the state of the art or whether this community believes it represents the state of the art but itself missed the correct keywords.

Even within the English-speaking world, robotics is a parochial discipline.  Researchers in one branch of robotics may use entirely different keywords and be entirely ignorant of existing work addressing the problem they’re looking for.  Researchers in industrial robotics and researchers looking at problems facing commercial fixed-wing aircraft are unlikely to know what solutions the ground vehicle or the underwater vehicle community have developed.   Researchers with a background in controls will assume that any half-decent undergraduate robotics textbook should contain recursive Newton-Euler equations, while researchers with a background in behavior-based robotics still manage to find roboticists that have never heard of Braitenberg vehicles.  Without experience in controls, there is no guarantee an artificial intelligence specialist will know to look for recursive Newton-Euler equations to control their robot, and without experience in behavior-based robotics, a controls specialist is unlikely to consider a Braitenberg approach to behavior generation (the exception).
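For readers coming from the controls side, a Braitenberg vehicle is almost embarrassingly simple – motor speeds are direct functions of raw sensor readings, with no model of the world at all.  A minimal sketch:

```python
# A minimal Braitenberg-vehicle sketch: two light sensors wired directly
# to two motors. Uncrossed wiring (vehicle 2a) turns the vehicle away from
# the stimulus; crossed wiring (vehicle 2b) turns it toward the stimulus.

def braitenberg_step(left_sensor, right_sensor, crossed=False):
    """Return (left_motor, right_motor) speeds from two light readings."""
    if crossed:
        return right_sensor, left_sensor
    return left_sensor, right_sensor

if __name__ == "__main__":
    # Light source off to the left: the left sensor reads higher.
    print(braitenberg_step(0.9, 0.2, crossed=False))  # left wheel faster -> veers away
    print(braitenberg_step(0.9, 0.2, crossed=True))   # right wheel faster -> veers toward
```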

Robotics needs some form of taxonomy or Library of Congress organizing principle so that researchers can find the relevant behaviors and algorithms for their problem and robot.

We need librarians.

First Question … Nomenclature

What should our discipline be called? Should it be called Robotics Science, or Robotics Engineering, or neither? Why?

Neither. It should be called Robotics.

We don’t say “Chemistry Engineering” or “Physics Science”.  Robotics is not Robotics Engineering or Robotics Science.  But the problem isn’t really one of nomenclature.  The real problem is that Robotics doesn’t fit smoothly into these categories.

We could acknowledge the existing separations in the discipline between the hardware-centric researchers in Mechanical Engineering and the more theoretically oriented Computer Science groups, but the process that seems to be occurring is one of merging rather than separation.

The study of computing has effectively been separated into two disciplines:  Computer Science for those interested in software and Computer Engineering for those interested in hardware.  But that is, in some ways, an artificial distinction – both groups are solving problems with algorithms and patterns.  We make a distinction between hardware designers who work on processors and bootloader programmers and language or operating system developers and application authors, but underlying all of those tasks is a fundamental problem associated with how you design entirely new sets of rules, and what those rules might lead to.  It’s just that Computology, Computistry, and Computics all sound a little weird, and we don’t have a good name for something that bridges the gap between science and engineering.

Robotics already has a nice name, an excellent descriptor of what we do as researchers – just as Physics is the study of the rules that govern physical properties of the world and how objects with those properties interact, Robotics should be the study of the rules that govern effective robots (which is more of a philosophical question, appropriate to the sciences) and how to design robots that interact with the world in ways we want them to (which is the engineering side of the equation).

Engineering generally answers “how” questions, while the sciences focus more on “what” and “why” questions, on the causes of observed mechanisms rather than on the difficulties of designing new ones.  But Robotics is doing both – it not only gives us lots of “how” questions to answer and gives us tools to answer “how” questions, but it also gives us lots of “what” and “why” questions.  Not just “how do we learn?” but “why do we learn?” and “what should we learn?”.  Robotics can even help answer questions in other scientific disciplines.

In Biology, there are often many hypotheses about what the rationale behind a given animal’s behavior might be.  In at least two cases, researchers have been able to demonstrate that a specific behavior can be explained with simpler mechanisms than are usually attributed to it, by demonstrating that the simpler mechanism, implemented in a robot, is sufficient to drive the observed behavior.  Robots are providing tools across the scientific disciplines in much the same way that improved sensors and new mathematical algorithms are good tools to support scientific research.

The primary difference between the engineering disciplines and the sciences is that the sciences have Philosophy as an underpinning.  They have a philosophical component that is largely missing from the engineering disciplines.  In Engineering, a solution that works is prized, regardless of whether it demonstrates some underlying truth, but in the sciences, solutions that lead to better descriptions of a perceived underlying truth are more highly prized.

Robotics can, I believe, do both.  It is clearly a discipline that is concerned with functional solutions.  Solutions that work are prized.  But in that search for functional solutions, we are learning fundamental truths about ourselves, about other animals, and about complex systems.  There are fundamental truths available through the exploration of Robotics that we will have difficulty finding in other ways.  The fact that Robotics can take us to a better understanding of ourselves and other organisms in our world places it squarely in the sciences.

Robotics is more an engineering discipline than anything else (we are, after all, trying to build systems that work), but it is not only an engineering discipline.  And it should reflect that by being called Robotics.