Friday, October 30, 2015

How ethical is your ethical robot?

If you're in the business of making ethical robots, then sooner or later you have to face the question: how ethical is your ethical robot? If you've read my previous blog posts then you will probably have come to the conclusion 'not very' - and you would be right - but here I want to explore the question in a little more depth.

First let us consider whether our 'Asimovian' robot can be considered ethical at all. For the answer I'm indebted to philosopher Dr Rebecca Reilly-Cooper, who read our paper and concluded that yes, we can legitimately describe our robot as ethical, at least in a limited sense. She explained that the robot implements consequentialist ethics. Rebecca wrote:
"The obvious point that any moral philosopher is going to make is that you are assuming that an essentially consequentialist approach to ethics is the correct one. My personal view, and I would guess the view of most moral philosophers, is that any plausible moral theory is going to have to pay at least some attention to the consequences of an action in assessing its rightness, even if it doesn’t claim that consequences are all that matter, or that rightness is entirely instantiated in consequences. So on the assumption that consequences have at least some significance in our moral deliberations, you can claim that your robot is capable of attending to one kind of moral consideration, even if you don’t make the much stronger claim that is capable of choosing the right action all things considered."
One of the great things about consequences is that they can be estimated - in our case using a simulation-based internal model which we call a consequence engine. So from a practical point of view it seems that we can build a robot with consequentialist ethics, whereas it is much harder to see how to build a robot with, say, deontic ethics or virtue ethics.
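
To make that idea concrete, here is a minimal sketch of simulation-based consequence estimation. It is emphatically not the code from our paper: the toy one-dimensional world, the Outcome type and the crude harm test are all invented for this post, and our real consequence engine uses a full robot simulator rather than a few lines of arithmetic.

```python
from dataclasses import dataclass
from typing import Dict, Iterable

# Toy one-dimensional world: robot, human and a hole all sit on a line.
@dataclass(frozen=True)
class World:
    robot: float
    human: float
    hole: float

@dataclass(frozen=True)
class Outcome:
    robot_safe: bool   # is the robot predicted to end up clear of the hole?
    human_safe: bool   # is the human predicted to end up clear of the hole?

def simulate(world: World, robot_target: float,
             human_step: float = 1.0, danger_radius: float = 0.5) -> Outcome:
    """Roll the internal model one step forward: the robot moves to its
    candidate target position, the human keeps walking towards the hole.
    A crude interception rule: if the robot ends up next to the human,
    it blocks them before they reach the hole."""
    human_next = world.human + human_step if world.human < world.hole else world.human
    near = lambda a, b: abs(a - b) <= danger_radius
    blocked = near(robot_target, human_next)
    return Outcome(robot_safe=not near(robot_target, world.hole),
                   human_safe=blocked or not near(human_next, world.hole))

def estimate_consequences(world: World,
                          candidate_targets: Iterable[float]) -> Dict[float, Outcome]:
    """The core of a consequence engine: a predicted outcome for every
    next possible action (here, every candidate target position)."""
    return {target: simulate(world, target) for target in candidate_targets}
```

The important point is not the toy physics but the shape of the computation: every candidate action gets its own simulated future, and those predicted outcomes are what the ethical layer then reasons over.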

Having established what kind of ethics our ethical robot has, let us now consider how far the robot goes toward moral agency. Here we can turn to an excellent paper by James Moor, called The Nature, Importance and Difficulty of Machine Ethics. In that paper* Moor suggests four categories of ethical agency, starting with the lowest. Let me summarise those here:
  1. Ethical impact agents: Any machine that can be evaluated for its ethical consequences.
  2. Implicit ethical agents: Designed to avoid negative ethical effects.
  3. Explicit ethical agents: Machines that can reason about ethics.
  4. Full ethical agents: Machines that can make explicit moral judgments and justify them.
The first category, ethical impact agents, really includes all machines. A good example is a knife, which can clearly be used for good (chopping food, or surgery) or ill (as a lethal weapon). Now think about the blunt plastic knife that comes with airplane food - that falls into Moor's second category, since it has been designed to reduce the potential for ethical misuse - it is an implicit ethical agent. Most robots fall into the first category: they are ethical impact agents, and a subset - those that have been designed to avoid harm by, for instance, detecting if a human walks in front of them and automatically coming to a stop - are implicit ethical agents.

Let's now skip to Moor's fourth category, because it helps to frame our question - how ethical is your ethical robot? At present I would say there are no machines that are full ethical agents. In fact the only full ethical agents we know are 'adult humans of sound mind'. The point is this: to be a full ethical agent you need to be able not only to make moral judgements but also to account for why you made the choices you did.

It is clear that our simple Asimovian robot is not a full ethical agent. It cannot choose how to behave as you or I can, but is compelled to make decisions based on the harm-minimisation rules hard-coded into it. And it cannot justify those decisions after the fact. It is, as I've suggested elsewhere, an ethical zombie. I would however argue that the robot can be said to be reasoning about ethics: it uses its cognitive machinery to simulate ahead, modelling and evaluating the consequences of each of its next possible actions, and then applies its safety/ethical logic rules to choose between those actions. On that basis I believe our robot is an explicit ethical agent in Moor's scheme.
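
Continuing the sketch above - and again with the caveat that this is an illustrative reconstruction, not our actual controller, and the rule priorities are simply my loose reading of an Asimovian ordering - the safety/ethical logic that sits on top of the consequence engine can be as simple as a few prioritised rules:

```python
def choose_action(outcomes, goal_action):
    """Pick an action given the consequence engine's predictions.
    'outcomes' maps each candidate action to an Outcome (as sketched above).
    Priority order: protect the human first, then the robot, and only
    then pursue the task goal."""
    human_safe = [a for a, o in outcomes.items() if o.human_safe]
    if not human_safe:
        # No action is predicted to save the human; fall back on the goal.
        return goal_action

    both_safe = [a for a in human_safe if outcomes[a].robot_safe]
    preferred = both_safe if both_safe else human_safe

    # Keep pursuing the original goal whenever it is among the acceptable actions.
    return goal_action if goal_action in preferred else preferred[0]
```

Note that the robot does not deliberate about whether to follow these rules; it simply executes them. That is precisely the sense in which it reasons about ethics without being a full ethical agent.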

Assuming you agree with me, does the fact that we have reached the third category in Moor's scheme mean that full ethical agents are on the horizon? The answer is a big NO. The scale of Moor's scheme is not linear. It is a relatively small step from ethical impact agents to implicit ethical agents, and a very much bigger step to explicit ethical agents - one we are only just beginning to take. But there is then a huge gulf to full ethical agents, since they would almost certainly need something approaching human-equivalent intelligence.

But maybe it's just as well. The societal implications of full ethical agents, if and when they exist, would be huge. For now at least, I think I prefer my ethical robots to be zombies.


*Moor JH (2006), The Nature, Importance and Difficulty of Machine Ethics, IEEE Intelligent Systems, 21 (4), 18-21.
