Saturday, February 20, 2016

Could we make a moral machine?

Could we make a moral machine? A robot capable of choosing or moderating its actions on the basis of ethical rules? This was how I opened my IdeasLab talk at the World Economic Forum 2016 last month. The format of IdeasLab is four five-minute (Pecha Kucha) talks, plus discussion and Q&A with the audience. The theme of this Nature IdeasLab was Building an Intelligent Machine, and I was fortunate to have three outstanding co-presenters: Vanessa Evers, Maja Pantic and Andrew Moore. You can see all four of our talks on YouTube here.

The IdeasLab variant of Pecha Kucha is pretty challenging for someone used to spending half an hour or more lecturing: 15 slides, at 20 seconds per slide. Here is my talk:


and since not all of my (ever so carefully chosen) slides are visible in the recording, here is the complete deck:



And the video clips in slides 11 and 12 are here:

Slide 11: Blue prevents red from reaching danger.
Slide 12: Blue faces an ethical dilemma: our indecisive robot can save them both.
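
To give a flavour of how this kind of decision making can be structured, here is a minimal sketch in Python of the sort of consequence-evaluation loop the clips illustrate: the robot internally predicts the outcome of each candidate action, then picks the action that minimises predicted harm to the human and only after that harm to itself. Everything in the sketch (the action names, the predict_outcome function, the harm scores) is an illustrative assumption of mine, not Dieter's actual experiment code.

from dataclasses import dataclass

@dataclass
class Outcome:
    robot_harm: float   # predicted harm to the robot itself (0 = none, 1 = severe)
    human_harm: float   # predicted harm to the human it is watching

def predict_outcome(action, world):
    # Stand-in for the robot's internal simulation of "what happens if I do this?"
    if world["human_heading_to_danger"] and action == "intercept":
        return Outcome(robot_harm=0.2, human_harm=0.0)  # robot blocks the path, small risk to itself
    if world["human_heading_to_danger"] and action == "stand_still":
        return Outcome(robot_harm=0.0, human_harm=1.0)  # human walks into the danger zone
    return Outcome(robot_harm=0.0, human_harm=0.0)

def choose_action(actions, world):
    # Ethical rule ordering: minimise predicted human harm first, robot harm second
    return min(actions, key=lambda a: (predict_outcome(a, world).human_harm,
                                       predict_outcome(a, world).robot_harm))

world = {"human_heading_to_danger": True}
print(choose_action(["stand_still", "intercept"], world))  # prints "intercept"

The real experiments of course replace these hand-coded outcomes with predictions of robot and human motion, but the decision structure is the same: predict, evaluate against the ethical rule, then act.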


Acknowledgements: I am deeply grateful to my colleague Dr Dieter Vanderelst, who designed and coded the experiments shown here in slides 10-12. This work is part of the EPSRC-funded project Verifiable Autonomy.

7 comments:

  1. Good rule, thanks. Asimov's rule 2 needs modification so that a robot does not obey proven evil people without being certain the orders are good, not just apparently neutral. For example, not obeying the characters in Revelation 13, in case the robot isn't smart enough to know all the forms of harm it should prevent.

    Replies
    1. Thanks for your comment, Kirk. Robots capable of making judgements about the orders given to them are, I think, well beyond what we can currently envisage.

    2. I'm not sure how far it is beyond what could be envisaged.
      Recently Tay, the Microsoft chatbot, began to talk in a highly inappropriate way after learning from the conversations it had with random (but ill-intentioned) people online.
      Being able to moderate its language based on ethical rules could have helped prevent some of that.
      So human instructions may not always be trusted?
      In the future, perhaps there could be some individual who often gives instructions to a machine that its ethical controller then has to override. Perhaps giving a driverless car last-minute instructions at difficult times...? Or around the house, to block the loo or kick the cat.

      There must be an audit trail of when the controller has to intervene. Should the machine be able to update its responses to certain individuals based on its previous encounters?

      All highly speculative... but just thought I'd mention it!

  2. Very well said: we should keep a balance in this world between humans and robots. Of course we will not let a war between humans and machines happen in the future.

  3. Very nice talk! But I've been wary in the past of making too much progress towards robots capable of modelling their own actions, however well thought out and suitably controlled, because the insights and solutions from such experiments could be reused by someone with less responsible intentions. Cultural evolution offers ample evidence of technological progress in one area winding up somewhere quite different. However, I think you're right to conduct these experiments, and there is really no other way to proceed; the alternative is machines that will be used anyway, in circumstances where they cannot act as ethically as we would like when it matters. So how do we prevent unsuitable cultural evolution? There need to be principles, like the EPSRC's (but stronger), and laws which ensure that research is responsible and products are suitable.

    Replies
    1. Thank you, Paul! You are right to be cautious. In fact we have new work coming out which explores the question of unethical robots. I will report on it here soon!
