My vacuum cleaner also brakes for ladybirds

Or: Why lying bots help when it comes to morals - Interview with Oliver Bendel

Do war drones need morals? Do autonomous combine harvesters have to act ethically? What would happen if a customer disclosed his intention to commit suicide to his digital assistant? Why do we need to ask these questions when we think about artificial intelligence? Oliver Bendel considers these questions. And he has his vacuum cleaner brake for insects. But not all of them – out of principle.

Born in Ulm (on the Danube) in 1968, Oliver Bendel studied philosophy, German and information science before gaining a doctorate in business information systems from the University of St. Gallen. Since 2009 he has worked at the Institut für Wirtschaftsinformatik (the institute for business information systems) at the Hochschule für Wirtschaft FHNW.


The issues upon which you focus include animal-machine ethics. What is the purpose of this?

Machine ethics is about creating machines that can behave and react in a morally appropriate manner. For a couple of years, I have been trying to combine machine ethics with animal ethics and animal protection. This means that I try to think about moral machines, and sometimes to build them as well: machines that focus on the welfare of animals, so that animals are not injured or harmed.


Animal ethics concerns animals that can feel. Now you have created a robot with the wonderful name “Ladybird”, which is also designed to protect ladybirds. Can ladybirds really feel?

Since Jeremy Bentham’s work, a key argument in animal ethics has been the ability of animals to feel. His argument was that it wasn’t relevant whether they could think or speak, but rather whether they could suffer. I take that one step further and look at the animal’s existence, rather than its ability to suffer or feel. I am not saying that we should protect and save every creature: if you have an infestation of cockroaches in your house, you would do better to exterminate them.
 
For me it is about the principle. I am concerned with demonstrating that such animal-friendly machines are possible: that we can, for example, use decision trees to realise them. It is possible to incorporate moral reasoning within these decision trees. The example with the ladybird illustrates this very clearly. The machine we are building has particular colour sensors that can recognise patterns and images, and really does recognise ladybirds, sparing their lives. Allow me to define the principle more clearly: it is possible to build machines that follow specific ethical models and moral rules.
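
To make the principle more tangible, here is a minimal sketch in Python of what such a predefined rule could look like. The sensor interface and function names are assumptions made for illustration; this is not the actual Ladybird prototype, merely the general idea of a moral rule wired into a decision.

    # Minimal sketch of an "animal-friendly" rule for a vacuum robot.
    # Assumption: a hypothetical sensor returns a label for the pattern it sees.
    PROTECTED_PATTERNS = {"ladybird"}   # patterns the machine is told to spare

    def on_object_detected(pattern_label: str) -> str:
        """Decide whether the vacuum should stop or carry on."""
        if pattern_label in PROTECTED_PATTERNS:
            return "stop"        # moral rule: do not vacuum up a ladybird
        return "continue"        # anything else is treated as dirt

    # Example: the colour sensor reports a ladybird-like pattern.
    print(on_object_detected("ladybird"))   # -> "stop"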


But is there such a thing as a universal moral code?

There are things that bind us all over the world. However, many situations do not demand universal morals. If my robot vacuum cleaner is cleaning, then I want it to clean the same way that I would. My interest lies in creating machines that behave in a particular way when they encounter animals. I want a combine harvester that recognises a fawn and brakes before reaching it, or wind turbines that switch off when bats or flocks of birds approach. Whether this moral imperative can be justified against operational safety or any subsequent downtime remains open. Or what about self-driving cars that brake for certain animals or perform avoidance manoeuvres? We put up signs to warn about hedgehogs and toad crossings, yet nothing really changes: drivers just drive on. What we have to think about here is whether we could build cars that brake independently for animals – for moral or other reasons.

The idea is not to build cars that don’t move. There is no point in building a car that brakes for every single insect. I was in Madeira once. After one day of driving around, my car was covered in butterfly wings. That’s the price of mobility. However, we could undoubtedly build cars that are capable of braking for toads, if this didn’t pose a hazard to other road users or any cars that might be following.


Ethics always has a cultural context, dependent on the function of the AI. A military drone will operate within a very different ethical context to the combine harvester we mentioned earlier.

You are quite right: it wouldn’t work to give all of the world’s robots the same morals. There is no universal moral standard, and different types of robots have different requirements. There is no sense in a battle robot that doesn’t kill. However, a nursing or service robot should never harm anyone. This is why I focus on relatively straightforward areas, such as the home. Certain types of robots can be used there – like the robot vacuum cleaner – and for these, I can define a particular moral code. I don’t see any problems in this respect. If somebody else’s moral standards differ, they should buy a different robot vacuum cleaner.


The artificial intelligence developed by Deutsche Telekom is a digital assistant, a chatbot. Why should we be concerned about moral standards in a totally virtual being?

If we use autonomous systems for customer relationships, then we should think about how these are designed. We shouldn’t design them so that they abuse, insult or attack users, or say anything like, “I don’t care if you want to commit suicide, let’s change the topic.” At the moment we are quite naive in how we approach chatbots. Hardly any of them will disclose, for example, which sources they have consulted. If I ask Google Assistant the question, “Who is Rihanna?”, it will begin by saying, “Wikipedia says....” I think this is a good method because, by answering in this way, the assistant cites its source. This way, I can decide whether I believe the source – and, in turn, the assistant – or not.


You have already built both moral and immoral bots. What was your intention?

We built two chatbots, the Goodbot and the Liebot. The Goodbot recognises the user’s problems when the user voices them. In contrast, the Liebot lies systematically. We have taught it automated lying strategies and refer to it as an immoral machine. Ultimately, it should help us develop reliable and trustworthy machines. Developing the Liebot has allowed us to prove that it really is possible to teach machines to lie. The Liebot employs a variety of strategies to manipulate statements that it believes to be true or that it has obtained from so-called trustworthy sources. Everything is possible when it comes to building chatbots.
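
To give a rough idea of what “manipulating a statement believed to be true” can mean in practice, here is a minimal sketch of one very simple lying strategy, plain negation. The Liebot itself uses several, far more sophisticated strategies; the function below is only an illustrative assumption, not its actual code.

    # Toy example of a single lying strategy: negating a statement the bot
    # holds to be true. Purely illustrative; the real Liebot works differently.
    def negate(statement: str) -> str:
        """Turn a simple 'X is Y' statement into its negation."""
        if " is " in statement:
            return statement.replace(" is ", " is not ", 1)
        return "It is not true that " + statement

    true_statement = "The sky is blue"
    print(negate(true_statement))   # -> "The sky is not blue"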

I don’t want to say that the Goodbot is already the best solution. It could possibly create new problems. Perhaps people might place too much trust in a machine that appears to behave morally. However, at the final point in the escalation chain, our Goodbot does do something important – it gives out the emergency hotline number. If we use autonomous systems, we have to give a great deal of attention to their design and ensure that they do not harm people either physically or emotionally.


It currently makes no difference whether I say to Tinka: “Oy! Extend my contract,” or if I say, “My darling lady, would you please do me the honour of helping me to extend my contract?” Are we encouraging brusque communication by using bots?

It could be a problem if we speak to and treat robots or assistants however we choose. This problem of brutalisation with regard to animals was already recognised by ancient philosophers and by great thinkers like Kant. They argued that, as a rule, we could treat animals as we wanted; however, we should not, because we then become brutal and treat other people badly. It was liberating when Jeremy Bentham said that it was about the animal itself. Indeed, there are also chatbots that react to insults and respond by saying, “Don’t do that. Behave, please!” Of course, you can also teach them this.

Prof. Dr. Oliver Bendel is an expert in e-learning, knowledge management, social media, mobile business, avatars and agents, information ethics and machine ethics.


How do you teach morals to a machine? What formula do morals have?

As a rule, most types of bots are rule-based. You teach them certain rules, which they then strictly follow. It is also possible to feed them with cases and then have them compare these. I have personally developed decision trees that integrate moral reasoning. There is enormous scope in how a machine can take moral decisions. You could theoretically also associate the options with Facebook likes and determine behaviour accordingly: the machine would act according to the number of likes each option receives. In the case of Ladybird, it’s very simple. As soon as it recognises something as a ladybird, it takes the decision not to vacuum it up, based on a predefined rule. When it comes to models for assisted-driving systems and self-driving cars, things become more complicated. The machine needs to answer various questions: How big is the animal? How old is the animal (to the extent the machine can recognise this)? How healthy is the animal? A decision is ultimately taken based upon the responses: emergency stop, normal stop, or keep driving. For example, if the machine recognises that the animal is very small, and possibly also old or sick, then it runs over it. Whether this is good moral behaviour is a matter of dispute. However, we can add clarity to morals and operationalise them.
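
A hedged sketch of such a decision tree for the driving case could look like this in Python. The attributes, thresholds and the order of the checks are assumptions for illustration, not values from a real assistance system.

    # Illustrative moral decision tree for an assisted-driving system, following
    # the questions above: how big, how old, how healthy is the animal, and is
    # it safe to brake hard? All thresholds are assumed for the example.
    def braking_decision(size_cm: float, is_old: bool, is_sick: bool,
                         safe_to_brake_hard: bool) -> str:
        """Return 'emergency stop', 'normal stop' or 'keep driving'."""
        if size_cm < 5:                  # very small animal: drive on
            return "keep driving"
        if is_old or is_sick:            # old or sick animal: drive on (disputed!)
            return "keep driving"
        if safe_to_brake_hard:           # large, healthy animal, braking is safe
            return "emergency stop"
        return "normal stop"             # brake, but without endangering traffic

    print(braking_decision(size_cm=60, is_old=False, is_sick=False,
                           safe_to_brake_hard=True))   # -> "emergency stop"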
 

In this whole discussion about animal-machine ethics and human-machine ethics, what is the situation with machine-machine ethics? After all, there are machines that interact with other machines.

Earlier on, you made the distinction between animals that can suffer and feel and those that cannot. You’re right, opinions differ as to whether insects can suffer or not. Many experts say that insects can’t suffer, and ladybirds would fall into this category. However, it is my belief that cockroaches, ladybirds and other insects have certain interests, possibly also a will to live. I would dispute this for machines. Machines are not moral creatures. I don’t believe that we have to behave morally towards machines. Take the brutalisation we mentioned: there, it is the human being that is the subject of morals, not the machine. Nor do I believe that we should treat machines however we like – not because of the machine itself, but because this brutalises the human being. Perhaps we should still teach machines to treat each other with care, or to pay attention to each other. However, there are economic and technical reasons for this. Machines should treat each other well so they don’t break down.


There are also robot-ethics specialists who advocate rights for robots. What is your view of this?

At present, I don’t see any basis for granting robots rights. It doesn’t make any sense – I would even go so far as to suggest that we should regard many types of robots more as slaves.
 
I would make a distinction between a robot’s existence and that of an animal. I don’t believe that a robot has a will to live. I would also never grant any rights to plants. I might see them as having a value – as I also would mountains – but not rights. Mountains and plants can’t have any rights. In this respect, a robot is more like a mountain than an animal.

 
