
Machine Ethics

Over the centuries, the science of ethics has been busy trying to answer questions about morality. What ought a human to do in any given situation from an ethical point of view? What set of principles should dictate their behavior? The advent of robots is now forcing science to make rapid progress on that front, since the same questions will need to be answered for robots. Asimov’s famous three laws of robotics (1942) represent only a crude starting point, one which must be complemented by a more comprehensive normative system as robots, such as driverless cars, become more intelligent, more autonomous and increasingly integrated into human lives.


Normative ethics


In a number of interesting papers, including “Towards the Ethical Robot” (1991), “The Utilibot Project” (2005) and the excellent “Machine Ethics” (2007), utilitarianism is promoted as a platform for machine ethics. According to this line of thinking, developed in the 18th century, the morally “right” behavior is the one which maximizes utility, i.e. brings about the greatest amount of good for the greatest number. Under that principle, all humans are treated perfectly equally, which is why some philosophers refer to it as “impartial morality”. Since it demotes subjectivity and is amenable to quantification, utilitarianism appears to be a valuable starting point for normative ethics and, by extension, for the design of artificial intelligence systems.
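
As a rough illustration only, the sketch below shows what a bare-bones utilitarian decision rule could look like in code: every candidate action is scored by summing the utility it brings to each affected person, with all persons weighted equally. The scenario, the numbers and the function names are assumptions invented for this example, not a description of any real system.

```python
# A minimal sketch of utilitarian action selection (illustrative only).
# Each person counts equally; the "right" action is the one with the
# highest aggregate utility across everyone affected.

def total_utility(outcome):
    """Sum per-person utilities; every individual is weighted the same."""
    return sum(outcome.values())

def choose_action(actions):
    """Pick the action whose outcome maximizes aggregate utility."""
    return max(actions, key=lambda name: total_utility(actions[name]))

# Hypothetical outcomes, expressed as utility per affected person
# (negative values represent harm).
actions = {
    "act":        {"person_a": -1, "person_b": +3, "person_c": +3},
    "do_nothing": {"person_a":  0, "person_b": -2, "person_c": -2},
}

print(choose_action(actions))  # -> "act": aggregate utility 5 vs. -4
```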


The Trolley problem


Utilitarianism has its limitations though, as brilliantly demonstrated by Judith Jarvis Thomson in her paper “The Trolley Problem” (1985). Faced with certain ethical dilemmas, the utilitarian response, whilst mathematically correct from a cost-benefit perspective, produces a result which most people would deem unfair. To illustrate the point, consider the following simple trolley scenarios.


  • The switch scenario—a runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You can save these five people by hitting a switch that will turn the trolley onto a sidetrack. Unfortunately, there is a single workman on the sidetrack who will be killed if you hit the switch. Under this scenario, the utilitarian course of action is generally deemed to be the right one: hitting the switch is permissible for the greater good.

  • The footbridge scenario—you are now standing on a footbridge spanning the tracks, between the oncoming runaway trolley and the five workmen. Standing next to you is a man with a huge backpack. The only way to save the five people is to push him off the footbridge and onto the tracks below. The man will die as a result, but his body and the backpack will stop the trolley. As in the previous case, the utilitarian objective of the greater good would suggest that the man with the backpack be pushed over. The equation is the same as in the switch scenario: one human being versus five (see the sketch after this list). And yet that course of action feels (and probably is) wrong, almost self-evidently. Our brain is not mistaken either, since it reacts very differently to each scenario. Why?
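
The sketch below (with invented scenario names and a crude “lives lost” encoding) makes the point numerically: a pure body-count utilitarianism assigns exactly the same score to both scenarios and recommends intervening in each, even though human intuition sharply diverges on the footbridge.

```python
# Under a pure body-count utilitarianism, both scenarios collapse into the
# same arithmetic: intervene (1 death) vs. do nothing (5 deaths).
# The scenario names and the encoding are illustrative assumptions.

scenarios = {
    "switch":     {"intervene": 1, "do_nothing": 5},  # divert onto the sidetrack
    "footbridge": {"intervene": 1, "do_nothing": 5},  # push the man off the bridge
}

for name, deaths in scenarios.items():
    verdict = min(deaths, key=deaths.get)  # fewest expected deaths wins
    print(f"{name}: utilitarian verdict = {verdict}")

# Both print "intervene", yet most people judge the footbridge intervention
# to be wrong -- the score alone misses whatever our intuition is tracking.
```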

There are countless variations on the same theme considered by Thomson in her paper. They all illustrate the disturbing complexity involved in coming up with a set of robust ethical rules—as soon as an ethical principle seems to make universal sense, it is invalidated by a new scenario.


From Trolleys to self-driving cars


The Trolley Problem has already been identified by car manufacturers as the source of a number of thorny dilemmas for self-driving cars. How should the car be programmed to act in the event of an unavoidable accident? Should it adopt a utilitarian approach and seek to minimize the loss of life, even if it means sacrificing its occupants? For example, should a car with two occupants voluntarily crash into a wall instead of running into a group of pedestrians unexpectedly crossing the road? Would many people buy a self-driving car obeying that rule? Should the owner be allowed to modify that rule after purchasing the car? This is only the tip of the iceberg of moral issues which will need to be addressed for self-driving cars, and more generally robots, to gain broad acceptance.
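
To make the dilemma concrete, here is a purely hypothetical sketch of what such a programmable crash policy might look like. The "occupant_weight" parameter, the maneuvers and the harm figures are all assumptions introduced for illustration; no manufacturer is known to expose such a setting.

```python
# Hypothetical crash-policy sketch. "occupant_weight" encodes how much the car
# values its own passengers relative to people outside it: 1.0 is the strictly
# impartial (utilitarian) setting, larger values bias it toward its occupants.

def choose_maneuver(options, occupant_weight=1.0):
    """Return the maneuver with the lowest weighted expected harm."""
    def weighted_harm(name):
        opt = options[name]
        return occupant_weight * opt["occupant_harm"] + opt["bystander_harm"]
    return min(options, key=weighted_harm)

options = {
    "swerve_into_wall": {"occupant_harm": 2, "bystander_harm": 0},
    "stay_on_course":   {"occupant_harm": 0, "bystander_harm": 5},
}

print(choose_maneuver(options, occupant_weight=1.0))  # impartial -> "swerve_into_wall"
print(choose_maneuver(options, occupant_weight=4.0))  # owner-biased -> "stay_on_course"
```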


Science of ethics vs. technology


The history of the science of ethics shows that mankind has had trouble agreeing upon what is moral and what is not once beyond the obvious. This does not bode well for programming the algorithms of the new generation of robots, and may trigger a need to voluntarily limit their capabilities until the relevant ethical questions are satisfactorily answered. Artificial intelligence may therefore only advance as fast as the slower of its two enablers: technological capability and the science of ethics.


Notes: Asimov’s rules


I. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

II. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

III. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
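
One way to read the three laws as a program, offered here only as a rough sketch, is as a lexicographic priority over candidate actions: harm to humans is minimized first, disobedience of human orders second, and damage to the robot itself last. The action names and fields below are invented for illustration.

```python
# Asimov's three laws read (loosely) as a lexicographic priority over actions.
# Python compares tuples left to right, so the First Law always dominates the
# Second, which dominates the Third. All fields and numbers are illustrative.

def law_priority(action):
    return (action["human_harm"],        # First Law: harm to humans
            action["orders_disobeyed"],  # Second Law: disobedience
            action["self_damage"])       # Third Law: damage to the robot

def choose(actions):
    """Pick the candidate action with the lowest (best) priority tuple."""
    return min(actions, key=lambda name: law_priority(actions[name]))

actions = {
    "obey_order_but_risk_self": {"human_harm": 0, "orders_disobeyed": 0, "self_damage": 1},
    "refuse_and_stay_safe":     {"human_harm": 0, "orders_disobeyed": 1, "self_damage": 0},
}

# Self-preservation (Third Law) must yield to obedience (Second Law):
print(choose(actions))  # -> "obey_order_but_risk_self"
```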
