If a train continues on its current course, it will kill a work crew of five down the track. However, a signalman is standing by a switch that can redirect the train to another branch. Unfortunately, a lone worker will be killed if the train is switched to the new track. If you were the signalman, what would you do? What should a computer or robot capable of switching the train to a different branch do?
You are hiding with friends and neighbors in the cellar of a house, while outside enemy soldiers search. If they find you, it is certain death for everyone. The baby you are holding in your lap begins to cry and won’t be comforted. What do you do? If the baby were under the care of a robot nurse, what would you want the robot to do?
Philosophers are fond of thought experiments that highlight different aspects of moral decision-making. Responses to a series of dilemmas, each of which involves saving several lives by deliberately taking an action that sacrifices one innocent life, illustrate clearly that most people’s moral intuitions do not conform to simple utilitarian calculations. In other words, for many situations, respondents do not perceive that the action that will create the greatest good for the greatest number is the right thing to do. Most people elect to switch the train from one track to another in order to save five lives, even when this will sacrifice one innocent person. However, in a different version of this dilemma there is no switch. Instead, you are standing on a bridge beside a large man. You can save five lives down the track by pushing the man to his certain death off the bridge into the path of the onrushing train. With this variant, only a small percentage of people say they would push the man off the bridge.
Introducing a robot into these scenarios raises some intriguing and perhaps disturbing possibilities. For example, suppose that you built a robot that is standing next to the large man. What actions would you want the robot to consider? Would you have programmed the robot to push the large man off the bridge, even if you would not take this action yourself? Of course, the robot might come up with a different response to achieve a similar end – for example, by jumping off the bridge into the train’s path: a rather unappetizing solution for us humans.
Given that ‘driverless’ trains are already common at airports and in the metro systems of London, Paris, and Copenhagen, it’s not unlikely that a computer system will someday face a challenge similar to the example at the top of this article. While the prospect of a robot nurse’s actions affecting the lives of many is less likely, the development of service robots to help care for the elderly and homebound, and eventually for children, also places such scenarios within the realm of possibility. Of course, computer systems can be responsible for harming people in other ways too: present-day harm to many people is most likely to arise from computer systems that control power grids, financial networks, or lethal weapons systems.
(Ro)bots is a term my colleague Colin Allen and I coined to represent both physical robots and virtual agents (bots) roaming within computer networks. The new field of machine morality, or machine ethics, focuses on the practical challenge of building (ro)bots that explicitly engage in making moral decisions. The values programmed into (ro)bots are largely those of the engineers and designers who build the system, or those of the companies for which they work. For simple applications, the designers and engineers can anticipate all the situations the system will encounter, and can program in appropriate responses. However, some method for explicitly evaluating courses of action will need to be programmed into any (ro)bot likely to encounter circumstances its designers could not anticipate. Machine morality is concerned with the values, principles and mechanisms that support these evaluations. It is not necessary that (ro)bots simulate human moral decision-making. However, sensitivity to the ethical considerations informing the choices humans make will certainly be front and center among the criteria for evaluating the actions of ‘Artificial Moral Agents’ (AMAs).
Ensuring that autonomous (ro)bots are safe has made building moral decision-making faculties into such systems a practical necessity. Either that, or we must stop developing autonomous (ro)bots for an ever-broader array of activities, including weapons systems and service robots.
To date, machine morality has largely been a series of philosophical reflections peppered with a few experiments implementing aspects of moral decision-making within computer systems. But the field touches upon a broad array of philosophical issues and controversies. These include questions regarding:
• The function of ‘top-down’ theories of ethics. Do rule-based ethical theories such as utilitarianism, Kant’s categorical imperative, or even Asimov’s laws for robots provide practical procedures (algorithms) for evaluating whether an action is right?
• (Ro)botic moral development and moral psychology. How might an artificial agent develop knowledge of right and wrong, moral character, and the propensity to act appropriately when confronting new challenges?
• The role of emotions. Will (ro)bots need simulated emotions in order to function as adequate moral agents? How? For what purpose? When? Perhaps more obviously philosophical, how can one reconcile the negative impact of emotions on moral decisions (as emphasized by the Greek and Roman Stoic philosophers) with the motivating power of moral sentiments (David Hume) and the apparent need for emotional intelligence?
• The role of consciousness. Can machines be conscious or have (real) understanding? Would an experience-filled consciousness be necessary for a machine to be a moral agent?
• Criteria for ascribing moral agency. What faculties does an agent require in order to hold it morally responsible or legally liable for its actions? Should society grant rights to those agents it deems responsible for their actions?
Machine ethics approaches these and other questions with a consideration of the practical challenges entailed in building and evaluating AMAs that function within specific contexts. Such practical necessity forces at least some discipline upon philosophical thought experiments. As Daniel Dennett noted in a 1995 paper:
“These roboticists are doing philosophy, but that’s not what they think they’re doing… In philosophers’ thought experiments, the sun always shines, the batteries never go dead, and the actors and props always do exactly what the philosophers’ theories expect them to do. There are no surprises for the creators of the thought experiments, only for their audience or targets. As Ronald de Sousa has memorably said, much of philosophy is ‘intellectual tennis without a net’. Your [roboticists’] thought experiments have nets, but they are of variable height. ‘Proof of concept’ is usually all you strive for.” [from ‘Cog as a Thought Experiment’]
In building AMAs, a dialectic emerges between the theories of philosophers and the experimental testing of these theories within computational systems. Computers are beginning to serve as testbeds for the viability or practicality of various theories about decision-making and ethics.
A Comprehensive Approach to Ethics
Overall, machine ethics is itself a grand thought experiment; an enquiry into whether moral decision-making is computationally tractable. And building moral machines forces both philosophers and engineers to approach ethics in an unusually comprehensive manner.
Philosophers commonly approach moral decision-making as an activity based in the mind’s capacity to reason. Applying rules and duties, or considering the consequences of a course of action, is essential for determining which actions are acceptable and which are not. However, for most moral challenges that humans respond to in daily life, the capacity to reason depends on a vast reservoir of knowledge and experience. Emotions, conscience, an understanding of what it means to be an actor in a social world, and an appreciation of the beliefs, desires and intentions of others all contribute to working rationally through challenges, especially where values conflict. Furthermore, unconscious processes often drive responses to many challenges.
When considering human moral behavior, we tend to take for granted the underlying thought mechanisms that support the ability to reason morally or exercise a virtue such as courage or honesty. However, computer scientists understand that building systems to perform even simple tasks requires the painstaking implementation of the underlying mechanisms that support complex functions. Part of the engineer’s art lies in recognizing what subsystems are necessary and what architecture will support their integration. The control architecture of a (ro)bot ensures that its actions fall within safe parameters, and this architecture must increasingly cover cases where sensitivity to moral considerations is essential. And (ro)boticists have learnt through experience that overlooking even one small consideration can determine whether a project succeeds, fails to perform adequately, does not function at all, or functions destructively.
Machine ethics is a multi-disciplinary field that requires input from computer scientists, philosophers, social planners, legal theorists and others. The different contributors to the design of an Artificial Moral Agent are likely to focus on very different aspects of moral decision-making. A moral philosopher would stress the importance of analyzing the computational requirements for implementing a theory of ethics, such as Kantianism, within the (ro)bot. An evolutionary psychologist would underscore the way in which evolution has forged innate propensities to act in a social manner, perhaps even forged an innate moral grammar [see last issue]. A developmental psychologist might seek a method for educating the (ro)bot to be sensitive to moral considerations by building on one stage of learning after another. A mother, on the other hand, might emphasize the importance of the machine being empathetic and caring. Gaining access to and interpreting cases that illuminate the application of law to differing circumstances would be important from the perspective of a lawyer or judge. It would also be desirable for the system to have a way to evaluate its actions and learn from past mistakes.
In particular, a roboticist would want to know how the system can acquire the information it needs to make good decisions. What sensors and memory systems will the (ro)bot need? How will it integrate all its information in order to forge an adequate representation of each situation? Which responses to features in the environment can be built into the system, and how will it recognize challenges it will need to deliberate upon?
The engineers building the (ro)bot will design different modules or subsystems to handle each of these tasks. However, combining these into a working system whose behavior is safe and honors human values requires a more thorough understanding of the mental mechanics of ethical decision-making than presently exists.
People are rather imperfect in their ability to act morally. But even if moral (ro)bots will need some form of emotional intelligence, they need not be subject to the desires, prejudices or fears that get in the way of people heeding their better angels. This raises the possibility that artificial entities might be more moral than people. Furthermore, (ro)bots may be less bounded than humans in the number of options they can consider in response to a moral challenge. They might select a better course of action than their human counterparts could conceive. However, it’s also possible that human moral decision-making is facilitated by something essential that cannot be simulated in (ro)bots. Some theorists argue that (ro)bots are not the kind of entities that can be true moral decision-makers because they lack consciousness, free will, moral sentiments, or a conscience. However, even if this is correct, it does not obviate the practical necessity of implementing some aspects of moral decision-making in (ro)bots in order to ensure that their choices and actions do not harm humans or other entities worthy of moral consideration.
Top-Down, Bottom-Up, and Supra-Rational
Three broad categories are helpful for teasing out various dimensions of moral decision-making important for machine ethics. These are:
• Top-down approaches. ‘Top-down’ refers to the use of rules, standards or theories to guide the design of a system’s control architecture (eg the Ten Commandments, the utilitarian maxim, or even Isaac Asimov’s ‘Three Laws of Robotics’). But what, for example, would be the computational requirements for a computer to follow Asimov’s laws?
• Bottom-up approaches. Rules are not explicitly defined in bottom-up approaches – rather, the system learns about them through experience. In a bottom-up approach, the (ro)bot explores various courses of action, is rewarded for morally praiseworthy behavior, and learns. The theory of evolution inspires some of the bottom-up techniques adopted by computer scientists. With the advent of more sophisticated learning algorithms, theories of moral development, such as those of Jean Piaget, Lawrence Kohlberg and Carol Gilligan, will inspire other bottom-up approaches to building Artificial Moral Agents.
• Supra-rational faculties. ‘Supra-rational’ refers to mental faculties beyond the ability to reason. Agents require faculties in addition to the capacity to reason in order to act morally in many situations. Emotions, consciousness, social acumen, and embodiment in an environment are among the supra-rational faculties essential for much moral decision-making.
In his robot stories, science fiction writer Isaac Asimov proclaimed three Laws that he said should guide the behavior of robots (do not allow humans to come to harm, obey humans, and preserve your own existence). Asimov’s Laws are what many people think of first when they think about rules for robots. However, in story after story Asimov demonstrated that even these three rather intuitive principles, arranged hierarchically, can lead to countless problems. For example, what should the robot do when it receives conflicting orders from different humans? Asimov’s stories illustrate the limits of any rule-based morality.
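To see why such a hierarchy is not yet a decision procedure, consider a minimal sketch in Python of an Asimov-style action filter. Everything in it is assumed for illustration: the fields such as harms_human and obeys_orders_from are hypothetical placeholders, and computing them reliably in the world would be the hard part. Even granting those inputs, the ranking of the laws says nothing about whom to obey when orders conflict:

```python
# A minimal, illustrative Asimov-style rule hierarchy.
# All predicates are assumed placeholders; evaluating them is the hard problem.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Candidate:
    name: str
    harms_human: bool = False                                    # First Law concern
    obeys_orders_from: List[str] = field(default_factory=list)   # Second Law concern
    preserves_self: bool = True                                  # Third Law concern

def choose(candidates: List[Candidate]) -> Candidate:
    # First Law: discard any action that harms a human.
    safe = [c for c in candidates if not c.harms_human]
    if not safe:
        raise RuntimeError("Every option harms a human; the laws are silent.")
    # Second Law: prefer actions that obey a human order...
    obedient = [c for c in safe if c.obeys_orders_from]
    # ...but if different humans have ordered incompatible actions,
    # the hierarchy gives no way to rank them.
    if len({tuple(sorted(c.obeys_orders_from)) for c in obedient}) > 1:
        raise RuntimeError("Conflicting orders; the laws do not say whom to obey.")
    pool = obedient or safe
    # Third Law: among what remains, prefer self-preservation.
    return next((c for c in pool if c.preserves_self), pool[0])
```

The sketch ‘works’ only because the moral substance has been pushed into its inputs; the philosophical questions reappear the moment one asks how those inputs are to be computed.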
While the history of moral philosophy can be read as a long debate about the limitations inherent in the various ethical theories proposed, top-down theories are nevertheless an obvious starting place for discussing the prospects of building AMAs. The Golden Rule, utilitarianism, and Kant’s categorical imperative are among the attempts to make all ethical rules subservient to a single over-riding principle. However, theorists are discovering that implementing such principles in a computer system is by no means a straightforward exercise. Even the utilitarian proposal that one calculate the net benefit of different courses of action to determine which maximizes the greatest good for the greatest number is far from trivial. To perform such a calculation the computer would require extensive knowledge about the world, about human psychology, and about the effects of actions in the world. The computational load on the system would be tremendous. One cannot reduce ethics to a simple algorithm.
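Even a deliberately naive rendering of the utilitarian maxim makes the point. In the sketch below (Python, with invented numbers for the switching dilemma), the arithmetic is trivial; what is not trivial is supplying the inputs: who is affected, with what probability, and on what common scale of well-being. That is exactly the worldly knowledge the calculation presupposes.

```python
# An illustrative expected-utility calculator for the utilitarian maxim.
# The hard part is not the arithmetic but the assumed inputs: who is affected,
# outcome probabilities, and a common utility scale across people.
from typing import Dict, List, Tuple

# Each action maps to a list of (probability, {person: change in well-being}).
Outcome = Tuple[float, Dict[str, float]]

def expected_net_benefit(outcomes: List[Outcome]) -> float:
    """Aggregate utility over everyone affected, weighted by probability."""
    return sum(p * sum(effects.values()) for p, effects in outcomes)

def utilitarian_choice(actions: Dict[str, List[Outcome]]) -> str:
    """Select the action with the greatest expected aggregate utility."""
    return max(actions, key=lambda a: expected_net_benefit(actions[a]))

# A toy version of the switching dilemma, with made-up numbers:
actions = {
    "do_nothing":   [(1.0, {f"worker_{i}": -100.0 for i in range(5)})],
    "switch_track": [(1.0, {"lone_worker": -100.0})],
}
print(utilitarian_choice(actions))  # 'switch_track', given these assumptions
```

Scaling such a calculation to real situations would require modeling indefinitely many affected parties and indirect consequences, which is the computational load referred to above.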
Bottom-up approaches also have their limitations. Artificial life experiments, genetic algorithms and robotic assembly techniques inspired by evolution are far from producing the complex and sophisticated faculties needed for higher-order cognitive processes such as moral decision-making. The learning algorithms computer scientists have developed to date are far from facilitating even the kind of learning we see in very young children. However, the promise of taking an artificial agent through a process of moral development, similar to the way children learn about right and wrong, remains alive, even if the technologies required to do so are not yet available.
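A toy example helps to locate the circularity in the bottom-up idea. In the sketch below (Python; the single-situation learner and the moral_reward function are invented for illustration), the agent does learn to ‘help’ rather than ‘harm’, but only because a human has already encoded what counts as praiseworthy in the reward signal:

```python
# A toy bottom-up learner: the agent is rewarded for 'morally praiseworthy'
# behavior and gradually comes to prefer it. The moral knowledge, however,
# is hidden inside the hand-written reward function.
import random
from collections import defaultdict

ACTIONS = ["help", "ignore", "harm"]

def moral_reward(action: str) -> float:
    # Placeholder reward signal: someone still had to decide these values.
    return {"help": 1.0, "ignore": 0.0, "harm": -1.0}[action]

def train(episodes: int = 1000, alpha: float = 0.1, epsilon: float = 0.1) -> dict:
    values = defaultdict(float)  # estimated value of each action
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                 # explore
        else:
            action = max(ACTIONS, key=lambda a: values[a])  # exploit
        # Nudge the estimate toward the observed reward.
        values[action] += alpha * (moral_reward(action) - values[action])
    return dict(values)

print(train())  # converges on preferring 'help', because we rewarded it
```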
David Hume famously warned against deriving an ought from an is. Some moral philosophers take this to mean that one cannot determine what is right and good from moral psychology, from the way people actually make decisions. These philosophers struggle to keep at bay the game theorists and evolutionary psychologists who propose that evolution has built inherent biases into the structure of the mind determining much of what people believe to be right and good. Such philosophers are correct in their desire to separate reasoning about what we ought to do from the study of the psychological mechanisms that influence decisions. However, their excessive stress on the importance of moral reasoning has contributed to a fragmented understanding of moral decision-making.
The reasoning skills of (ro)bots will need to be supported by an array of other cognitive mechanisms that will serve as sources of essential information and will help frame the salient features of any challenge. Engineers have already come to recognize that emotional intelligence, sociability and having a dynamic relationship with the environment are necessary for (ro)bots to function competently in social contexts. For example, a (ro)bot will need to read facial expressions and other non-verbal cues in order to understand the intentions and beliefs of people with whom it is interacting. This understanding necessitates the (ro)bot having functional skills that are often associated with being aware (as opposed to being merely computational, merely running a program). The (ro)bot will also need a theory of mind – ie, it will need to appreciate that others have minds of their own, and so have beliefs, desires, and intentions that differ from those of the (ro)bot.
How far will engineers progress in building distinctly human attributes into their (ro)bots? The rich engineering projects already begun in fields like affective (‘emotional’) computing, social robotics and machine consciousness surprise many people who view computers and robots as mere machines. However, emotions, social skills and (self-)awareness are unlikely by themselves to be sufficient to build AMAs (even if engineers can build sub-systems into (ro)bots that create supra-rational faculties). Top-down approaches, bottom-up approaches and supra-rational faculties will need to be combined. The challenge for philosophers and engineers is to determine the necessary cognitive faculties, the computational requirements needed to support those faculties, and the available techniques for building those faculties into a (ro)bot.
The Future of Machine Morality
Eventually we may have artificial systems with intelligence comparable to humans’ – a subject which engenders a great deal of interest, and some anxiety. Might such systems be deemed moral agents with both rights and responsibilities? Would it make sense to punish an artificial agent when it performs an immoral or illegal act? Can society ensure that advanced forms of Artificial Intelligence (AI) will be friendly to humans? If not, should we ban research into AI? The prospect that future intelligent (ro)bots might want to override restraints suggests that moral propensities should be integral to the foundations of complex computer systems, and not treated as add-ons or secondary features.
Reflections on the possibilities can serve as fascinating and illuminating thought experiments. For example, philosophers and legal theorists find that considering the criteria for eventually granting (ro)bots rights and responsibilities contributes to a better understanding of when any agent should be held culpable.
Given the relatively primitive state of AI research, machine morality tends to be highly speculative. Yet themes that border on science fiction often mask more immediate considerations. For example, fears that superior (ro)bots will one day threaten humanity underscore a societal fear that science is untrustworthy and technology a juggernaut already out of control.
For the immediate future, machine morality research will be grounded in the challenges posed by presently available or imminent technologies. It will thrive as a continuing enquiry into the prospect for computerizing moral decision-making, spurred on by both practical and philosophical challenges. As (ro)bots with explicit moral decision-making faculties are developed, new markets for ingenious products will open up. However, some of the most significant research into machine morality will be philosophical in nature, and comprehensive reflection on teaching (ro)bots right from wrong will focus attention on many aspects of moral decision-making that have often been taken for granted. The building of moral machines provides a platform for the experimental investigation of decision-making and ethics. The similarities and differences between the way humans make decisions and what approaches work in (ro)bots will tell us much about how we humans do and do not function, and much about what, and who, we are.