The Future of Moral Machines

Colin Allen

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings.

I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.


This week, our Forum shares Professor Allen’s essay with the New York Times. Please visit The Stone to read the essay in full — then return here to discuss it with Colin and our readers.

19 comments to The Future of Moral Machines

  • Laws are Models, simplifications of Reality.

    Asimov’s laws, law as used in our courts, and physical laws as discovered by science all share this property. I like to use the term “Models” to cover not only Scientific Models but also all formulas, equations, computer programs, and all other kinds of laws created by humans.

    Having enlarged the problem to a superset of Asimov’s laws we can now attack the entire problem domain of Model creation and use.

    Models are simplifications of Reality. Someone – in fact, someone intelligent, or more specifically, someone who UNDERSTANDS the situation at hand – must perform the Reduction from our complex reality to the Model. Someone must INTERPRET the law in its context. A lawyer in a courtroom must explain to the jury why a certain law applies or does not apply in the case under trial. An engineer must be able to explain why, when using the formula F = ma (Newton’s second law) in some specific case, we can ignore friction. A Scientist creating a Model (like an equation) must be able to explain why the model adequately captures all important aspects of a problem statement, so that engineers who want to use the law later understand under which preconditions the law holds and the Model works.

    We like Models because the simpler problem expressed by the Model is much easier and cheaper to solve. The Reduction from the complex reality to the simpler Model – both when creating and when using the Model – is the key operation. I discuss the enormous importance of this Reductive process and in fact observe that our entire scientific educational system is geared towards this Model use, starting with story problems in grammar school.

    Programmers are often born and raised Reductionists and gravitate towards programming, since it’s the most Reductionist profession on earth. It is hard for anyone with this stance to even see the key problem of Artificial Intelligence: How to make the computer do the Reduction (!)

    As long as we are programming, we are building Models, and WE are doing the Reductions. In order to have a computer become a moral agent, it has to Understand the world in sufficient detail that it can interpret any laws, including laws involving moral considerations, in the context of any situation it might encounter.

    There are three reasons why Model Based Methods cannot be used to IMPLEMENT automatic Reduction, based on complexity theory, epistemology, and philosophy of science considerations, respectively. I have discussed them at length in my presentations (available as videos at http://videos.syntience.com ) and may summarize them again at some point. Fortunately, as the life sciences have already discovered, there are several Model Free Methods that can be used alone or in clever combinations in situations where Models cannot be used. These are enumerated in the videos.

    So the key to creating a Moral Machine is the same as creating an Understanding Machine, one that Understands the world Holistically, without being given Models and Reductions a priori, and which is capable of doing Reduction on its own when given full access to all information about a situation in some raw form.

    A minor point: For the majority of our daily operations, humans don’t even use Models. The ability to do Reduction is equivalent to Understanding something, and if you Understand something, you can often solve it without using Models. For examples of this, see the videos; the strongest example is Evolution: “The cobra’s poison is extremely effective but cobras know nothing of biochemistry” [paraphrasing Cziko]. Evolution “understands” the world Holistically, without using Models like biochemistry. To tie this back to human intelligence, consider Evolutionary Epistemology [Campbell, Cziko] and Neural Darwinism [Edelman, Calvin].

    Syntience Inc. is working on creating such an Understanding machine based on a very powerful Model Free Method which belongs to a class called “Connectome Algorithms”: brain-inspired Modern Connectionist algorithms based primarily on Epistemological considerations. We have a complete, externally coherent, and internally consistent theory of generalized Learning, which we are testing on the task of learning human languages the same way humans learn their first language in babyhood: by observing lots of examples, using unsupervised training on a corpus without ANY models of language whatsoever. The system isn’t even told that spaces separate words (in Chinese, spaces are not used); it is fed text as a byte stream, one byte at a time.
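
    A purely hypothetical sketch of how far raw byte statistics alone can go (this is not Syntience’s Connectome Algorithm; the corpus, context length, and threshold below are invented for illustration): candidate word boundaries can fall out of nothing more than spikes in next-byte uncertainty.

    ```python
    # Toy "model-free" segmentation: read raw bytes, track how unpredictable the
    # next byte is given a short context, and guess boundaries at uncertainty spikes.
    from collections import defaultdict
    from math import log2

    def successor_entropy(corpus: bytes, order: int = 3):
        """Entropy of the next byte given the preceding `order` bytes, per position."""
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(order, len(corpus)):
            counts[corpus[i - order:i]][corpus[i]] += 1
        entropies = []
        for i in range(order, len(corpus)):
            dist = counts[corpus[i - order:i]]
            total = sum(dist.values())
            entropies.append(-sum(c / total * log2(c / total) for c in dist.values()))
        return entropies

    text = b"the cat sat on the mat and the cat ran " * 50   # stand-in for a real corpus
    h = successor_entropy(text)
    # Guess a boundary wherever uncertainty about the next byte jumps sharply;
    # with real text such spikes tend to line up with word (and morpheme) edges.
    boundaries = [i + 3 for i in range(1, len(h)) if h[i] - h[i - 1] > 0.5]
    print(len(boundaries), "candidate boundaries found")
    ```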

    For more, see http://syntience.com/links. Reduction is discussed in “Reduction Considered Harmful” and the issue of “Friendly AI” is discussed in “Problem Solved – Unfriendly AI”. The videos are important, especially if you are doing AI research.

  • Art Allen

    Machines are increasingly operating with minimal human oversight in the same physical spaces as we do. The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed.

    > I think it should be noted, here, that Allen doesn’t deny or take some exception to the cited doubling of computing power in relatively short periods of time. And for good reason, I would assert; that is, I think he knows better…

    Many Singularitarians assume a lot.

    > And it’s at this point that I take some serious exceptions to Allen’s assumptions and claims… not the least of which is that intelligence is fundamentally a computational process.

    The techno-optimists among them also believe that such machines will be essentially friendly to human beings.

    > NO serious AI researcher that I am aware of espouses such a view. Initially, at least, AI machines or devices are as indifferent to their users as our desktop PCs are presently…

    I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

    > First off, I do assume that Allen’s definition (though not stated…) is that of Kurzweil, as per his book actually defining the Singularity. Given that, Allen has every right to be “skeptical” or even challenging. And I would quite agree with his assertion that a LOT of work in this area is yet undone…

    But I would aver that mere skepticism is insufficient of itself to dispute the likely prospect of the Singularity — as defined by Kurzweil — ever occurring…

    The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally.

    > While this is a fairly “grand” statement pointing to some fundamental problems that AI does and will be confronting, it certainly shouldn’t be interpreted as asserting its demise. AI concepts, as with all science, will continue to make adjustments as new knowledge is acquired. And understandably, this can and will include “fundamental” adjustments…

    These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

    > This paragraph is a very big GULP! And I think it’s quite possible that Prof. Allen may have given Kurzweil’s later chapters short shrift. While the early AI development curve is predominantly algorithmic (and computer-based), it is fairly clear that a predictable evolutionary path is already underway. A bio-enhancement of our intelligence is already underway. This includes access to all sorts of infodata from the likes of Wikipedia, Mathematica, Google and the Internet generally. While these resources are not presently “integrated”, it is far more probable that they will be than NOT…

    This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do.

    > To the degree such citations are true, I would humbly suggest that the Professor’s colleagues brush up on recent AI and robotics research. No serious AI researcher is presently making any such claims about “morality” judgments or decisions. Without getting too far afield, many issues of jurisprudence entail only the correct interpretation of the “law”, NOT “morality”. And even present AI systems have been shown to be capable of such (routine) adjudications…

    A bar-robbing robot would have to be instructed or constructed to do exactly that.

    >True enough. “Robots” do what they are instructed to do. So, no issue with this, at all…

    On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

    > At this point, Prof. Allen just falls off the FlapDoodle cliff, so more counterpoint would be futile…

    Art Allen, Ph.D. DI

  • Jonathan Dancy

    Dear Colin Allen,

    In my view, the difficulties you see yourself as facing are in some respects misconceived, in ways that vitiate the overall picture that you offer us.

    You start by announcing that ‘the prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word ‘robot’ is old’. But what we want, I suggest, is the ability to create machines capable of responding appropriately to complex and novel situations, and the sort of appropriateness at issue is not restricted to moral appropriateness. What we want is machines that, though they are acting independently (which is not the same as autonomously), still do (or at least mimic in their behaviour) what the wise or virtuous would do. To demand that, in cases where moral considerations are relevant, the machine must be following moral principles, is to introduce into the statement of the problem a piece of optional theory. Following principles is not the main point. The machine needs to be able to pick out that option that is most appropriate to the situation (for whatever reason). To do this it need not necessarily have the concept of appropriateness, nor need it be in possession of principles of appropriateness, but still its ability to select is going to have to be pretty sophisticated.

    The difficulties derive from the fact that what is appropriate in one situation may not be appropriate in another that differs ever so slightly from the first, and there seem to be no rules or principles determining how this works. The matter is quite properly left up to judgement – at least for us, who are capable of judgement. Whether a piece of behaviour was negligent is not something determinable by the application of general criteria of negligence to the case. There are no such general criteria to apply. Nonetheless, those competent with the concept of negligence can tell, case by case, whether there is negligence there or not. (This is why the Third Restatement of Torts leaves such matters, quite properly, to the jury.) And I would add to this that once we have decided whether some behaviour was negligent, there is a further decision as to whether the negligence was culpable, and another about what the appropriate response should be. There are no rules, or principles, for any of these matters.

    What we want from sophisticated machines, then, is the ability to respond to situations in the ways that the wise and virtuous would respond. But not every feature of the response of the wise and virtuous is one that we need expect the machines to mimic. The wise regret the value that they could not promote because it was better overall to promote some other value. The virtuous regret the duties they could not perform because they have a greater duty to do something else. But this sort of rational regret is not something we need be looking to find in machines. It is enough if the machines can select the appropriate option situation by situation. But that is already asking a great deal, given the complexity of the ways in which slight differences between situations can generate large differences in the ways we should respond to them.

    It has occurred to me in the past that connectionist machines might be capable of learning from previous cases in much the same sort of way that human children do, under guidance. And if the changes in their architecture that occur as that learning takes place could be somehow transferred wholesale from one machine to another, this would make possible something that is impossible for humans, namely the direct transfer of practical competence from one practitioner to another. In our case, it seems, each has to learn for himself or herself. But perhaps this is a merely biological limitation.
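
    A minimal sketch of the kind of wholesale transfer imagined here, under the simplifying assumption that a connectionist learner’s acquired competence lives entirely in its numeric weights (the tiny network and task below are invented purely for illustration):

    ```python
    # Toy "wholesale transfer": whatever one learner has acquired (here, just a
    # weight matrix) is copied directly into a second machine, which then behaves
    # identically without any case-by-case learning of its own.
    import numpy as np

    rng = np.random.default_rng(0)

    class TinyNet:
        def __init__(self):
            self.w = rng.normal(size=(2, 1))          # all "learned competence" lives here
        def predict(self, x):
            return 1 / (1 + np.exp(-x @ self.w))      # logistic output
        def train(self, x, y, lr=0.5, steps=2000):
            for _ in range(steps):
                self.w -= lr * x.T @ (self.predict(x) - y) / len(x)

    x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [0.], [1.]])            # trivial rule: echo the second input

    teacher = TinyNet()
    teacher.train(x, y)                               # learns from examples

    pupil = TinyNet()                                 # a second, untrained machine
    pupil.w = teacher.w.copy()                        # direct transfer of competence
    assert np.allclose(teacher.predict(x), pupil.predict(x))
    print("pupil reproduces the teacher's judgements without training")
    ```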

    However these things may be, the problem is not restricted to morality. It is better to conceive of the problem in terms of the selection of appropriate responses to subtly different situations. We humans don’t have a set of principles or rules that enable us to do this. And since we don’t, we don’t have the question how to design a machine capable of following those rules on its own. And this is just as well, since we have absolutely no idea how to do that. My thought is that we should not be trying.

    Yours respectfully,
    Jonathan Dancy

  • Dear Jonathan Dancy,

    I’m delighted to see your commentary. We may, actually, be closer than you think, although clearly I framed things in a way that you disapprove of by mentioning “moral principles” off the bat.

    In my defense, I’ll paste two sentences that got cut during the editing process. These originally followed the physics-for-outfielders analogy:

    Perhaps the main role of the different ethical theories is to highlight dimensions by which actions may be retroactively evaluated — maximizing overall well-being, cultivating individual virtue, treating others with respect, or acting only in accordance with principles that could be universally willed — without requiring that these principles be active reasons for real-time decisions. (For the ethicists reading this, this makes me sympathetic to some form of moral particularism.)

    Of course, by mentioning “principles” again, you may think I’ve missed the point of particularism. But, setting aside the temporal issue of pre-action versus post-action evaluation, what I have in mind here is pretty much what you say in your SEP article on moral particularism:

    Particularists … can perfectly well say ‘this feature mattered there, and so it might well matter here–I had better have a look and see whether it does or not’. What one cannot and should not do is to say ‘it mattered there and so it must matter here’.

    My “dimensions by which actions may be … evaluated” involve identifiable features. Overall welfare is a feature, and the goal of maximizing it is a reasonable “rule of thumb”, i.e., what the dictionary defines as “a broadly accurate guide or principle, based on experience or practice rather than theory”. Rules of thumb are defeasible principles, I would say. You may want to reserve the term “principle” for “general principles” but I don’t think the addition of “general” here is redundant. While I would not raise things such as “maximize overall well-being” to the status of general principles, they point to important elements of moral judgment nonetheless — one of those things that mattered there and might matter here. Of course, “welfare” is a very abstract umbrella term, and we also need practical experience to determine the specific features that fall under it.

    Terminology aside, I think we agree on several points, for instance that the principles or rules offered by the various ethical theories lack the full generality that their proponents claim. I would also maintain that there is no general higher-level principle that can adjudicate between conflicting rules of thumb.

    I also agree that the issues involved in building machines that show the kind of subtle moral judgment we are capable of are not limited to moral judgment. (Which is one of the reasons why I think that AI enthusiasts, for all their huffing and puffing, are overly optimistic about what it’s going to take to produce artificial general intelligence.)

    We part company, however, when you claim that we have absolutely no idea how to produce machines with the kind of subtle judgment required, and suggest that we should not even be trying. If you are right that we have absolutely no clue, then the claim that we should try would just be an expression of a science fiction hope — something I’ve been at pains to avoid. But I have neither such a pessimistic view of what we know about human judgment, nor such an all-or-nothing view of what we want and can expect with current or imminent technology. Even if you are right that we shouldn’t be trying for full moral agency, it would take a much more detailed case about the potential risks of going ahead with something more limited to convince me that we shouldn’t even attempt it.

    Yours,
    Colin Allen

    PS Please see also pages 122-123 and 132 of the book where we discuss, approvingly, your idea about the affinity of particularism to connectionism.

  • MikkiT

    Although I’m greatly relieved to see this at least being discussed by someone, I think much of this discussion is moot, and more than just a tad naive. No “engineer”, let alone “philosopher”, is going to determine ANY of this in any meaningful way at all. As an engineer who has headed an engineering department for over thirty years, I can tell you for certain that these kinds of questions will NEVER be on the list of topics to be discussed in an engineering department, because what you are discussing here isn’t actually an engineering, or even a philosophical, problem. Like it or not, ALL of this (in the FAR from perfect world we actually inhabit) is eventually going to boil down to a discussion of CORPORATE LIABILITY and PUBLIC RELATIONS (God help us). Whether anyone likes it or not, in the real world, ALL of these decisions will be made by corporate attorneys and P.R. consultants, not engineers, and certainly not philosophers. In the real world, there are three, and ONLY three, questions that will EVER be discussed in a corporate environment, and those will be:
    1- how will the release of this product affect our liability exposure?
    2- how can that exposure be reduced, mitigated, or eliminated in a cost-effective manner?
    3- how will public perception of our position on this issue affect sales?
    The relevant question here isn’t about morality, or machines, or whether or not machines require the ability to make morally acceptable decisions about the responses they choose to make within the environments they inhabit. The only practically applicable question here is – how will THE LAW react and adapt to these issues as they come before various courts over time?

    I’d really like to hear from a few judges and attorneys on how THEY see these issues, because in the end, they’re the ones who are actually going to determine the framework within which any and all engineering decisions will ultimately be made.

  • Jonathan Dancy

    Dear Colin
    Thanks for this interesting response. I could indeed discern, in what you wrote, traces of a particularistic stress on the complexity of the rational relations to which competent thinkers can respond; I just thought you hadn’t followed through on that. Clearly there is middle ground here.
    As for my thoughts about what we have no idea how to do, what I said was that we had no idea how to construct machines capable of following moral principles (in the right sort of way), not that we have no idea how to construct machines with the right sort of subtle judgement. My remark was about how to conceive of the task. With the wrong conception, we will be trying to achieve the impossible.
    Yours fraternally
    Jonathan

  • At issue is my recently issued United States Patent concerning ethical artificial intelligence entitled: Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence by John E. LaMuth – patent No. 6,587,846. As its title implies, this new breakthrough represents the world’s first affective language analyzer encompassing ethical/motivational behaviors, providing a convincing simulation of ethical artificial intelligence.
    This AI patent enables a computer to reason and speak employing ethical parameters, an innovation based upon a primary complement of instinctual behavioral terms (rewards-leniency-appetite-aversion). This elementary instinctual foundation, in turn, extends to a multi-level hierarchy of the traditional groupings of virtues, values, and ideals, collectively arranged as subsets within a hierarchy of metaperspectives – as depicted below.

    Solicitousness . Rewards ….. Submissiveness . Leniency
    Nostalgia . Worship ……… Guilt . Blame
    Glory . Prudence …………. Honor . Justice
    Providence . Faith ……….. Liberty . Hope
    Grace . Beauty …………. Free-will . Truth
    Tranquility . Ecstasy ………… Equality . Bliss

    Appetite . + Reinforcement …. Aversion . Neg. Reinforcement
    Desire . Approval ……….. Worry . Concern
    Dignity . Temperance ……….. Integrity . Fortitude
    Civility . Charity …………… Austerity . Decency
    Magnanimity . Goodness …………. Equanimity . Wisdom
    Love . Joy ……………… Peace . Harmony

    The systematic organization underlying this ethical hierarchy allows for extreme efficiency in programming, eliminating much of the associated redundancy and providing a precise determination of the motivational parameters at issue during a given verbal interchange. A similar pattern further extends to the contrasting behavioral paradigm of punishment, resulting in a parallel hierarchy of the major categories of the vices. Here rewards/leniency is withheld rather than bestowed, in response to actions judged not to be suitably solicitous or submissive (as depicted in the diagram below). This format contrasts point-for-point with the respective virtuous mode (the actual patent encompasses 320 individual terms).

    No Solicitousness . No Rewards ….. No Submissiveness . No Leniency
    Laziness . Treachery ….. Negligence . Vindictiveness
    Infamy . Insurgency ….. Dishonor . Vengeance
    Prodigality . Betrayal ….. Slavery . Despair
    Wrath . Ugliness ….. Tyranny . Hypocrisy
    Anger . Abomination ….. Prejudice . Perdition

    No Appetite . Punishment ….. No Aversion . – Punishment
    Apathy . Spite ….. Indifference . Malice
    Foolishness . Gluttony ….. Caprice . Cowardice
    Vulgarity . Avarice ….. Cruelty . Antagonism
    Oppression . Evil ….. Persecution . Cunning
    Hatred . Iniquity ….. Belligerence . Turpitude

    With such ethical safeguards firmly in place, the AI computer is formally prohibited from expressing the corresponding realm of the vices, allowing for a truly flawless simulation of virtue.

  • This is a complex issue.

    I agree with some aspects of what you are saying, and not others.

    It seems that both neuroscience and philosophy (Wittgenstein in particular) agree that what we deal with is not reality directly, but our internal subconsciously generated model of reality.

    Where we seem to differ is around the notion of meaning, or the understanding of understanding itself. We seem to agree that the notion “discrete symbols with fixed meanings” is not useful, but what next?

    It has seemed to me since 1974 that the key factor is the way in which minds relate information.

    It seems that minds use a technique that is analogous to the way that information is stored and retrieved in laser holograms. The really key thing about this is that it is the distributed nature of the storage (as an interference pattern) that allows for near-instantaneous recall and association. There is no requirement for complex indexing algorithms as are required in modern computer systems, as the association of data is determined by the context of recall, and as that context changes, so does what is recalled.
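
    A minimal numpy sketch of this kind of distributed, cue-driven storage, assuming the standard circular-convolution construction from Plate’s Holographic Reduced Representations (the same family of models as the Jones & Mewhort holographic lexicon cited later in this thread); the concept names and dimensionality are arbitrary placeholders:

    ```python
    # Two associations are superposed into ONE distributed trace, like an
    # interference pattern; recall depends entirely on the cue used to probe it.
    import numpy as np

    rng = np.random.default_rng(1)
    D = 2048                                          # dimensionality of the memory trace

    def concept():                                    # random vector standing for one concept
        return rng.normal(size=D) / np.sqrt(D)

    def bind(a, b):                                   # circular convolution (association)
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def probe(trace, cue):                            # circular correlation (approximate recall)
        return np.real(np.fft.ifft(np.conj(np.fft.fft(cue)) * np.fft.fft(trace)))

    color, shape, red, round_, apple, ball = (concept() for _ in range(6))

    trace = bind(color, red) + bind(shape, round_)    # no index is kept anywhere

    candidates = {"red": red, "round": round_, "apple": apple, "ball": ball}
    for name, cue in [("color", color), ("shape", shape)]:
        echo = probe(trace, cue)
        best = max(candidates, key=lambda k: echo @ candidates[k])
        print(name, "->", best)                       # expect: color -> red, shape -> round
    ```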

    It seems that it is this aspect that allows human beings to have instantaneous insights, and to sometimes maintain them for extended periods; yet the habits of mind, and the context of the reality in which we find ourselves embedded, usually manage to re-establish older patterns over time.

    Thus the observed pattern that individuals can achieve states of being, yet it takes a lot of work over time to change a state into a stage.

    Bringing all that to our understanding of AI and the risks of developing AI, those risks seem to be many.

    It seems to me that an AI will have to go through a developmental process that is similar in many aspects to how human beings develop.

    An AI is going to have to pass through a stage that is very similar to what happens to human teenagers, when they are still perceiving and evaluating within a very simple model (right/wrong, good/evil, or whatever binary system of evaluation they first accepted) and have yet to transition to a realm of evaluation that is based in infinite spectra of possibility, with complex multi-dimensional probability topologies for developing expectation functions about the future consequences of current actions.

    In us humans, all of those topologies are developed unconsciously.

    It seems that consciousness is always something on top of a vast subconscious morass of processes.

    It seems that consciously, all we can do is influence context, and this can have profound impacts all the way down and back up through the layers of systems of mind/body.

    Getting back to a developing AI, and the current context of humanity, it seems to me that we need to do two things before we bring AI into existence, if we wish to survive the experience.

    1/ First is to have in place systems that guarantee that every human being (no exceptions) has the basic requirements of survival, and the freedom to do whatever they responsibly choose (note the modifier “responsibly” – a very context-sensitive modifier). http://www.solnx.org still seems to be the most efficient way I have encountered of achieving that.

    2/ Secondly we need to move from having money as a societal goal to using it as a tool to enhance the human experience. Some major systemic issues with money are identified on my blog site http://www.tedhowardnz.wordpress.com/money

    Unless we achieve these two things, it seems to me that any “teenage” AI is going to (quite correctly) view humanity as the greatest threat to its existence. While the factors that MikkiT accurately identifies dominate our societal patterns, we are all in danger (not just AI, but all humanity).

    It seems to me quite technically possible to develop systems that support everyone, and AI, yet while money holds the sway it does in the decision making process, it seems highly improbable as an outcome.

    Money seems to be doing a good job of imploding on itself at present, which actually gives me great optimism for the future of humanity.

    As the myth of money becomes more and more obviously hollow, more minds will start to look beyond it.

  • Response to MikkiT

    Legal issues are already being addressed by some lawyers, although there’s rather more focus on the sci-fi issue of personhood than on the more pressing issue of liability. I believe you are right that this will initially be worked out in the courts, and some corporations will accept the liability because there are profits to be made even after losing a few cases. Google’s Peter Norvig has already raised the issue in the context of autonomously driven cars. He imagines that half the cars on U.S. highways are driven robotically and the death toll decreases from roughly forty-two thousand a year to thirty-one thousand a year, but asks whether the companies selling those cars will be rewarded or confronted with ten thousand lawsuits. Realistically, I think he is expecting lawsuits.

    We discussed legal liability in Moral Machines so I’m going to take the liberty of pasting pages 197-199 which give a flavor of the discussion. [Note that we use the somewhat ugly “(ro)bots” as shorthand for “robots and software bots”.]

    Responsibility, Liability, Agency, Rights, and Duties

    Autonomous (ro)bots aren’t going to attempt a global takeover any time soon. But they are already causing harm, real and perceived, and they will not always operate within ethical or legal guidelines. When they do cause harm, someone—or something—will need to be held responsible.

    If the accelerating pace of the digital age has taught only one lesson, it’s that laws lag behind technology. This has become apparent to many people who deal with the archaic copyright laws in the United States. Computers make copying and distributing information easy, whether or not the material has a copyright. Some (especially representatives of the book publishing, music, and movie industries) see this as necessitating better digital rights management schemes to enforce the rights that intellectual property owners currently have. But others argue that it is the asserted rights that are a broken relic of a bygone era, and those rights should be fixed not enforced.

    James Boyle, of the Duke Law School Center for the Study of the Public Domain, argues that long-term copyrights made sense when publishers invested heavily in expensive printing technology. They deserved a fair return on those investments. Now that digital reproduction and distribution have trivial costs, authors and the public would be better served, he argues, by releasing materials into the public domain, the “digital commons,” sooner than the present laws allow. Boyle’s approach has been to promote new copyright agreements—“copylefts”—that allow authors to select from a menu of specific rights that they may transfer to others who wish to reuse their work. But such agreements do nothing to unlock the vast repository of cultural wealth that has little commercial value but remains locked up by copyrights conceived in a different era.

    Just as copyright law has not kept up with the digital age, liability law is not going to keep up with challenges posed by increasingly autonomous artificial agents. Legal scholars will, of course, continue to react to technological developments. It was almost fifteen years after the Internet was opened to commercial interests before prominent law schools like Duke saw the need for centers to study legal issues in the digital context. Similarly, we predict that it will be perhaps another fifteen years before a major law school sees the need to start a Center for Law and Artificial Agents. Much harder than reacting, however, is the task of anticipating the legal developments that will require attention.

    Will there be a need for the (ro)bot equivalent of a Bill of Rights? (A Bill of (Ro)bot Lefts?) Both the European Parliament and the South Korean government have recently published position articles that suggest this may happen.

    Of more immediate concern than rights for (ro)bots are the existing product safety and liability laws. These will prove to be just as inadequate for ascribing responsibility for the actions of (ro)bots as copyright law has been for the Internet. For example, Helen Nissenbaum has emphasized in an article she published in 1996 that “many hands” play a role in creating the various components that make up complex automata. As systems become more complex, it is extremely difficult to establish blame when something does go wrong. How these components will interact in the face of new challenges and in new contexts cannot always be anticipated. The time and expense entailed in determining that relatively tiny O-rings were responsible for the 1986 Challenger disaster illustrates just how difficult it is to determine why complex systems fail. Increasing the autonomy of machines will make these problems even more difficult.

    For the near future, product safety laws will continue to be stretched to deal with artificial agents. Practical liability for illegal, irresponsible, and dangerous practices will be established by the courts first, and legislatures second. Intelligent machines will pose many new challenges to existing law. We predict that companies producing and utilizing intelligent machines will stress the difficulties in determining liability and encourage no-fault insurance policies. It may also be in their interests to promote a kind of independent legal status as agents for these machines (similar to that given corporations) as a means of limiting the financial and legal obligations of those who create and use them. In other words, a kind of de facto moral agency will be attributed to the systems long before they are capable of acting as fully intelligent autonomous systems. Many people, however, will resist the idea that artificial systems should ever be considered as moral agents because they take computers and robots to be essentially mindless.

    Throughout this book, we have argued that it doesn’t really matter whether artificial systems are genuine moral agents. The engineering objective remains the same: humans need advanced (ro)bots to act as much like moral agents as possible. All things considered, advanced automated systems that use moral criteria to rank different courses of action are preferable to ones that pay no attention to moral issues. It would be shortsighted and dangerous to dismiss the problem of how to design morally sensitive systems on the grounds that it’s not genuine moral agency.

    Still, a danger looms. By calling artificial systems moral agents, perhaps people will end up absolving the designers, programmers, and users of AMAs of their proper moral responsibilities. Calling a machine a moral agent might tempt one to pass the buck when something goes wrong.

    This is a serious issue, but the slope is not quite as slippery as one might think. Discussion about the assignment of blame and responsibility goes on even when a person is acting as an agent for another. To take an extreme case, if you hire a contract killer, it is no defense to say that the person you hired should have applied his own ethical standards and therefore you bear no responsibility for the murder. Even in less extreme cases, the agency of those who work for you or with you does not automatically absolve you of moral responsibility for their actions. Likewise, we see no justification for the view that attributing moral agency to complex artifacts should provide an easy way to deny responsibility for their actions.

    So for the immediate practical purposes of designing and assigning responsibility for harms (software engineering and social engineering), we think that not very much hangs on whether robots and software agents really are moral agents. Nevertheless, it can still be instructive to look at the philosophical arguments about genuine moral agency, to see whether they provide clues to anticipating and dealing with the legal and political issues that will arise for autonomous (ro)bots.

    -CA

  • 2nd Reply to Jonathan Dancy

    Dear Jonathan,

    Thanks for clearing up my misunderstanding. You wrote:

    what I said was that we had no idea how to construct machines capable of following moral principles (in the right sort of way), not that we have no idea how to construct machines with the right sort of subtle judgement. My remark was about how to conceive of the task. With the wrong conception, we will be trying to achieve the impossible.

    I definitely mistook the point of your original final paragraph.

    I would say, however, that it’s an interesting practical and empirical question how far one might get with a more rule-bound system. As you say, the problem is not restricted to morality. By way of analogy, the game of “Go” has proven to be a hard nut to crack — much harder than chess — and although the best programs aren’t yet at the level of world-class human players, they nevertheless play at an advanced amateur level. It’s an open question whether the subtle judgements of the best human players can be matched by currently available AI methods — with the wrong conception of the task, as you say, it may be impossible. But even so, one might be able to get a long way towards something serviceable with methods that won’t get us all the way to human level performance.

    Susan and Michael Anderson have described a method of using machine reasoning to infer, from human judgements about medical ethics problems, how ethical experts weigh conflicts between the (Ross/Beauchamp and Childress) duties commonly used to evaluate such cases. As discussed in Moral Machines, I have several concerns about their specific approach, including the notion of expertise that they rely on and the rather thin representation of the dilemmas that they use. Nevertheless, it seems to me to represent an interesting approach to developing a computational method that could approximate the more subtle moral assessments that people make.
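
    As a very rough illustration of that general idea only (emphatically not the Andersons’ actual system, which uses a different learning method; the duty names, case encodings, and verdicts below are invented), one can already recover a crude weighting of duties from a handful of expert-labelled cases:

    ```python
    # Infer how prima facie duties trade off against each other from expert
    # judgments about concrete two-option dilemmas (illustrative numbers only).
    import numpy as np

    DUTIES = ["autonomy", "beneficence", "nonmaleficence"]

    # Each case: how strongly two candidate actions satisfy (+) or violate (-)
    # each duty, plus an expert verdict on which action is ethically preferable.
    cases = [
        (np.array([+2, -1,  0]), np.array([-1, +1,  0]), True),   # expert prefers A
        (np.array([+1, -2, -1]), np.array([-1, +2, +1]), False),  # expert prefers B
        (np.array([+2,  0, -1]), np.array([-2, +1, +1]), True),
    ]

    # Perceptron over difference vectors: the expert prefers A exactly when w.(A-B) > 0.
    w = np.zeros(len(DUTIES))
    for _ in range(100):
        for a, b, prefers_a in cases:
            d = a - b
            if (w @ d > 0) != prefers_a:
                w += d if prefers_a else -d

    print(dict(zip(DUTIES, w.tolist())))   # learned relative weights of the duties

    # Apply the learned weighting to a new (hypothetical) dilemma.
    new_a, new_b = np.array([+1, -1, 0]), np.array([0, +1, -2])
    print("recommend A" if w @ (new_a - new_b) > 0 else "recommend B")
    ```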

    – Colin

    Ref:
    Anderson, M., Anderson, S., and Armen, C. (2006a), “An Approach to Computing Ethics,” IEEE Intelligent Systems, Vol. 21, no. 4.

  • To Ted Howard

    You may find that this paper by my IU colleague, Michael Jones, provides a concrete step towards the holographic theory of mental representation that you gestured towards:

    Jones, M. N., & Mewhort, D. J. K. (2007) Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114, 1-37.

    Cognitive scientists would generally agree with you that almost all of the interesting processes are not accessible to consciousness.

    Yours,
    Colin Allen

  • Thanks for that Colin – have a copy, have scanned and will read tomorrow.

    I think I need to integrate this with Jeff Hawkins’ work on modeling the neocortex.
    Then I might finally have a solid structure for a 38-year-old intuition.

    I find Allen’s overall picture of “mindless moral machines” convincing. Rather than finding his views about morality preposterous, horrendous, trivial, or unimaginatively human-centred – to quote some of the negative reactions of his critics – I agree with Allen that robots should be considered a new source of moral agency, even though such machines lack the preconditions for attributing to them any kind of moral responsibility. All in all, we have to distinguish between the evaluation of actions as morally good or evil and the evaluation of agents as morally responsible for a certain behaviour. Whereas, in Moral Machines (2009), Allen and his co-author Wendell Wallach affirm that “the quest to build machines that are capable of telling right from wrong has begun,” I reckon that this distinction between the moral accountability of robots, that is, the evaluation of robotic actions, and their moral responsibility is similarly at work. In other words, we may consider that the behaviour of, say, a robot soldier violating the principle of proportionality is “bad” and, conversely, that the actions of underwater unmanned vehicles (UUVs) undertaking repairs to oil rigs in the Caribbean Sea are “good.” Yet it would be meaningless to praise or blame such robots for their behaviour.

    On the basis of this differentiation, we can address Allen’s (and Wallach’s) further thesis that there is a responsibility of “teaching robots right from wrong” so as to keep the goals and risks of the behaviour of such “mindless moral machines” within limits that people can find acceptable. This aim is pursued by means of design. In the words of Allen, “the different kinds of rigor provided by philosophers and engineers are both needed to inform the construction of machines that, when embedded in well-designed systems of human-machine interaction, produce morally reasonable decisions.” This approach is not new, of course. Aside from Herbert Simon’s seminal remarks in The Sciences of the Artificial (1969), it suffices to mention Ronald Arkin’s Governing Lethal Behaviour (2007) and the aim of designing robot soldiers ethically. Coming back to Allen’s idea of embedding morally reasonable decisions in well-designed systems of human-machine interaction, however, there are three points that should be stressed.

    First, we have to pay attention to the mutual interaction between values and design: conflicts between values may impact on the very design of an artifact, according to what people find good or valuable. Vice versa, specific design choices may result in further conflicts between values, for example, when striking a balance between the different goals design can aim at. Therefore, if we take into account Arkin’s ideas on robot soldiers and, say, the “robot arms control” movement mentioned by Allen, would not this “well-designed” approach replicate the divergences we find in today’s debate between philosophers and policy makers? Whilst it is notoriously difficult to solve conflicts of values, how can this “well-designed” approach prevent such conflicts?

    Secondly, as far as I understand, the responsibility of “teaching robots right from wrong” primarily concerns designers and manufacturers, rather than users of these machines. Nevertheless, there is a further reason why we should take these “mindless moral machines” seriously. Indeed, we are dealing with expert systems that gain knowledge and skills from their own behaviour and, furthermore, that learn from the features of the environment and from the living beings who inhabit it. This means that such machines will respond to stimuli by changing the values of their properties or inner states and, moreover, that they will modify these states without external stimuli while improving the rules through which those very states change. The result is that the same model of machine will behave quite differently after just a few days or weeks. Accordingly, the main moral issue seems to revolve more around how we educate, treat, or manage our autonomous machines than around who designs, builds, or sells them.

    Finally, let me rephrase this latter distinction in legal terms. When responsibility has to do with the design and construction of robots, rather than the ways robots are employed, the focus is on whether such technological applications are “capable of substantial noninfringing uses” (in the phrasing of the US Supreme Court in some of its key technology decisions, such as the 1984 Betamax case or Grokster from 2005). Vice versa, when responsibility concerns the use of robots, rather than their design and production, the focus is on obligations between private individuals imposed by the government so as to compensate for damage done by wrongdoing. This distinction sheds light on the twofold characterization of today’s “mindless moral machines.” On one hand, the current state of the law deems robots legally and morally without responsibility, because these machines lack the preconditions, such as consciousness or free will, for attributing liability to a party within the realm of criminal law. On the other hand, we are dealing with a new source of moral agency, as robots can take actions that are morally qualifiable as good or evil, as much as animals, children and, obviously, adults can. The level of autonomy that is insufficient to bring robots before judges and have them found guilty by criminal courts arguably is sufficient to have relevant effects in other fields of the law.

    To be sure, I am not suggesting that the law will either solve the problems of human ethics or provide the gold standard for machines. Rather, robots are producing a number of loopholes in today’s legal systems. For example, in their 2010 reports to the UN General Assembly, Philip Alston and Christof Heyns have stressed that legal provisions are silent on two key points. Not only is analogy inadequate to determine whether certain types of “autonomous weapons” should be considered unlawful as such, but the set of parameters and conditions that should regulate the use of these machines, in accordance with the principle of discrimination and immunity in the laws of war, is also far from clear. Likewise, in tort law, this is the first time ever that legal systems will hold humans responsible for what an artificial state-transition system “decides” to do. Furthermore, in the law of contracts, it is likely that the increasing autonomy of these machines will induce new forms of accountability for some types of robots. What these cases have in common can be summed up with the wording of Herbert Hart’s The Concept of Law (1961: 128): “There is no possibility of treating the question raised by the various cases as if there were one uniquely correct answer to be found, as distinct from an answer which is a reasonable compromise between many conflicting interests.”

    Therefore, what I propose is not the substitution of “the different kinds of rigor provided by philosophers and engineers” with a new type of rigor – that of the law. Rather, what I suggest here is that we take up Allen’s “challenge” of reconciling such different kinds of rigor in the light of today’s hard cases of the law. No sci-fi is needed to admit that robots are already affecting basic tenets of human interaction, and I regard this as a crucial test for any possible reconciliation.

    • Very interesting post, Ugo, thanks. Your commentary also seems to be just the kind of legal perspective MikkiT was hoping for.

      You asked:

      would not this “well-designed” approach replicate the divergences we find in today’s debate between philosophers and policy makers? Whilst it is notoriously difficult to solve conflicts of values, how can this “well-designed” approach prevent such conflicts?

      These are good questions, and I’m not sure that the conflicts can be prevented. What I was trying to acknowledge with the phrase “well-designed systems of human-machine interaction” was the criticism, raised by several people, that Wallach and I focused too exclusively in the book on loading up technological artifacts with various capabilities, without paying enough attention to the design of the larger systems in which these artifacts are embedded. You are quite right to suggest that even when the whole system is taken into account, we won’t find easy consensus about what values should be upheld. For instance, a “black box” system for recording an owner’s interactions with an autonomous car would likely upset privacy advocates and no doubt involve conflicts with other values.

      Well, maybe you intended those questions rhetorically, but I wanted at least to acknowledge the points you raised.

      – CA

  • The fundamental aspect of machine morality missing in Allen’s work is ironically moral philosophy.  The problem is not just whether machines are capable of behaving in sufficiently sophisticated ways to be moral, but whether our moral code should be extended to consider machines to be agents or patients of moral actions.

    In my work on AI ethics I start from the perspective that morality is a form of social behaviour which co-evolves with our society. In other species social behaviour is clearly regulated biologically. Bacteria produce a wide range of public goods, such as shelter and food, and there are bacteria “altruists” who over-produce these goods and bacteria “free riders” that under-produce. Why? Possibly because the right amount of public good to produce depends on the environment, and this changes faster than evolution can fine-tune propensities for production, so mixed populations of strategies have the best chance of being efficient.

    Frans de Waal and others have done fantastic work on the evolution of morality, but I will skip now directly to humans. Clearly our society and its problems are also changing far too fast for biological evolution to generate mandatory social norms that will best benefit our inclusive fitness. Human morality is a weird mix of biological predispositions (shared with other primates), culturally-evolved norms including religious edicts, explicit legislation & individual reasoning. For a long time humanity has found it useful to pretend this concoction reflects eternal truths handed to us on tablets (or whatever) by supernatural beings, but now our problems and contexts are moving fast enough that the explicit, deliberate part of this process needs to come further to the fore.

    Ask not then what we must do for machines; ask what we would have ourselves do for them. In my own writings on robot ethics, I have suggested that 1) it is unethical (not impossible) to make machines to which we are obliged. You can easily make a robot into a unique object of art that is irreplaceable, but why? Wouldn’t it be better to mass-produce the bodies of any robot that someone might get attached to, and then constantly back up its learned experience to secure storage by wi-fi so it can always be “reproduced” if lost? Similarly, 2) there should always be a human party legally and morally responsible for the actions of any robot.

    Rules such as the above have been formalised by the research council in charge of robotics and artificial intelligence in the United Kingdom. You can read the EPSRC’s Principles of Robotics online. They were produced by a large group of experts brought together for the purpose of addressing Allen’s concerns, and have been available online since April 2010. I have not yet read Allen’s response to or acknowledgement of those principles (if I’ve missed it, I’m sorry; please provide a link), but I’d like to.

    I am also co-organising a scientific meeting on this topic (also in the UK) — please see our Call for Papers here: The Machine Question: AI, Ethics & Moral Responsibility. (Abstracts due 1 February 2012.)

  • Dear Colin Allen,

    I sympathize with your call to think about machine morality in the ‘middle space’, with the idea that this requires both philosophy and engineering, and with your suggestion that this may not only contribute to better machines but also to better moral philosophy.

    However, as the previous comments also show, I wonder whether the project of machine morality can really limit itself to the topic of ‘functional morality’ and avoid the larger philosophical questions about ‘full’ morality. To conceive of a morality that is ‘sensitive to ethically relevant features of (…) situations’ indeed means – as Jonathan Dancy suggests – thinking about what it would be wise to do, and this kind of responsiveness seems to be the domain of human moral judgment, which is not limited to the application of principles.

    Many philosophers think such judgment requires all kinds of abilities robots do not have, such as (self-)consciousness. In this respect, it must be noted that the issue of emotions has not been mentioned yet (neither in your essay nor in this discussion so far – with the exception of a remark on regret). Emotions seem to be a necessary part of what morality is and does – whatever else it is and does. If this is right, then we need humans in the loop, and preferably humans that have a lot of wisdom and moral sensitivity.

    Having humans in the loop also seems to be your view when you talk about humans taking responsibility in the drunken driver case. But then where’s the need for machines to have a degree of morality that goes further than operational morality, say a speed limiter or a device that stops you from drunken driving? You could reply that a higher degree of morality, functional morality, is necessary if we delegate more tasks and give more autonomy to robots. But the ‘if’ is crucial here: we could judge that this creates too wide a gap between the scope of their action and autonomy, on the one hand, and our (human) ability to control and evaluate what’s going on, on the other hand.

    A further problem with the idea of functional morality is that we would create robots that appear to be ‘moral’ but that (1) often fail to act in morally acceptable and wise ways and (2) in this way deceive us. I think the issue of deception by robots is, by itself, not as problematic as some may think it is, but if the robot fakes being a moral agent we create a particular danger: we have a problem when people rely on – indeed trust – what they perceive as ‘moral machines’ but which have a much lower degree of morality.

    Again, this problem can be avoided by limiting ourselves to operational morality (and indeed to the questions of who should decide what machines make us do or prevent us from doing, of what our machines should make us do or prevent us from doing, of whether some kind of robot-mediated paternalism is justified, etc.).

    Finally, science-fiction can be a tool for philosophers to think about morality and other things, and in this way indirectly improve our understanding of the current world. In addition, it can guide engineers and others by showing what kind of future we (do not) want.

  • This has been a fascinating discussion so far, thank you Colin for your insights into this issue. As you know, I have been a longtime proponent of taking the notion of artificial moral agency seriously. If you will briefly indulge me the conceit of quoting myself, the way I see it is that there are three possibilities when we attempt to ascribe moral agency to robots.
    “The first possibility is that the morality of the situation is just an illusion. We fallaciously ascribe moral rights and responsibilities to the machine due to an error in judgment based merely on the humanoid appearance or clever programming of the robot. The second option is that the situation is pseudo-moral. That is, it is partially moral but the robotic agents involved lack something that would make them fully moral agents. And finally, even though these situations may be novel, they are nonetheless real moral situations that must be taken seriously. In this paper I will argue for this latter position as well as critique the positions taken.”
    (When is a Robot a Moral Agent (2006), http://www.i-r-i-e.net/inhalt/006/006_Sullins.pdf)

    Many of the commentators so far seem to be arguing for option one, and Mark Coeckelbergh rightly reminds us that we could intentionally design such machines to fool ourselves and others, if we are not careful. But if I am reading you correctly, then I believe you are arguing for the second option, namely, that robots such as the existing self-driving cars are technologies with great potential moral impact that must be properly attended to, but that they are not artificial moral agents. Additionally, you seem to be pessimistic about the possibility of full artificial moral agents, though you do not rule them out completely. I hope I am correct in my assessment of your position. If I am, then I would like to respectfully push for taking the third option more seriously on certain occasions.

    If moral agency requires full autonomy, free will, consciousness, and access to all relevant information regarding a moral choice, then I would have to concede that no robot could ever be a moral agent. But every one of these conditions is philosophically suspect; there is no guarantee that they obtain even in humans. If they are necessary conditions for moral agency, then we are left with the uncomfortable conclusion that we are not moral agents either, which would be absurd. If we relax these conditions a bit, then a robot with the right amount of autonomy and behavioral intentionality, in a situation where it was responsible for making a moral decision based on the best available information, would have to be judged a moral agent.

  • Thanks to all the commentators for what has generally been a stimulating discussion. Here are my closing remarks:

    Joanna Bryson calls me to task for having failed to include any moral philosophy in my work, because (she complains) I don’t address whether “our moral code should be extended to consider machines to be agents or patients of moral actions”. But that involves exactly the kind of sci fi that I want to avoid. I confidently assert that within the 5-10 year window that I have tried to remain focused on, we will have no machines that are moral patients — nothing we should have the least qualms about turning off, for instance — nor any that are moral agents in the sense Bryson seems to intend. I believe that a more restricted kind of “functional moral agency” for machines, consisting of some moral-evaluation capacities, can be implemented (see also my reply to Mark Coeckelbergh, below). And, like it or not, the experiment of implementing some form of functional morality in machines has begun. Is my stance tantamount to excluding moral philosophy? Evidently I haven’t included the parts that Bryson wants me to include, but it strikes me as hyperbole to say that there’s no moral philosophy at all involved in discussing which aspects of moral theory or moral psychology can be modeled or implemented computationally.
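
    To give a rough flavor of what a “moral-evaluation capacity” might look like in code, consider a layer that weighs competing prima facie considerations rather than enforcing a single fixed constraint. What follows is only a toy sketch under arbitrary assumptions: the duty names, weights, and scores are hypothetical illustrations, not a summary of any published proposal.

        # Toy sketch of a "functional" moral-evaluation layer. All duty names,
        # weights, and scores below are hypothetical illustrations.

        from dataclasses import dataclass

        @dataclass
        class Option:
            name: str
            harm_avoided: float        # estimated degree, 0..1
            autonomy_respected: float  # estimated degree, 0..1
            benefit: float             # estimated degree, 0..1

        WEIGHTS = {"harm_avoided": 0.5, "autonomy_respected": 0.3, "benefit": 0.2}

        def moral_score(opt: Option) -> float:
            """Weighted sum over the considerations; higher counts as 'more acceptable'."""
            return (WEIGHTS["harm_avoided"] * opt.harm_avoided
                    + WEIGHTS["autonomy_respected"] * opt.autonomy_respected
                    + WEIGHTS["benefit"] * opt.benefit)

        def choose(options: list[Option]) -> Option:
            """Pick the highest-scoring option; ties are resolved arbitrarily."""
            return max(options, key=moral_score)

        best = choose([
            Option("proceed", harm_avoided=0.2, autonomy_respected=0.9, benefit=0.7),
            Option("stop_and_ask", harm_avoided=0.9, autonomy_respected=0.6, benefit=0.4),
        ])
        print(best.name)  # stop_and_ask

    Such a system evaluates and selects among options, but that is a far cry from full moral agency, which is precisely why “functional” is the right label for it.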

    As for the EPSRC, I can easily explain my failure to acknowledge their principles of robotics: I was simply unaware of them until Bryson’s post. It shows how wide the Atlantic divide sometimes is that an American in Britain knows more about something in the UK than a Brit in the U.S. For what it’s worth, though, a Google News search for “EPSRC Principles of robotics” yields exactly no results, so I guess the principles didn’t make a big splash on either side of the pond.

    But why might these proposals have sunk without trace? My guess is that it’s because the recommendations wobble between being too anodyne and too “establishment”. Take the first recommended principle: “Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.” Many, including the members of the ICRAC linked to in my article, would take exception to the implication that lethal military robots are okay — although it certainly suits the goals of the usual powers that be to allow them. (The phrase “national security” is even more ominous — does it permit terrorist-assassination robots, for example?) At the same time, once such robots are available for national-security uses, any ban on designing lethal robots for private or mercenary purposes seems beside the point — the lethal machines are already designed, and it is their deployment for other uses that needs to be regulated. Or take principle 3: “Robots are products. They should be designed using processes which assure their safety and security.” Well, of course. Toasters should also be designed using processes that assure their safety and security. But I don’t see how this addresses anyone’s concerns.

    Mark Coeckelbergh wonders whether I really can restrict my attention to functional morality. He thinks that any functionally moral system would be such a pale shadow of the real thing that it demands we keep a human in the loop. He also worries that such systems would mislead us by appearing to have capacities that they lack. To the first point, I would ask why he thinks we will insist on keeping humans in the loop for functionally moral systems when we don’t even keep humans in the loop for systems that have, at best, a kind of rudimentary operational morality built in, if they implement any kind of moral judgment at all. Although I may have seemed to endorse “in-the-loop” solutions with the example of the autonomously driving car, I think this has limited application. It is fairly obvious which human should be in the loop so long as there is at least one passenger in the vehicle, and it is economically feasible to keep it that way. But for other technological systems, including autonomous cars used for driverless goods deliveries, it won’t be so obvious who should be in the loop, or whether the marketplace will drive out more expensive supervised systems. To the second point: yes, we may be misled, but I suspect this won’t cause as much chagrin as Coeckelbergh might imagine. Nor, by the way, do I think that even relatively simple machines will always do worse than people in areas of moral significance. See, for example, the article that describes the use of software to make treatment decisions for incapacitated patients (also discussed in Moral Machines).

    Also, to Coeckelbergh: I don’t doubt that science fiction can be a useful tool for philosophers to think about various issues — or for engineers to find inspiration, for that matter. I do, however, think we miss something by always jumping to the sci fi and missing what’s right in front of us. I was at a meeting a couple of years ago where some applied ethicists were lamenting the fact that they had completely missed the ethical issues raised by the multiple-embryo implantation procedure followed by the infamous “octomom” because they had all been focused on the (still) science fiction issue of human cloning. I think the relationship of functional morality to the issues of robot rights and responsibility is similar. We must not lose sight of what’s just around the corner, or even here already, because we are very excited about something that lies well over the horizon. Of course, we (collectively) should think about both. I’m just trying to redress the balance toward what I see as the more immediate concerns.

    I think I agree with John Sullins’ third option, even though he pegs me with the second. The systems I have my eye on are not full moral agents; nevertheless, they create real moral situations. I don’t go so far as to say that systems with functional morality are not artificial moral agents — in fact, Wallach and I use that term (abbreviated as “AMA”) throughout our book to encompass both this kind of system and (perhaps unachievable, certainly far-distant) full-blown AMAs that are human-equivalent. I agree that full-blown (Kantian?) autonomy and free will may not be conditions of our own moral agency. Consciousness is too ambiguous a term to treat fully here and now, but suffice it to say that certain elements of what we call consciousness are as undeniable as moral agency, while others — “qualia”, for example — are much more mysterious and certainly too philosophically contentious to be considered criterial for moral agency.

    Finally, to Monica Anderson and John LaMuth who proposed their own elaborate theories of how to build AMAs, I think these ideas will best be tested in the twin crucibles of actual software engineering and peer review. And to Art Allen (no relation), who thought I fell off the “FlapDoodle cliff”, perhaps I should have made it clearer that the sentence that provoked this reaction was my reporting what people have said to me, not what I actually think. But as I showed by misinterpreting one of Jonathan Dancy’s remarks in this forum, I can’t claim that I always get what people are saying right, either.

    — CA

  • This conversation, while ending here, continues on Facebook. Join us there by logging on to your Facebook account and proceeding to our group: On the Human.