Comments on: The Future of Moral Machines
http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/
A project of the National Humanities Center

By: Jason King http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8910 Mon, 09 Jan 2012 13:04:58 +0000

This conversation, while ending here, continues on Facebook. Join us there by logging on to your Facebook account and proceeding to our group: On the Human.

By: Colin Allen http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8909 Mon, 09 Jan 2012 07:56:52 +0000

Thanks to all the commentators for what has generally been a stimulating discussion. Here are my closing remarks:

Joanna Bryson takes me to task for having failed to include any moral philosophy in my work, because (she complains) I don’t address whether “our moral code should be extended to consider machines to be agents or patients of moral actions”. But that involves exactly the kind of sci-fi that I want to avoid. I confidently assert that within the 5-10 year window that I have tried to remain focused on, we will have no machines that are either moral patients — nothing that we should have the least qualms about turning off, for instance — or, in the sense Bryson seems to intend it, moral agents. I believe that a more restricted kind of “functional moral agency” for machines, consisting of some moral-evaluation capacities, can be implemented (see also my reply to Mark Coeckelbergh, below). And, like it or not, the experiment of implementing some form of functional morality in machines has begun. Is my stance tantamount to excluding moral philosophy? Evidently I haven’t included the parts that Bryson wants me to include, but it strikes me as hyperbole to say that there’s no moral philosophy at all involved in discussing which aspects of moral theory or moral psychology can be modeled or implemented computationally.

As for the EPSRC, I can very easily explain my failure to acknowledge their principles of robotics by simply saying that I was unaware of them until Bryson’s post. It shows how big the Atlantic divide sometimes is that an American in Britain knows more about something in the UK than a Brit in the U.S. For what it’s worth, though, a Google news search for “EPSRC Principles of robotics” yields exactly no results, so I guess the principles didn’t make a big splash on either side of the pond.

But why might these proposals have sunk without trace? My guess is that it’s because all the recommendations wobble between being too anodyne and being too “establishment”. Take the first recommended principle: “Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.” Many, including the members of the ICRAC linked to in my article, would take exception to the assertion that military lethal robots are okay — although it certainly fits the goals of the usual powers that be to allow them. (The phrase “national security” is even more ominous — does it allow terrorist-assassination robots, for example?) At the same time, once such robots are available for national security uses, any ban on designing lethal robots for private or mercenary purposes seems beside the point — the lethal machines are already designed, and it’s their deployment for other uses that needs to be regulated. Or take principle 3: “Robots are products. They should be designed using processes which assure their safety and security.” Well, of course. Toasters should also be designed using processes that assure their safety and security. But I don’t see how this addresses anyone’s concerns.

Mark Coeckelbergh wonders whether I really can restrict my attention to functional morality. He thinks that any functionally moral system would be such a pale shadow of the real thing that it demands that we keep a human in the loop. He also worries that such systems would potentially mislead us by appearing to have capacities that they lack. To the first point, I would ask why he thinks that we will insist on keeping humans in the loop for functionally moral systems when we don’t even have them in the loop for systems that have at best a kind of rudimentary operational morality built in, if they implement any kind of moral judgment at all. Although I may have seemed to endorse “in-the-loop” solutions with the example of the autonomously driving car, I think this has limited application. It’s fairly obvious which human should be in the loop so long as there is at least one passenger in the vehicle, and it is economically feasible to keep it that way. But for other technological systems, including autonomous cars used for driverless goods deliveries, it won’t be so obvious who should be in the loop, or whether the marketplace will drive out more expensive supervised systems. To the second point, yes, we may be misled, but I suspect this won’t cause as much chagrin as Coeckelbergh might imagine. Nor, by the way, do I think that even relatively simple machines will always do worse than people in areas of moral significance. See, for example, this article that describes the use of software to make treatment decisions for incapacitated patients (also discussed in Moral Machines).

Also, to Coeckelbergh: I don’t doubt that science fiction can be a useful tool for philosophers to think about various issues — or for engineers to find inspiration, for that matter. I do, however, think we miss something by always jumping to the sci-fi and missing what’s right in front of us. I was at a meeting a couple of years ago where some applied ethicists were lamenting the fact that they had completely missed the ethical issues raised by the multiple embryo implantation procedure followed by the infamous “octomom” because they had all been focused on the (still) science fiction issue of human cloning. I think that the relationship of functional morality to the issues of robot rights and responsibility is similar. We must not lose sight of what’s just around the corner, or even here already, because we are so excited about something that lies well over the horizon. Of course, we (collectively) should think about both. I’m just trying to redress the balance toward what I see as the more immediate concerns.

I think I agree with John Sullins’ third option, even though he pegs me with the second. The systems I have my eye on are not full moral agents; nevertheless, they create real moral situations. I don’t go as far as to say that systems with functional morality are not artificial moral agents — in fact, Wallach and I use that term (abbreviated as “AMA”) throughout our book to encompass both this kind of system and (perhaps unachievable, certainly far-distant) full-blown AMAs that are human-equivalent. I agree that full-blown (Kantian?) autonomy and free will may not be conditions of our own moral agency. Consciousness is too ambiguous a term to treat fully here and now, but suffice it to say that I think that certain elements of what we call consciousness are as undeniable as moral agency, while others — “qualia”, for example — are much more mysterious and certainly too philosophically contentious to be considered criterial for moral agency.

Finally, to Monica Anderson and John LaMuth, who proposed their own elaborate theories of how to build AMAs: I think these ideas will best be tested in the twin crucibles of actual software engineering and peer review. And to Art Allen (no relation), who thought I fell off the “FlapDoodle cliff”: perhaps I should have made it clearer that the sentence that provoked this reaction was my reporting of what people have said to me, not what I actually think. But as I showed by misinterpreting one of Jonathan Dancy’s remarks in this forum, I can’t claim that I always get what people are saying right, either.

— CA

By: John P. Sullins http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8900 Thu, 05 Jan 2012 00:28:19 +0000

This has been a fascinating discussion so far; thank you, Colin, for your insights into this issue. As you know, I have been a longtime proponent of taking the notion of artificial moral agency seriously. If you will briefly indulge me the conceit of quoting myself, the way I see it is that there are three possibilities when we attempt to ascribe moral agency to robots.
“The first possibility is that the morality of the situation is just an illusion. We fallaciously ascribe moral rights and responsibilities to the machine due to an error in judgment based merely on the humanoid appearance or clever programming of the robot. The second option is that the situation is pseudo-moral. That is, it is partially moral but the robotic agents involved lack something that would make them fully moral agents. And finally, even though these situations may be novel, they are nonetheless real moral situations that must be taken seriously. In this paper I will argue for this latter position as well as critique the positions taken.”
(“When Is a Robot a Moral Agent?” (2006), http://www.i-r-i-e.net/inhalt/006/006_Sullins.pdf)

Many of the commentators so far seem to be arguing for option one, and Mark Coeckelbergh rightly reminds us that we could intentionally design such machines to fool ourselves and others if we are not careful. But if I am reading you correctly, then I believe you are arguing for the second option, namely, that robots such as the existing self-driving cars are technologies with great potential moral impact that must be attended to properly, but that they are not artificial moral agents. Additionally, you seem to be pessimistic about the possibility of full artificial moral agents, though you do not rule them out completely. I hope I am correct in my assessment of your position. If I am, then I would like to respectfully push for taking the third option more seriously on certain occasions.

If moral agency requires full autonomy, free will, consciousness, and access to all relevant information regarding a moral choice, then I would have to concede that no robot could ever be a moral agent. But every one of these conditions is philosophically suspect. There is no guarantee that they exist even in humans. If they are the necessary conditions for moral agency, then we are left with the uncomfortable conclusion that we are not even moral agents ourselves, which would be absurd. If we relax these conditions a bit, then a robot with the right amount of autonomy and behavioral intentionality, in a situation where it was responsible for making a moral decision based on the best available information, would have to be judged a moral agent.

By: Mark Coeckelbergh http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8892 Tue, 03 Jan 2012 11:15:59 +0000

Dear Colin Allen,

I sympathize with your call to think about machine morality in the ‘middle space’, with the idea that this requires both philosophy and engineering, and with your suggestion that this may not only contribute to better machines but also to better moral philosophy.

However, as the previous comments also show, I wonder whether the project of machine morality can really limit itself to the topic of ‘functional morality’ and avoid the larger philosophical questions about ‘full’ morality. To conceive of a morality that is ‘sensitive to ethically relevant features of (…) situations’ indeed means – as Jonathan Dancy suggests – thinking about what it would be wise to do, and this kind of responsiveness seems to be the domain of human moral judgment, which is not limited to the application of principles.

Many philosophers think such judgment requires all kinds of abilities robots do not have, such as (self-)consciousness. In this respect, it must be noted that the issue of emotions has not yet been mentioned (neither in your essay nor in this discussion so far, with the exception of a remark on regret). Emotions seem to be a necessary part of what morality is and does – whatever else it is and does. If this is right, then we need humans in the loop, and preferably humans who have a lot of wisdom and moral sensitivity.

Having humans in the loop also seems to be your view when you talk about humans taking responsibility in the drunken-driver case. But then where is the need for machines to have a degree of morality that goes further than operational morality, say a speed limiter or a device that stops you from driving drunk? You could reply that a higher degree of morality, functional morality, is necessary if we delegate more tasks and give more autonomy to robots. But the ‘if’ is crucial here: we could judge that this creates too wide a gap between the scope of their action and autonomy, on the one hand, and our (human) ability to control and evaluate what’s going on, on the other.

A further problem with the idea of functional morality is that we would create robots that appear to be ‘moral’ but (1) often fail to act in morally acceptable and wise ways and (2) in this way deceive us. I think that by itself the issue of deception by robots is not as problematic as some may think it is, but if the robot fakes being a moral agent we create a particular danger: we have a problem when people rely on – indeed trust – what they perceive as ‘moral machines’ but which in fact have a much lower degree of morality.

Again, this problem can be avoided by limiting ourselves to operational morality (and indeed to the questions of who should decide what machines make us do or prevent us from doing, what our machines should make us do or prevent us from doing, and whether some kind of robot-mediated paternalism is justified, etc.).

Finally, science fiction can be a tool for philosophers to think about morality and other things, and in this way indirectly improve our understanding of the current world. In addition, it can guide engineers and others by showing what kind of future we (do not) want.

By: Colin Allen http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8890 Tue, 03 Jan 2012 03:54:08 +0000

Very interesting post, Ugo, thanks. Your commentary also seems to be just the kind of legal perspective MikkiT was hoping for.

You asked:

would not this “well-designed” approach replicate the divergences we find in today’s debate between philosophers and policy makers? Whilst it is notoriously difficult to solve conflicts of values, how can this “well-designed” approach prevent such conflicts?

These are good questions, and I’m not sure that the conflicts can be prevented. What I was trying to acknowledge with the phrase “well-designed systems of human-machine interaction” was the criticism, raised by several people, that Wallach and I focused too exclusively in the book on loading up technological artifacts with various capabilities without paying enough attention to the design of the larger systems in which these artifacts are embedded. You are quite right to suggest that even when the whole system is taken into account, we won’t find easy consensus about what values should be upheld. For instance, a “black box” system for recording an owner’s interactions with an autonomous car would likely upset privacy advocates and no doubt involve conflicts with other values.

Well, maybe you intended those questions rhetorically, but I wanted at least to acknowledge the points you raised.

– CA

By: Joanna Bryson http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8887 Mon, 02 Jan 2012 17:55:02 +0000

The fundamental aspect of machine morality missing in Allen’s work is, ironically, moral philosophy. The problem is not just whether machines are capable of behaving in sufficiently sophisticated ways to be moral, but whether our moral code should be extended to consider machines to be agents or patients of moral actions.

In my work on AI ethics I start from the perspective that morality is a form of social behaviour which co-evolves with our society. In other species social behaviour is clearly regulated biologically. Bacteria produce a wide range of public goods, such as shelter and food, and there are bacteria “altruists” who over-produce these goods and bacteria “free riders” that under-produce. Why? Possibly because the right amount of public good to produce depends on the environment, and this changes faster than evolution can fine-tune propensities for production, so mixed populations of strategies have the best chance of being efficient.

Frans de Waal and others have done fantastic work on the evolution of morality, but I will skip now directly to humans. Clearly our society and its problems are also changing far too fast for biological evolution to generate mandatory social norms that will best benefit our inclusive fitness. Human morality is a weird mix of biological predispositions (shared with other primates), culturally-evolved norms including religious edicts, explicit legislation, and individual reasoning. For a long time humanity has found it useful to pretend this concoction reflects eternal truths handed to us on tablets (or whatever) by supernatural beings, but now our problems and contexts are moving fast enough that the explicit, deliberate part of this process needs to come further to the fore.

Ask not then what we must do for machines; ask what we would have ourselves do for them. In my own writings on robot ethics, I have suggested that 1) it is unethical (not impossible) to make machines to which we are obliged. You can easily make a robot into a unique object of art that is irreplaceable, but why? Wouldn’t it be better to mass-produce the bodies of any robot that someone might get attached to, and then constantly back up its learned experience to secure storage by wi-fi so it can always be “reproduced” if lost? Similarly, 2) there should always be a human party legally and morally responsible for the actions of any robot.

Rules such as the above have been formalised by the research council in charge of robotics and artificial intelligence in the United Kingdom. You can read the EPSRC’s Principles of Robotics right there, online. Those were produced by a large group of experts brought together for the purpose of addressing Allen’s concerns, and have been available online since April 2010. I have not yet read Allen’s response to or acknowledgement of those principles (if I’ve missed it, I’m sorry; please provide a link), but I’d like to.

I am also co-organising a scientific meeting on this topic (also in the UK) — please see our Call for Papers here: The Machine Question: AI, Ethics & Moral Responsibility. (Abstracts due 1 February 2012.)

By: Ugo Pagallo http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8879 Fri, 30 Dec 2011 15:16:16 +0000

I find Allen’s overall picture of “mindless moral machines” convincing. Rather than finding his views about morality preposterous, horrendous, trivial, or unimaginatively human-centred – to quote some of the negative reactions of his critics – I agree with Allen that robots should be considered a new source of moral agency, even though such machines lack the set of preconditions for attributing any kind of moral responsibility to them. All in all, we have to distinguish between the evaluation of actions that are morally qualifiable as good and evil, and the evaluation of agents as being morally responsible for a certain behaviour. Whereas, in Moral Machines (2009), Allen and his co-author Wendell Wallach affirm that “the quest to build machines that are capable of telling right from wrong has begun,” I reckon that this distinction between the moral accountability of robots – that is, the evaluation of robotic actions – and their moral responsibility is similarly at work. In other words, we may consider that the behaviour of, say, a robot soldier violating the principle of proportionality is “bad” and, conversely, that the actions of underwater unmanned vehicles (UUVs) undertaking repairs to oil rigs in the Caribbean Sea are “good.” Yet it would be meaningless to praise or blame such robots for their behaviour.

On the basis of this differentiation, we can address Allen’s (and Wallach’s) further thesis that there is a responsibility of “teaching robots right from wrong” so as to keep the goals and risks of the behaviour of such “mindless moral machines” within limits that people can find acceptable. This aim is pursued by means of design. In the words of Allen, “the different kinds of rigor provided by philosophers and engineers are both needed to inform the construction of machines that, when embedded in well-designed systems of human-machine interaction, produce morally reasonable decisions.” This approach is not new, of course. Aside from Herbert Simon’s seminal remarks in The Sciences of the Artificial (1969), it suffices to mention Ronald Arkin’s Governing Lethal Behaviour (2007) and its aim of designing robot soldiers ethically. Coming back to Allen’s idea of embedding morally reasonable decisions in well-designed systems of human-machine interaction, however, there are three points that should be stressed.

First, we have to pay attention to the mutual interaction between values and design: conflicts between values may impact on the very design of an artifact, according to what people find good or valuable. Vice versa, specific design choices may result in further conflicts between values, for example, when striking a balance between the different goals design can aim at. Therefore, if we take into account Arkin’s ideas on robot soldiers and, say, the “robot arms control” movement mentioned by Allen, would not this “well-designed” approach replicate the divergences we find in today’s debate between philosophers and policy makers? Whilst it is notoriously difficult to solve conflicts of values, how can this “well-designed” approach prevent such conflicts?

Secondly, as far as I understand, the responsibility of “teaching robots right from wrong” primarily concerns designers and manufacturers, rather than users of these machines. Nevertheless, there is a further reason why we should take these “mindless moral machines” seriously. Indeed, we are dealing with expert systems that gain knowledge and skills from their own behaviour and, furthermore, that learn from the features of the environment and from the living beings who inhabit it. This means that such machines will respond to stimuli by changing the values of their properties or inner states and, moreover, that they will modify these states without external stimuli while improving the rules through which those very states change. The result is that the same model of machine will behave quite differently after just a few days or weeks. Accordingly, the main moral issue seems to revolve more around how we educate, treat, or manage our autonomous machines than around who designs, builds, or sells them.

Finally, let me rephrase this latter distinction in legal terms. When responsibility has to do with the design and construction of robots, rather than the ways robots are employed, the focus is on whether such technological applications are “capable of substantial non-infringing uses” (according to the phrasing of the US Supreme Court in some of its key technological decisions, such as the 1984 Betamax case or Grokster from 2005). Vice versa, when responsibility concerns the use of robots, rather than their design and production, the focus is on obligations between private individuals imposed by the government so as to compensate damage done by wrongdoing. This distinction sheds light on the twofold characterization of today’s “mindless moral machines.” On one hand, the current state of the law deems robots legally and morally without responsibility, because these machines lack the set of preconditions, such as consciousness or free will, for attributing liability to a party within the realm of criminal law. On the other hand, we are dealing with a new source of moral agency, as robots can take actions that are morally qualifiable as good or evil, as much as can animals, children and, obviously, adults. The level of autonomy, which is insufficient to bring robots before judges and have them found guilty by criminal courts, arguably is sufficient to have relevant effects in other fields of the law.

To be sure, I am not suggesting that the law will either solve the problems of human ethics or provide the gold standard for machines. Rather, robots are producing a number of loopholes in today’s legal systems. For example, in their 2010 reports to the UN General Assembly, Philip Alston and Christof Heyns have stressed that legal provisions are silent on two key points. Not only is analogy inadequate to determine whether certain types of “autonomous weapons” should be considered unlawful as such, but it is also far from clear what set of parameters and conditions should regulate the use of these machines in accordance with the principles of discrimination and immunity in the laws of war. Likewise, in tort law, this is the first time ever that legal systems will hold humans responsible for what an artificial state-transition system “decides” to do. Furthermore, in the law of contracts, it is likely that the increasing autonomy of these machines will induce new forms of accountability for some types of robots. What these cases have in common can be summed up with the wording of Herbert Hart’s The Concept of Law (1961: 128): “There is no possibility of treating the question raised by the various cases as if there were one uniquely correct answer to be found, as distinct from an answer which is a reasonable compromise between many conflicting interests.”

Therefore, what I propose is not the substitution of “the different kinds of rigor provided by philosophers and engineers” with a new type of rigor – that of the law. Rather, what I suggest here is to take up Allen’s “challenge” to reconcile such different kinds of rigor in the light of today’s hard cases of the law. Whereas no sci-fi is needed to admit that robots are already affecting basic tenets of human interaction, I regard this as a crucial test for any possible reconciliation.

By: Ted Howard http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8878 Fri, 30 Dec 2011 10:41:29 +0000

Thanks for that, Colin – I have a copy, have scanned it, and will read it tomorrow.

I think I need to integrate this with Jeff Hawkins’s work on modeling the neocortex.
Then I might finally have a solid structure for a 38-year-old intuition.

By: Colin Allen http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8876 Thu, 29 Dec 2011 05:34:21 +0000

To Ted Howard

You may find that this paper by my IU colleague, Michael Jones, provides a concrete step towards the holographic theory of mental representation that you gestured towards:

Jones, M. N., & Mewhort, D. J. K. (2007). Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114, 1-37.
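To give a concrete (and deliberately toy) sense of what “holographic” means here: composite holographic models of this general kind typically rest on circular convolution, an operation that compresses two high-dimensional vectors into a single vector of the same size, from which either component can later be approximately recovered. The following minimal sketch is only an illustration of that binding operation, not Jones’s actual model; the dimensionality and vector names are made up for the example.

```python
# Minimal sketch of circular-convolution "binding", the operation behind
# holographic reduced representations (illustration only, not the BEAGLE model).
import numpy as np

def bind(a, b):
    # Circular convolution, computed efficiently via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Circular correlation with 'a' approximately recovers 'b' from bind(a, b).
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

dim = 2048
rng = np.random.default_rng(0)
# Random "environmental" vectors standing in for two words.
dog, bites = rng.normal(0, 1 / np.sqrt(dim), size=(2, dim))

trace = bind(dog, bites)        # one composite trace encoding the pair
recovered = unbind(trace, dog)  # noisy reconstruction of 'bites'

# The recovered vector is far more similar to 'bites' than to 'dog'.
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(recovered, bites), cos(recovered, dog))
```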

Cognitive scientists would generally agree with you that almost all of the interesting processes are not accessible to consciousness.

Yours,
Colin Allen

By: Colin Allen http://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/comment-page-1/#comment-8875 Thu, 29 Dec 2011 05:22:20 +0000

2nd Reply to Jonathan Dancy

Dear Jonathan,

Thanks for clearing up my misunderstanding. You wrote:

what I said was that we had no idea how to construct machines capable of following moral principles (in the right sort of way), not that we have no idea how to construct machines with the right sort of subtle judgement. My remark was about how to conceive of the task. With the wrong conception, we will be trying to achieve the impossible.

I definitely mistook the point of your original final paragraph.

I would say, however, that it’s an interesting practical and empirical question how far one might get with a more rule-bound system. As you say, the problem is not restricted to morality. By way of analogy, the game of “Go” has proven to be a hard nut to crack — much harder than chess — and although the best programs aren’t yet at the level of world-class human players, they nevertheless play at an advanced amateur level. It’s an open question whether the subtle judgements of the best human players can be matched by currently available AI methods — with the wrong conception of the task, as you say, it may be impossible. But even so, one might be able to get a long way towards something serviceable with methods that won’t get us all the way to human-level performance.

Susan and Michael Anderson have described a method of using machine reasoning to infer, from human judgements about medical ethics problems, how conflicts between the (Ross/Beauchamp and Childress) duties commonly used to evaluate such cases are weighed by ethical experts. As discussed in Moral Machines, I have several concerns about their specific approach, including the notion of expertise that they rely on and the rather thin representation of the dilemmas that they use. Nevertheless, it seems to me to represent an interesting approach to developing a computational method that could approximate the more subtle moral assessments that people make.
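To give a rough, purely illustrative sense of the kind of computation involved (a toy sketch of the general idea just described, not the Andersons’ actual system; the duty names, scores, and cases below are invented for the example), one could represent each dilemma by how strongly each candidate action satisfies or violates a few prima facie duties, and then fit weights to the experts’ verdicts:

```python
# Toy sketch: each case records how strongly two candidate actions satisfy (+)
# or violate (-) three prima facie duties, plus an expert's verdict about which
# action is preferable. A perceptron-style learner infers duty weights that
# reproduce those judgements. Illustrative only; data are made up.
DUTIES = ["autonomy", "beneficence", "nonmaleficence"]

CASES = [  # (action_a, action_b, expert's preferred action)
    ({"autonomy": 2, "beneficence": -1, "nonmaleficence": 0},
     {"autonomy": -2, "beneficence": 2, "nonmaleficence": 1}, "b"),
    ({"autonomy": 1, "beneficence": 0, "nonmaleficence": -2},
     {"autonomy": -1, "beneficence": 1, "nonmaleficence": 2}, "b"),
    ({"autonomy": 2, "beneficence": 1, "nonmaleficence": 0},
     {"autonomy": -1, "beneficence": 1, "nonmaleficence": 0}, "a"),
]

def score(action, weights):
    return sum(weights[d] * action[d] for d in DUTIES)

def learn_weights(cases, epochs=100, lr=0.1):
    weights = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for a, b, preferred in cases:
            margin = score(a, weights) - score(b, weights)
            target = 1 if preferred == "a" else -1
            if margin * target <= 0:  # verdict not reproduced: adjust weights
                for d in DUTIES:
                    weights[d] += lr * target * (a[d] - b[d])
    return weights

print(learn_weights(CASES))  # inferred relative importance of the duties
```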

– Colin

Ref:
Anderson, M., Anderson, S., and Armen, C. (2006a), “An Approach to Computing Ethics,” IEEE Intelligent Systems, Vol. 21, no. 4.
