Challenges for a Humanoid Robot

The Star Wars character C3PO is so convincingly depicted that we may have to remind ourselves that there was no real robot behind the elegant mannequin. The passage of time has not remedied this deficiency; nor, alas, have I a blueprint to offer. I do believe, however, that it will repay us to identify some attributes a robot would need in order to count as humanoid. By clearly distinguishing among such features, and considering what our attitudes toward such a device might be, we can enrich our understanding of what it is to be human.

Perhaps the most striking feature of C3PO is that it’s so smart. (It? Yes; but don’t think I wasn’t tempted to write “he”. I’ll return to this temptation.) Why do we think of it as smart? We see it confronted with complex situations that no one could have anticipated, and we observe it doing things and saying things that are appropriate to those situations. “Appropriateness” here implicitly refers to its goals — in this case, helping the rebel forces, protecting Luke and, where compatible with those goals, protecting itself.

So, if we want to build a real C3PO, we’ll have to make something that can speak and act appropriately to its goals over a wide range of unexpected circumstances. That, in a nutshell, is what the key idea behind Alan Turing’s famous test for computer intelligence becomes when reformulated for robots.

We will also want to insist that whatever enables our robot to act appropriately in unexpected circumstances be entirely “on board” — that is, encased within its titanium skin. Otherwise it won’t be humanoid. Our intelligence is something we get from what goes on within us. If we were to get detailed instructions about what to say or do via, say, a cell phone conversation, our actions would show the intelligence of the person we talk to, not our own. Our real C3PO could look up information, just as we do, but the processes that connect that information to its actions must be internal to it.

Robots, unlike computers, have effectors that enable them to move around and to act upon people and things they encounter. They must also have detectors that carry information about their location and the nature of the things in their vicinity. Our real C3PO will need to have little microphones, cameras, chemical analyzers, and parts of its skin that can bend enough to detect differences in pressure.

Robots’ detectors are often called “sensors”, but here we must be careful. When your house cools on an October evening, your thermostat detects the fall in temperature and turns on your furnace. But no one supposes that your thermostat suffers by feeling cold. A metal coil simply contracts and causes a circuit to close. Merely detecting a change does not imply having sensations.

That does not mean that we could never build a robot that has sensations. It shows only that the project of building a robot with sensations is different from the project of building a robot with detectors.

If we did want to build a robot with real sensations, how should we proceed? When I ask my students this question, they often respond with “Why would anyone want to do that?” That’s a good question that reflects an understanding that robots, as we usually think of them, don’t feel anything, and so can’t suffer. That’s why we think they’re ideal for jobs that would be dangerous to people, like fixing damaged nuclear facilities.

But suppose we are perverse philosophers and want to make a robot that has real sensations just to show we could do it. What should we do? The usual method of producing something is to find out what causes it, and then to bring it about by producing its cause. The causes of our own sensations are not known in detail, but there is wide convergence in science on the view that sensations depend on activities in our neurons. So, our best shot at producing real sensations in a robot would be to reproduce, in the robot’s electronic parts, the same patterns of activity that take place in our neurons when we have sensations.

Such a project would undoubtedly be very expensive, so let us now suppose that in a First Generation of humanoid robots, we forego it and settle for a robot that has no sensations. It does, however, have excellent detectors, and the electronic connections between its detectors, its inner processors, and its motors enable it to do what C3PO does in Star Wars.

A question that is now likely to arise is this: When it speaks, does it understand what it’s saying?

In some cases the answer seems to be “Obviously not” — namely, cases where it uses sensation words. So, for example, if C3PO says “Take the morphine to the sick bay. Luke is in terrible pain”, we may doubt that it understands “pain”. If it can’t feel, it has never felt a pain, and so doesn’t fully understand what that word means. At best, it knows that pain, whatever that is, is something people go to great lengths to avoid.

But what about “Luke” and “sick bay”? With these words, there seems less of an obstacle to C3PO’s understanding. Of course, if C3PO called everyone “Luke”, or said “Here’s the sick bay” randomly, whether it had entered the sick bay or not, we would not take its words to be meaningful. We’d say “It utters words, but they don’t mean anything”, or even “It utters noises that sound just like words, but when it produces them, they’re just noises”.

But the C3PO of Star Wars is not like that. It utters its words appropriately. Its reports reliably correspond to what it detects, and its sentences about what it is going to do correspond to what it actually goes on to do (except, of course, when it is cleverly deceiving members of the Empire’s forces). If we had a real robot of which such things were true, we would take what it said seriously — which means that we would act upon what it said just as we do in response to what our fellow humans say. Disregarding C3PO’s words — treating them as mere noises — would cost us dearly, and would not be an attitude that came naturally to us.

Some philosophers will hesitate to agree that a robot without sensations could mean what it said, on the ground that it could never feel to such a robot that it meant what it said (since nothing ever feels any way at all to a sensationless robot).

Now, one is free to adopt a decision to use “means what it says” in a way that requires having feelings. It is not a matter for verbal decision, however, that what gives words the meanings they have is the way they fit into a highly complex network. This network relates words to things that affect detectors and to verbal reports of what is detected, arguments that involve use of the words, statements of intention to do actions of various kinds, and actions themselves. The project of building a robot that uses words that fit correctly into the same network that we use makes sense so long as the robot has detectors, speakers, and effector motors. It does not depend on whether we have also attempted the further project of endowing it with sensations.

An analogy may help us understand how a network is related to meaning. Consider, for example, a rook in chess. Rooks are usually made to look like a castle tower, but any shape would do, so long as it was easily distinguishable from the shapes of the other pieces. What makes the rook a rook is that it can be moved only in certain ways. What makes it a piece in a chess game is that it is part of a network of pieces that get moved only in accordance with the rules of chess.

Of course, the rules of language are far more complex than the rules of chess, and they include rules that relate words to things and actions. For example, you are to say “I am in the sick bay” only when you are in it, and “I am going to the sick bay” only when, barring unexpected obstacles, you then proceed to the sick bay. The analogy holds, however, for the richer set of rules: noises are meaningful words because of their place in a network of rules, just as pieces of wood are rooks because of their place in their own network of rules.

C3PO’s noises are likewise meaningful words to the extent that they play the same roles as your words. If you say something to it, and it acts just as other people would if you’d said the same thing to them, it has understood what you’ve said.

We can, then, imagine a robot that uses its words meaningfully, even if we are imagining a robot that does not have sensations. But whether we think we have endowed a robot with sensations does make a difference to how we are likely to think we should treat it. If we think it cannot feel anything, we won’t have a certain kind of qualm about exposing it to danger. Our decision will be entirely a matter of balancing benefits against repair and replacement costs.

But now let us imagine a Second Generation robot endowed with the causal mechanisms required to produce genuine sensations. Now we must consider the possibility of pain. Of course, it may be that it — or, perhaps, we should now say “he” — will have to suffer anyway. After all, we do sometimes call on people to endure high risks. But the possibility of pain forces us to consider a factor that’s additional to repair and replacement costs.

Let us now imagine that a robot does something it shouldn’t do. If we think we have a First Generation, sensationless robot, it’s obvious what to do. We should send it back to the factory, just as we would a vacuum cleaner that occasionally deposited some of its sweepings on the rug. In the robot’s case, it might be difficult to figure out how to fix it, but there is no alternative to making some rearrangement of its parts.

But if we have a Second Generation robot that has sensations, another possibility is available: We may threaten to do something that will cause it to have pain. If our threat is sufficiently credible, that may be enough to stop it from misbehaving again. And threatening may be better than sending it back to the factory, because we may not know how to rearrange its parts so that it stops misbehaving while retaining all the abilities that made it useful to begin with.

Let us now stipulate that Second Generation robots have pains only from physical abuse. The only sorts of threats that would make sense would be such things as beatings, applying blowtorches and, perhaps, leaving a robot in a painful low-battery state for a prolonged time.

But if we could endow a robot with the causes of pains, perhaps we could also build one that had the causes of other feelings, such as remorse, discouragement, or wounded pride. We can thus imagine a Third Generation robot in which causes of these feelings are activated only in circumstances that would activate such causes in us. If we had such a robot, we might then influence its behavior by regularly causing it to have these unpleasant feelings when it misbehaved.

Now, imagine that you have lived in a world with many Third Generation robots, and have become quite accustomed to these more subtle kinds of interactions with them. Since these robots are smart, they’ll anticipate their owners’ reactions. You and your fellow humans won’t have to cause robots’ pains, or even threaten to do so, very often. Most of the time you’ll just react to good and bad behavior with smiles and frowns, and most of the time, things will go reasonably well.

If you can imagine such a world, you can imagine a world in which people treat the Third Generation robots just like they treat human beings. If robots seriously misbehaved, you’d get angry at them. Why do I think so? Who has not kicked or swatted a car or vacuum cleaner that was not working right? We chide ourselves, of course: the kick can’t be felt and if it changes how the machine works, it will likely not be for the better. But if we kicked a Third Generation robot (avoiding its head), we might not be behaving irrationally in either of these ways.

The world I’ve just asked you to imagine is more than a world in which we treat Third Generation robots in a certain way: it is a world in which it would seem quite natural and proper to do so. The consequence I am willing to draw is that Third Generation robots have everything that’s needed to properly ground the attitudes toward them that we usually have to our fellow human beings.

If that is right, then there is a further consequence. If we stopped to think about our robots, it would be evident to us that they did not make themselves. They were made in a factory, and the changes that have occurred inside them since their manufacture have depended on how they were constructed to begin with, and what has affected their detectors subsequently. It would be evident to us that it would make no sense to blame them for what they are, or for the condition that they are in at any given moment. But, even after thinking about it, it would still make sense to treat them in the way that would come naturally to us — namely, the way we treat other people today, including blaming them for what they do, and attempting to make them feel bad about improper behavior.

It is consistent for us to have the same pair of attitudes toward our fellow humans. We can recognize that, although they were made in a womb and not in a factory, they did not make themselves, and the state they are in now depends on what they were when they were born and what has happened to them since then. It does not make sense to blame them for who they are, or for the condition they are in at any given moment. It’s appropriate to blame them for what they do, if they behave badly, but never for who they are.

Fully accepting this view of ourselves requires something that is difficult for us to do — namely, to keep two attitudes in full play. These are: calm recognition of blamelessness for being in the condition that we are in when we act; and anger and blame for actions when they are immoral. But it is only by embracing both attitudes and keeping them both robustly in mind that we can properly recognize both our humanity and our position in the natural world.

[The view expressed at the end of this essay is further developed and supported in my recent Your Brain and You: What Neuroscience Means for Us, available on Amazon. A convenient link, and references to my work on other issues discussed here, can be found on my website, yourbrainandyou.com. My sources are too numerous to mention, but I should note that the chess analogy is drawn from Wilfrid Sellars. Thanks to my first reader, Maureen Ogle.]

19 comments to Challenges for a Humanoid Robot

  • As I told Bill, C3PO seemed so smart because he was a British human stuffed into a very compact suit of gold-colored robotic armour. Of course, Bill’s point still holds in principle about building an actual humanoid robot who is NOT a British actor.

    Some of the things on Bill’s list to make a robot smart look like behavior. But of course behavior is only an outward manifestation of thought. So to make a robot that can act smart, we have to learn how to build one that is smart—one that can think. As Bill correctly points out, THAT is the holy grail first sought by Alan Turing. Unfortunately, Turing’s test is an operational one (about behavior), not one about computation. Which computations over symbols are intelligent? Surely not those in existing robots, or even in computers that can pass the behavioral “Turing Test” run each year (as in 2011), fooling many subjects. So to build a truly intelligent humanoid robot, we have to find the mark of the cognitive—the sign of what makes a process a cognitive process (whether it be in a brain or in a computerized robot). We know that vision and thinking and reasoning and learning are cognitive activities, yes. But what is it about them that makes them cognitive? There has been little, or at least not nearly enough, attention devoted to this question, and I welcome Bill’s prodding as a means of moving us all a bit closer to an answer.

    I also applaud Bill for raising the matter of sensation and how it is different from mere sensors. Sensors fire mechanically in response to properties they lawfully detect in the environment. Sensations may do that too, but they do more. They make us consciously aware of the things we sense, and they do it in a way that, as philosophers like to say, has a “what it’s like” quality, cleverly called qualia. Bill is getting at the matter of whether we can build a humanoid robot that could experience qualia. Personally, I don’t see why we couldn’t, and I don’t see why we couldn’t build an intelligent thinking one too. We aren’t doing it now because we don’t know how, not because it isn’t possible—in my view.

    Now I personally believe that when we experience qualia we are experiencing external physical properties of things—for instance, when I taste coffee, I am experiencing the chemical composition of the coffee (my mind is coming into direct contact with those coffee molecules). I’m not experiencing something located only in my mind. Of course, my experience is in my mind, but what I’m experiencing isn’t. In this I follow the representational theorists of consciousness (Fred Dretske, 1995, and Michael Tye, 1996).

    While Bill doesn’t mention it in this piece, he is also an epiphenomenalist about the qualia of experience. He thinks we have them but they don’t cause us to do anything. They don’t cause anything at all in the workings of the brain. Sort of like the low oil light on your dash: it is caused by low oil, but it does not affect the workings of the engine itself. I, on the other hand, think that it is because of the qualia I experience from a good wine that I want another glass, and because I hate the unpleasant qualia of the dentist’s office that I loathe my next dental appointment. But I won’t dwell upon this since Bill didn’t in this short piece.

    In the end, I could not help thinking that Bill’s 3rd generation robots really are US. And in the end, it seems we are back to the free-will question. Bill wants to separate the freedom to be from the freedom to do. The moral seems to be that neither we nor his 3rd generation robots were free to be who they are, but they (we) are free to act, and so we should hold them accountable for what they DO—not necessarily for what they ARE. I guess this is so only if we don’t change who we are by what we do. I guess it was Aristotle who seemed to tell us that we should pattern our actions after the persons who live good lives (hit the mean between life’s extremes), and in this way we too will become good citizens. So maybe we aren’t responsible for who we are at first, but maybe we are responsible for who we become. Maybe that is a positive take-home lesson of Bill’s article. Worrywart that I am, I still wonder how I do anything freely at all. After all, my car doesn’t, and I’m no less mechanical than it is.

  • There are obviously certain “attitudes … that we usually have to our fellow human beings”, as Bill puts it, that we don’t have toward other animals or existing machines, including our most sophisticated computers. What is the peculiar feature of human beings that makes those attitudes appropriate in the case of human beings but inappropriate in the case of animals and existing machines?

    The answer given over the centuries by very many writers, artists, and philosophers, not to mention men and women of affairs, is that this feature is something very special, something so special, indeed, that it’s almost certainly incompatible with our having a purely animal nature. Perhaps the feature is possession of an immaterial soul that grounds our unique worth, that enables us to tell right from wrong and to originate action freely, and that opens up the possibility of life after death.

    Bill aims to nudge us toward the view that this feature, or cluster of features, might turn out to be rather more mundane than this answer suggests.

    He writes in the spirit of Morgan’s Canon, according to which “in no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale” (C. Lloyd Morgan, An Introduction to Comparative Psychology, New York: Scribner’s, 1894, p. 53). But Bill isn’t trying to interpret actions; he’s trying to justify the special attitudes that we take to our fellows—while being as economical as possible in his ascription of psychological capacities to humans.

    I think Bill’s task is easier in one way, but harder in another, than he thinks.

    (1) He writes that “If we think [a robot] cannot feel anything, we won’t have a certain kind of qualm about exposing it to danger. Our decision will be entirely a matter of balancing benefits against repair and replacement costs”. But on at least one view of morality, the capacity for sensation is not required for something to be worthy of moral consideration. On the broadly contractarian view that morality is a set of behavior-restricting rules that one would agree to follow on condition that others do so too, a robot would only need to be capable of agreeing to the rules in order to be morally considerable.

    (2) Bill envisages his Third Generation robots as capable not just of pleasure and pain but also of remorse, discouragement, or wounded pride. I worry that having these complex emotions requires sophisticated cognitive capacities—e.g., to think about oneself in the past, present, and future, or to think about others’ thoughts about oneself—that we may not know how to model.

  • This is a response to Andrew Melnyck’s comment of 6/7.

    Andrew has me right about wanting to nudge us toward the mundane. To put it in slogan form, a richly interrelated set of mundane things is worth a lot more in understanding ourselves than the idea that we’ve got one simple, extraordinary thing.

    The contractarian comparison is interesting, but I think we can distinguish two cases. One is where robots are seriously intimidating, but not overpowering. In that case, we might have to enter into a social contract that includes both them and us, in order to do the best we can for ourselves.

    The other case is where we feel an obligation to include them in the terms of our contract, on the ground that we can cause them genuine pain.

    I was thinking of the second case. But even regarding the first, I wonder whether we would really have a system of morality. What makes me wonder is the thought that if I imagine being faced with a chance opportunity to eliminate the robots, it would not seem immoral to do it; whereas the analogue for a group of people would be genocide.

    Well, maybe my moral intuitions just haven’t been schooled sufficiently by living long enough in a contract situation with our fellows, the robots. But it may be, instead, that contract theory has a problem with identifying just which properties must be equal (or nearly so) in order for parties to be sensibly regarded as eligible to enter into a contract. I am not sure about this.

    As to the second point, I completely agree that emotions have cognitive as well as feeling components, that complex emotions have complex cognitive components, and that we do not now have a good account of the needed cognitive components. (Nor, I’d add, even of simpler aspects of how the mind works.) My Third Generation robots are, for sure, presently mere science fictions.

  • This is a response to Fred Adams’s comment of 6/6.

    As I told Fred, I’d not been aware of how C3PO was done. But I certainly took the character to be smart. So, I don’t think I thought of C3PO as smart *because* there was an actor inside the suit. The impression of intelligence comes from what the character says and does.

    I think Turing had it right for his title topic, which was machines and *intelligence*. I.e., I’m happy with behaviorism for intelligence (but not, evidently, for sensations). We are fully warranted in thinking we are intelligent, and that our behavior is caused by our brains. We are far less warranted in believing any presently offered account of how our brains enable us to behave intelligently — including “computationalisms” that say more than that our behavior is produced by physical, causal processes.

    That’s not meant to discourage trying to find an account: I’m in complete agreement with Fred that we should be looking for a good account of the causes of intelligent behavior, and also a good account of the causes of sensations.

    I can’t go far into my disagreement with Fred over representationalism — though I have written about that elsewhere. Here, I’ll just say that I don’t think that my mind comes into direct contact with coffee molecules. That contact, I think, is quite indirect, being mediated by inferences it took scientists centuries to construct and verify. Yes, the coffee molecules cause my experiences — but experiences aren’t *just* neural events, either. There’s the *taste*; and that does not have the kind of complexity that either molecular structures or neural events have.

    Yes, the abilities of Third Generation robots are the important ones that we have. Fred is, of course, right to relate this to the traditional free will problem, and he’s right to point to the complication that arises for my view because sometimes what we do affects who we are later. I explain how this complication is to be treated in _Your Brain and You_; here I’ll just note a result of that discussion. To wit, in cases where people do something that affects who they are later on, we still have to bear in mind that at the time they did it, their decision was a product of who they were at the time and how their environment was affecting them at the time. We have to maintain compassion for their being who they then were, even if we condemn and punish the action. And this point holds no matter which time in a person’s life we happen to be considering.

  • Fred Adams hits the bull’s eye with his analysis that the underlying theme is that of free will vs. determinism: assuming we are equivalent to 3rd gen robots, how do we attribute responsibility to humans while also agreeing that what they are is a result of their genetics/upbringing/environmental influences? Fred also correctly distinguishes between being held responsible for what we are when we start life’s journey (say as a child) and what we ‘choose’ to become as we grow. As Fromm has said, Man’s main task in life is to give birth to himself. The theory of holding someone responsible for his actions because of the consequences is a quaint theory dependent on outcomes rather than intentions — not that this isn’t how we usually judge and act towards others. However, a more moral/ethical/developed stance is to use the intentionality of an action to judge it; thus the focus shifts to what you are/want to become rather than just how you act.

    The superficial problem of how to act towards 3rd gen robots masks the greater problem of whether the 3rd gen robots are truly equivalent to humans (qua having qualia, but let’s say not being ‘agents’); I assert that for robots to have human equivalence they also have to have agency — however one defines it — and that will then make them as much responsible for what they become as for what they do. When the difference between what you do and what you become vanishes, there is no longer a problem of how to judge others/how to act towards other sentient agents.

  • Bill Robinson’s terrific, all too brief discussion—which makes me eager to read the book—strikes me as throughout highly sensible, and I have very little I might take issue with or raise questions about. But I will raise a few concerns that occur to me, and make a particular suggestion about robots and sensations.

    First-generation robots, on Bill’s account, lack feelings and sensations, but they can talk in ways that comport with both the robots’ perceptible environment and their presumed goals. So the robots must have detectors to register their environment, though Bill warns that mere detection may not amount to actual sensation. More on that below.

    The robots evidently also have goals, intentions, and the like. Still, “[i]f we think [such a robot] cannot feel anything,” Bill writes, “we won’t have a certain kind of qualm about exposing it to danger. Our decision will be entirely a matter of balancing benefits against repair and replacement costs.”

    I’m not so sure—or at least I’m not so sure that that’s the attitude we ought to have. If a robot does have goals, intentions, aims, and the like and also interacts with us and other robots in respect of its goals and intentions and what it detects others’ goals and intentions to be, that robot has, in a way, a sense of itself. I use ‘sense’ here in a way that has no implications about qualitative mental states, since Bill stipulates that our first-generation robots lack those.

    But first-generation robots, as Bill—correctly in my view—maintains, understand what they say. And if so, such a robot has a certain range of thoughts—at least those thoughts that its speech acts express in words, and probably more thoughts that lead up to those that are verbally expressed. Thoughts and other so-called intentional states, moreover, involve holding mental attitudes towards intentional contents, for example, an assertoric attitude toward the content that Luke is in danger and a desiderative attitude towards the content that Luke be placed out of danger.

    Holding such mental attitudes towards intentional contents involves a kind of sense of oneself—again, not a sense that is in any way qualitative, but something nonetheless that we can colloquially call a sense of self. And any such robot must, to function reasonably well, hold a bunch of attitudes towards intentional contents that are about the robot itself. So I would be somewhat reluctant simply to send a malfunctioning robot back to the shop to have its parts rearranged—unless I had confidence, as one also hopes with human brain surgery, that the rearrangement of parts would not in any way impair the robot’s sense of self.

    Perhaps Bill would argue that the first-generation robot can speak, understand what it says, have its speech acts play suitable roles in the kind of rule-governed nexus that governs our own meaningful speech, all without having thoughts and without being in any other states correctly described as intentional. I would dispute that. A robot can do all that only if it can think—indeed, I would argue, only in virtue of its thinking.

    That’s one concern. I can put a second concern more concisely. I think it’s not clear that intentional states and sensations can be added to a robot wholly independently of one another. Bill says that we may be able to design a robot to be in mental states of one sort or another by finding out what causes the states in question and engineering the robot “to reproduce … the same patterns of activity that take place in our neurons when we have” those states. I think that’s a highly sensible approach. My concern is just that we may be unable to engineer robots to be in one sort of mental state without engineering them to be in others.

    It’s likely that in animal evolution, sensation occurs well before anything like thinking. So a particular issue one might raise in designing robots to have mental capabilities is whether the thinking that probably must accompany language use can be designed into a robot that doesn’t have genuine sensations. I’ll come back to the question of what genuine sensations are at the end of my remarks.

    Consider the “other feelings” Bill describes third-generation robots as having, feelings “such as remorse, discouragement, or wounded pride.” (I take it that Bill is considering in this context the possibility of steering the behavior of the robot “by the rudder of pleasure and pain” [Aristotle, E.N., X, 1, 1172a21]; presumably our second- and third-generation robots should have some nice feelings as well.) It’s unlikely that remorse, discouragement, and wounded pride can occur without both mental qualities and intentional content. So the robot’s engineers will have to take into consideration how mental quality and intentional content interact in forming such complex feelings.

    Let me close with a few remarks about sensations. Sensations are often thought of as especially difficult to understand, and hence to engineer into robots. My own view is that neither of these things is so. The apparent difficulty, I believe, stems from wrongly thinking of sensations as essentially conscious states.

    Sensations—like bodily sensations of pain and visual sensations of red—are states that figure essentially in perceiving, in the case of pain, the perceiving of one’s own bodily states. And we know from very extensive experimental and other empirical findings that some perceiving occurs without being conscious, as in subliminal perception. So the mental qualities that figure in perceiving must be able not only to occur consciously, but also to occur without being conscious.

    Given all that, we can seek to give an account of sensations in terms of their role in perceiving, both conscious and subliminal. It’s natural to think of mental qualities as properties that occur in response to a range of perceptible properties, for example, perceptible physical colors in the case of visual mental qualities and kinds of tissue damage in the case of pain. This approach promises to make it no less easy to design genuine sensations into a robot than to engineer language use and intentionality.

    Can language use and intentionality occur in a robot that lacks sensation? Bill’s first-generation robots lack sensation, but have detection. That raises a further question: If sensation can occur, as I argue, without the sensations’ being conscious states, what’s the difference between nonconscious sensations and the states that enable mere detection to occur? For that we need an account that does not simply define sensations as being necessarily conscious states, for example, an account that characterizes the mental qualities of sensations by appeal to sensed perceptible properties. On such an account, it may be that if detection is sufficiently elaborate, nonconscious sensations simply are the states that make such detection possible. An independent hypothesis would be needed, then, to explain how conscious sensations differ from sensations that fail to be conscious. But both would count as sensations.

    What should our attitude be toward robots—or other creatures—that have sensations none of which are conscious states? When a creature—robot or animal—is in pain but the pain is not a conscious state, should we care about that creature’s nonconscious suffering? I think we should. Pains are bad for the overall state of a creature even if the pains are not conscious pains. Similar remarks hold for other types of sensation.

    All this said, I think these comments and potential disagreements with what Bill says are relatively minor compared to the large and very significant areas of agreement. I look forward to reading the book that Bill’s brief remarks summarize. And I look forward to learning Bill’s reactions to these thoughts.

  • This is a response to Sandeep Gautam’s comment of 6/8.

    I agree that I’m concerned with attributing responsibility in our actual circumstances, and that the distinction between who you are and what you do is a key part of what I have to say about that issue.

    However, what I’ve said will seem less acceptable than it should if it is believed to imply that intentions do not (or should not, or cannot) enter into our considerations about how to respond to people or robots. I don’t believe that, and I don’t think I’ve inadvertently implied that view.

    To make the point briefly but, I hope, clearly, we do not think people’s actions are perfectly ok if they accidentally failed to bring about a harmful consequence that they were trying to bring about. That has a lot to do with the fact that if they wanted to do a harm, but failed, they might well try again (and succeed on their second try). So, it is quite natural for us to be concerned not just with actual consequences, but with what people are trying to bring about.

    If we think Robbie Robot would have had no barrier to achieving one of its goals by doing a harm, and did not do one only because it bungled (its detectors failed or were obscured, or its intelligence was limited, etc.), we will not be satisfied to let it roam around the house without first calling in a repairman.

    Regarding the idea from Fromm, I don’t think we generally aim to choose ourselves. We usually make particular decisions, and some of them turn out to have significant effects on the state we’re in when we make important, later decisions.

    But even if one does think of a case where a person actually thinks “I’m now choosing not just what to do now, but what kind of person I’m going to be for the future”, it will still be the case that the decision that’s made will be a product of what condition that person is in when the decision is made, and what his or her environment is contributing when the decision is made.

    The remaining point concerns agency. I think that is a combination of several things, e.g., ability to keep track of what one has done versus what others have done, or what has been done to one, ability to relate where one is located in relation to other places, and ability to anticipate some of the consequences of what one does. I didn’t mention such abilities in the article, but I think they are of the same general kind as abilities I did mention. E.g., if C3PO can give directions to people who don’t know where the sick bay is, it must have some sort of representation of where it is in relation to the sick bay.

    And, of course, if a Third Generation robot can suffer remorse, it has to be able to distinguish its actions from those of others. — So, I think it’s a natural addition to what I said in the article, to think of Third Generation robots as having the suite of abilities that are needed to construct agency, and knowledge of being the agents of their own acts.

  • This is a response to David Rosenthal’s comment of 6/8.

    There’s a lot here, and I’m grateful for all of it. I can’t possibly go into all your points in the detail they deserve, but I’ll try to respond in some way to all of them.

    I’ll follow you in deferring discussion of sensations to the end.

    Regarding the sense of self (quite properly distinguished from anything qualitative) I think of that as a set of abilities, and refer to my response to Sandeep Gautam for a little more explanation of that. Yes, we certainly would not want the factory to return us a robot that could not distinguish its own actions from those of others. But if it came back with revised inclinations, or a little more willingness to risk its safety for the sake of ours, I’d think that was all to the good.

    I’m not happy to move from a robot’s, or a person’s, acting intelligently to the view that it has a series of thoughts that it expresses in words (and others that “lead up to” those it expresses). There is, I agree, a difference between acting considerately as opposed to impulsively, and there is a difference between cases where we say a good many things to ourselves in inner speech before we act, and those where we do not. But I think it’s treacherous to think of the processes that precede actions as naturally divided into units that correspond to sentences. The sentences we speak (inwardly or overtly) are products of our brain processes, and supposing a similar structure in those processes begins a vicious regress. (Or so I argue — there’s more on that in the book, too.)

    I completely agree with the idea that you can’t engineer in one thought at a time. Believing even one claim involves understanding the words used in making it, and that will involve being able to use the words in other contexts, and so on. Reflection on children seems helpful here. There are cases where one thinks they’re just imitating a string of words without understanding, or using them as a signal, not as meaningful words. But then there are cases where it seems clear they understand what they’re saying, even though there are many uses of their words that adults would understand, but they cannot. But wherever we think they understand, we think they can respond appropriately to the same words in at least some other combinations and circumstances.

    I also agree that if robots can suffer, they can, in principle, enjoy.

    Now, to sensations. Well, of course, our differences about that go back a long way. There’s no possibility of resolution here; I’ll just make a few remarks that suggest where and why I differ.

    I don’t think of sensations as the perceivings of bodily states. They are, instead, the feelings — the conscious episodes — that are caused when our bodies get into certain conditions, and when we come to know that our bodies are in those conditions. E.g., damage to our bodies causes a pain (= an episode of consciousness of a certain kind). If we’ve had a normal cognitive development and are no longer an infant, we will also know we’ve been damaged. So, I don’t agree at all that “the mental qualities that figure in perceiving must be able not only to occur consciously, but also to occur without being conscious.” The mental quality that occurs along with our coming to know we are damaged is pain, and — in my view — there are no unconscious pains.

    Of course, detection can be unconscious. For example, I think we begin to perspire before we feel that our surroundings are warm. Our body detects a small rise in its temperature, and responds to that detection. On my way of thinking, it’s not that we have unconscious sensations of warmth; instead, we have bodily mechanisms that work without causing any sensations.

    Naturally, I don’t think that there is such a thing as unconscious suffering. So, I don’t think a First Generation robot has anything we should have an attitude of care toward. What it’s got is detectors of states of its body (e.g., pressure sensors, thermometers), and having these get into states that indicate extremes (of squeezing, freezing, being near melting, etc.) is in no sense bad for it. It’s all good, provided they’re connected to a robot brain that will remove it from destructive circumstances; if not, there’s no reason to feel bad about it except for the wasted expense.

    I’m completely aware that these remarks amount to little by way of argument. I do hope and believe they will be found plausible, and that they will at least clarify our differences. (I have given arguments elsewhere, primarily in Understanding Phenomenal Consciousness.)

  • Bill Robinson cites two types of attributes that a robot would need in order to qualify as a humanoid. One type of attribute includes those that are constitutive of the capacity to meaningfully use language, and the other includes those that are constitutive of responsible agency. Has he given us good reason to believe that a robot could possess these attributes?

    Of a first generation robot like C3PO, Bill writes that if it utters words appropriately, its reports correspond to what it detects, and it correctly predicts its own behavior, then,
    “we would take what it said seriously — which means that we would act upon what it said just as we do in response to what our fellow humans say. Disregarding C3PO’s words — treating them as mere noises — would cost us dearly, and would not be an attitude that came naturally to us.”
    There are plenty of scenarios where we should take seriously C3PO’s words, but the same can be said of digital road signs informing us of our speed or of traffic congestion ahead. More intriguing is the interaction we have with machines such as a bank’s ATM, for here there is a kind of communication that might tempt us to attribute comprehension. Yet, most of us are no more inclined to think that the ATM comprehends its own displays than that the road sign understands traffic patterns or a pocket calculator grasps arithmetic.

    If we spice up the interaction, as with C3PO, then some may be inclined to agree with Bill’s more substantive claim that its noises are
    “. . . meaningful words to the extent that they play the same roles as your words. If you say something to it, and it acts just as other people would if you’d said the same thing to them, it has understood what you’ve said.”
    In attributing language comprehension to the robot, Bill is taking sides in a debate that was popularized three decades ago with John Searle’s Chinese Room Argument. Searle contended that a mechanism could exhibit linguistic behavior by producing linguistic outputs in response to linguistic inputs in a rule-governed way that simulates understanding, but without actually understanding the symbols it is using. Arguing analogically from the example of the Chinese Room, Searle concluded that linguistic behavior that conforms to rules is not sufficient for language comprehension. At best, such linguistic behavior is a sign of meaningful language use, not constitutive thereof.

    Judging from what he said, Bill is not persuaded by Searle’s argument. But unless he adopts the seemingly implausible view that the Chinese Room and an ATM are capable of using language meaningfully, he owes us an explanation of just how C3PO differs. More than a mere capacity to exhibit appropriate (rule-governed) linguistic behavior is required. Is it the “on board” encasement “within its titanium skin” that tips the balance? But couldn’t we soup up the ATM and the Chinese Room to satisfy this condition as well? Suppose we included sensory feelings characteristic of a second generation robot. Plainly, this would not be enough either; language comprehension is more than the mere sensory awareness of the sounds and marks that token syntactic types, otherwise we would all be terrifically multilingual.

    At the very least, linguistic competence requires communicational intentions and, thus, goals that are to be achieved through the use of language. If having such intentions and goals is nothing more than being internally disposed to behave in accordance with certain patterns, there seems no barrier to creating a first-generation robot with something very much like communicative intentions and communicational goals. Yet, communication is only part of the story with linguistic competence. At the core of the skeptical concerns about a robot’s ability to meaningfully use language is a suspicion that rule-governed manipulation of symbols might occur in the absence of comprehension, that is, of understanding what the detected tokens represent. Bill is unmoved by this concern since he is impressed by the fact that C3PO’s network relates “words to things that affect detectors and to verbal reports of what is detected, arguments that involve use of the words, statements of intention to do actions of various kinds, and actions themselves.” Appropriate behavior, he finds, is constitutive of the representative function of language as well as the communicative use. But then we are back at the initial challenge: just how does C3PO differ from the Chinese Room or an ATM in its ability to use language meaningfully? Is this difference just a matter of greater complexity, higher processing speed, the right sort of hardware? Or, should we bite the bullet and acknowledge that C3PO differs merely in its degree of language comprehension?

    In turning to considerations of responsible agency and whether a robot might do things it shouldn’t do, Bill considers second and third generation robots. He rightly notes that a first generation robot that malfunctions or misbehaves should be sent back to the factory for an overhaul. That response may not be the best for generation 2 robots endowed with sensations and a capacity to feel pain. Perhaps it would be more effective, quicker, or cheaper if we merely threatened such a robot with pain should it misbehave. Yet, this would amount to a very slender type of responsible agency. After all, we threaten our domesticated animals with pain in causing them to behave in desired ways, and many of us would be reluctant to blame a dairy cow, for example, if it enters the wrong stall, at least not in any way that would lead us to attribute the type of responsibility we recognize in humans and require of humanoids.

    Conceding that more may be required for justified blame, Bill considers a third generation robot endowed with the capacities to feel remorse, discouragement, or wounded pride. Bill writes that “we might then influence its behavior by regularly causing it to have these unpleasant feelings when it misbehaved,” and concludes that they “have everything that’s needed to properly ground the attitudes toward them that we usually have to our fellow human beings.” Even if the consideration of being a factory product implies that we cannot blame such a robot for what it is, that would not override the appropriateness of making it feel remorse for improper behavior. By the same token, Bill argues, our fellow human beings did not make themselves either, and it makes no sense to blame them for who they are, even if it is appropriate to blame them for what they do.

    As a compatibilist in the debate about responsibility, free will and determinism, I am sympathetic with Bill’s claims, and in previous publications I have defended the possibility of a responsible robot. Even if we know that the robot is just that, a robot, our activities of blaming, praising, etc. may still be appropriate if they are effective in bringing about or sustaining desired behavior. More must be said, however, to counter the charge that we cannot legitimately “blame” the robot since we would not “really” regard it as a responsible agent. We need to endow our robot with a sense of right and wrong and a capacity for feeling obliged to do or avoid certain actions. These feelings are not out of reach for a third generation robot with a capacity to feel remorse and pride. Also, our robot must be capable of intentional behavior and, consequently, of acquiring intentions. In fact, to cement the analogy with humans, we should grant it the ability to have character-forming intentions, which allow us to blame people for what they have become, at least in part. I have argued that the acquisition of intentions requires an antecedent sense of options which, in turn, implies feelings of uncertainty about what one will do. If I am correct, these attitudes would have to be wired into the robot as well. Can they be? Well, I don’t know enough about robotics to say, but Bill knows more than I do, so perhaps he will shed more light upon the prospects of a responsible robot.

  • The question on the table is “Could humanoid machines progress to the point where they are entitled to moral status?” I agree with the gist of Robinson’s answer: if a humanoid robot has sensations and feelings, as well as the ability to react appropriately and in novel ways to its environment, humans will — indeed, should — view it as deserving moral status. Two comments:

    (1) I’d like to underscore the pressing nature of these issues. As I emphasized in my book, Science Fiction and Philosophy (Wiley-Blackwell, 2009), science fiction is increasingly converging with science fact. Scientists are already working on humanoid robots. Consider, for instance, the very lifelike androids developed in Hiroshi Ishiguro’s lab at Osaka University, which you can view at these sites:

    http://spectrum.ieee.org/automaton/robotics/humanoids/042010-geminoid-f-more-video-and-photos-of-the-female-android

    http://www.is.sys.es.osaka-u.ac.jp/index.en.html

    I recall speaking at a 2005 workshop called “Android Science” that was designed to discuss the projects of Ishiguro and his associates. Then, the main purpose for developing the androids was to take care of the elderly. The researchers’ focus was solely on making the androids physically similar to humans. I was surprised by this, for physical similarity is only one element of a viable eldercare project — the androids must also be intelligent. After all, assuming that one is friendly to the idea of having an android take care of a loved one to begin with, it is surely more desirable that it be a creature that can respond appropriately to a variety of novel situations and be emotionally in sync with patients.

    Here, you may suspect that it will prove too difficult to make androids this smart. The AI projects of the 80s and 90s are often laughable, to be sure. But nowadays, AI is serious business. Consider IBM’s Watson program, a natural language processing system that integrates information from various topic areas to such a degree that it outperforms Jeopardy grand champions. Now, we can imagine a more advanced version of Watson that is in an android body. This creature could possess a huge stock of possible actions, sophisticated sensory abilities, and so on. The point here is that sophisticated humanoids strike me as technologically feasible — my guess is that it could happen within the next 20 years or so.

    Further, I doubt the development of humanoid robots would be limited to eldercare. Who wouldn’t want a personal assistant to entertain the kids when one is busy, clean the house, and run mundane errands, after all? It is not far-fetched to suspect that eventually, many people will have humanoid personal assistants.

    Still, as attractive as a personal assistant may sound, anyone who has viewed the films I, Robot or AI knows we would be on shaky ethical terrain. If humanoids are intelligent enough to take care of Grandma or the kids, isn’t it akin to slavery to have them serve us? Indeed, if Robinson is correct that there are situations in which we would accord humanoids moral status, why should we create them in the first place? If you ask me, society is not even managing to fulfill its ethical obligations to the members of our own species, let alone to nonhuman animals.

    I imagine that those who stand to benefit financially or otherwise from intelligent humanoids may be less inclined to agree that they have moral status — here again, parallels with slavery come to mind.

    In sum, these are serious ethical issues, and they will be pressing upon us relatively soon.

    (2) Turning to a different matter, notice that this question asks about humanoid robots — robots that are morphologically similar to humans. But it is worth noting that the ethical issues are not limited to humanoids. What about robots that look nothing like us at all but which exhibit a similar range of emotion and sensation, etc? While we may at first be less inclined to view such a creature as a moral agent than we would an android, there is nothing morally special about looking human. It is what is going on beneath the surface that matters.

    Of course Robinson knows this. His focus is on the nature of the robot’s internal operations, stressing the import of a system’s emotional and sensory similarity to us. In the spirit of this, we might think that insofar as a creature exhibits a range of emotional reactions similar to ours, and exhibits similar responses to sensory experiences, we should consider it a moral agent. While I agree this qualifies one for moral agency, we shouldn’t view such characteristics as a necessary requirement.

    For what about more radical departures from the human — creatures that do not have our characteristic ways of thinking or feeling? Here, transhumanists like James Hughes, Ray Kurzweil and Nick Bostrom have insights to offer. Transhumanism is a philosophical, cultural, and political movement that holds that the human species is now in a comparatively early phase and that its very evolution will be altered by developing technologies, such as ultrasophisticated AI. Future creatures, such as “superintelligent” AI and even enhanced humans will be very unlike their present-day incarnation in both physical and mental respects, and will in fact resemble certain persons depicted in science fiction stories.

    So consider a superintelligence — a creature with the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. Would it qualify as a moral agent according to the above requirement? Its sensory processing may be radically different from ours — for instance, it may be a web-based system, having sensors that span large spatiotemporal regions at a single moment. It may have a radically different sort of embodiment, being a virtual being which can be multiply located at a single time and which can appear as a nanobot swarm or borrow an android body.

    We might look for the behaviors we would expect such a being to engage in across a variety of situations. But behaviors intelligible to a superintelligent being may not be intelligible to us. Just as your cat lacks the concept of a photon and, more generally, misses out on most of the concepts that ground your mental life, so too we may lack the conceptual abilities to make sense of the behavior of superintelligent AI. For if they don’t think or feel like us, why would they act like us? The challenge is that we have reason to believe they are more intelligent than we are, yet they do not think anything like we do. On what grounds will we accord them moral agency?

    I suggest here a rough and ready principle: if creatures having sensory experience are capable of solving problems that our species has as yet been unable to solve, we have reason to believe that they are sophisticated forms of intelligence. And sophisticated intelligences having sensations deserve moral status.

    This is not to suggest that it is safe to build them — for I have not addressed whether we can be confident that they would remain benevolent. But this issue — the issue of what moral principles we might encode in superintelligent AI — is a separate issue from whether they deserve moral status. I leave it for another day.

  • I want to thank Professor Robinson for his thoughtful essay. My question has to do with the difference Robinson sees between the attribution of meaningful speech to a machine and the attribution of sensations to a machine.

    I agree with Robinson that the meaningfulness of verbal behavior depends on whether that behavior fits within a complex network of causes and effects. And as I understand this view, we are therefore justified in attributing meaningful speech to a machine if it engages in an appropriate kind of complex behavior. If C3PO emits sounds that play the same complex roles as meaningful human speech, then C3PO produces and understands meaningful speech.

    By contrast, Robinson thinks that no kind of complex behavior justifies us in attributing sensations to a machine. Robinson observes that a simple machine such as a thermostat that detects changes in the environment does not have sensations. And Robinson concludes that we ought to attribute sensations to a machine only if it exhibits an artificial neural architecture that produces the same kind of activity as human brains.

    But even if a machine didn’t exhibit such an artificial neural architecture, it seems to me that we are justified in attributing sensations to a machine if it engages in a suitable sort of complex behavior. I think it’s natural to hold that sensations are the states of creatures (and perhaps of future machines) in virtue of which they make sensory discriminations amongst perceptible stimuli. My sensations of red are what enable me to discriminate red from the other colors and to react differentially to those colors. So I would be eager for Robinson to say a bit more about this point.

    I agree with Robinson that a thermostat does not register environmental changes in virtue of sensations. I would argue, however, that we do not attribute sensations to a thermostat because such a simple machine detects only a small repertoire of environmental changes and does not differentially react to them in complex ways. Consider, for instance, a simple thermostat that announces temperature changes. Even though its sounds are emitted at appropriate times, this thermostat does not exhibit meaningful speech. Its sounds are not suitably complex and integrated into the thermostat’s behavioral economy.

    Similarly, we conclude that the thermostat detects temperature changes without sensations because of the paucity of its detection behavior. We don’t draw this conclusion because the thermostat lacks an appropriate artificial neural architecture. I wouldn’t think that the fact that a machine lacks an appropriate artificial neural architecture is a reason to deny that it has sensations. If a machine’s discriminatory behaviors were suitably complex and integrated into its behavioral economy, we would, and I think should, attribute sensations to it.

    One might, however, resist attributing sensations to a machine even if it detects a large range of environmental changes. One might think that the states of the machine that enable such discriminations are, or at least could be, wholly nonconscious. But many assume, as Robinson does, that sensations are always conscious. So, again, it seems that a machine’s detection behavior cannot be evidence that it has sensations.

    I myself think there’s substantial evidence for sensations that aren’t conscious, such as cases of subliminal perception, but I understand that this is too fundamental an issue to settle here. If there are such states, then we are justified in attributing sensations to machines with suitably complex discriminatory behaviors.

    But even if sensations are always conscious, most agree that a feature of conscious states is that one is able to report that one is in them. Indeed, it’s a hallmark of common sense and of experimental psychology that if one is in a conscious sensation of red then one can say that one sees red. A conscious sensation is therefore suitably integrated into something’s behavioral economy only if the thing can meaningfully say that it’s in that state.

    Suppose, then, that C3PO were able to detect via its sensors the same huge range of stimuli that we do and react to them as we do. And, moreover, suppose that C3PO were able to meaningfully report that it’s in the states that enable it to make those sensory discriminations. Imagine, for instance, C3PO not only discriminates red things from green things, but also says “I see the red things,” thereby reporting that it’s in the state in virtue of which it makes the discrimination. Even if C3PO lacked an artificial neural architecture similar to ours, I’m not sure I see an independent reason to deny that C3PO has a conscious sensation of red. I’m very interested to hear Robinson’s thoughts on this.

    I’ll close with some questions about what Robinson calls First Generation robots, which he describes as machines endowed with meaningful speech but not sensations. First, I wonder whether Robinson thinks First Generation robots have no mental states at all, or whether he thinks they lack sensations but have other sorts of mental activity. It seems to me that if something can speak meaningfully then it has mental states—in particular, thoughts. Robinson does claim that whether verbal behavior is meaningful depends on whether this behavior fits within a complex network of causes and effects. But most would agree that meaningful speech acts are typically caused by thoughts. If I say it’s raining, it’s typically because I think it’s raining and my thought caused me to say it. If this is right, then First Generation robots are machines that have thoughts but no sensations.

    If First Generation robots are possible, what is their moral status? In his commentary, Andrew Melnyk quotes Robinson as claiming that “[i]f we think [some robot] cannot feel anything, we won’t have a certain kind of qualm about exposing it to danger.” But, as Melnyk notes, it is not clear why this is so. If a First Generation robot has thoughts, why doesn’t that alone qualify it as a thing of moral value, which we shouldn’t expose to danger?

  • Paula Droege

    TO ERR IS HUMAN

    Bill Robinson makes some excellent points in assessing whether robots deserve moral consideration. By keeping in mind that humans are also machines of a sort, Robinson convincingly enumerates the ways robots duplicate humans in their causal interactions.

    The most important way in which robotic machinery approximates human intelligence is its ability to act ‘appropriately’ in its environment. C3PO responds to novel problems with actions that effectively advance its goals. Descartes thought mathematics and language were the best markers of intelligence, but Robinson’s criterion of ‘appropriate behavior’ is evolutionarily sound and clearly demarcates clever beasts from behaviorally rigid animals.

    Still, we need to ask what makes behavior ‘appropriate.’ We could imagine a robot consistently running into every wall in the vicinity, or smashing every cup it came across. This behavior seems completely inappropriate, but this intuition depends on our own standards of what counts as a good goal. If the goal of the robot is destruction, these might be perfectly appropriate actions. Similarly, if C3PO consistently called Luke ‘Hans’, we might reasonably assume that for it, ‘Hans’ means ‘Luke.’ This assumption is not simply a consequence of the Principle of Charity, as Davidson suggested; rather, because of the way C3PO uses the word to pick out Luke, we can see how the word functions in its repertoire of sensation-behavior.

    The fundamental issue is how C3PO and other robots acquire goals and actions appropriate to them. Kapitan mentioned Searle’s objection that rule-based manipulation of symbols does not endow meaning. To expand on that point a bit, the problem is that when computations run by a robot are entirely programmed, robot intentionality is derived rather than intrinsic. One need not accept Searle’s own view that only biological consciousness endows intrinsic intentionality to be compelled by his objection. Second-hand meanings are not genuine; there must be some way for the robot to acquire meaning for itself.

    If the robot is nothing more than a programmed system modified by causal input, then its actions are no more meaningful than the behavior of my robotic vacuum cleaner. Robot action becomes meaningful when it interacts with the environment to determine which actions further its goals and which actions do not. In other words, causation alone is inadequate to endow meaning; the robot must have some way to assess how effectively an action responds to a stimulus in relation to its goals. Only with this sort of ongoing evaluation process could the First Generation robot have the behavioral flexibility exhibited by C3PO.

    This may sound cognitively sophisticated, but it need not be. If the goal is to get the morphine to sick-bay, then C3PO must have some way to determine whether a drug is morphine or not and whether it has arrived at sick-bay rather than the helm. In its response to mistakes – oops, a wrong turn landed me at the helm – C3PO comes to understand the world for itself.

    Likewise, in its response to mistaken goals, a robot comes to understand itself and its own agency. Several commentators have noted the importance of agency to moral responsibility, and I agree. Robinson thinks of Third Generation robots as agents, yet says that they are blameless for what they are. The result is an apparent contradiction: we accept that they are blameless, and we blame them anyway. A resolution to this contradiction lies in the capacity of agents to assess their goals and change them upon reflection. If a goal consistently causes pain to oneself or others, a moral agent would revise that goal. (Presumably. There are, of course, many competing theories of morality, but most take the infliction of pain to be immoral.) Failure to revise the offending goal would be grounds for blame.

    Robinson seems to think of goals as the result of passive causal processes and so the possessor of them cannot be blamed. Robots, he says, “were made in a factory, and the changes that have occurred inside them since their manufacture have depended on how they were constructed to begin with, and what has affected their detectors subsequently.” In addition to these factors, though, agents have the capacity to make changes inside themselves by evaluating their goals in relation to one another and their social and physical environment.

    It’s not clear whether conscious sensation is necessary for agency, although I suspect Rosenthal is right that sensation of some sort is prior to thought, and thought is undoubtedly prior to agency. Moreover, there are reasons to be morally concerned for things that are not agents or even capable of consciousness. In my view, objects such as the earth ought to be candidates for moral concern as well. So the connection between consciousness and morality is more complicated than one might think.

    What is clear to me is that agency is necessary to ascribing to a robot the same moral consideration we ascribe to other human beings. When we have good reason to think robots are capable of assessing their goals in light of the consequences of their actions, we will have good reason to take them to be moral agents.

  • Eric Kraemer

    Bill claims that Third Generation robots have everything needed to ground attitudes toward them similar to those we now have toward humans, and that as a consequence we can now “properly recognize both our humanity and our position in the natural world”. I think he’s right that we may be able to do a better job recognizing our humanity in its important variety and that his discussion helps us to do this; but it’s not clear to me how much better our understanding of our position in the natural world is, and I am not yet sure about the proper attitude to have towards Third Generation robots. Here’s why.

    Bill’s distinction among three different generations of robots is also useful for pointing out instructive differences among humans. Just as we can make sense of three different types of robots, so too can we find important differences between diverse kinds of humans. Some humans (such as the deaf or the blind) lack sensations most other humans have, and some humans have sensations that most other humans lack (such as ‘noses’ and those with perfect pitch or synesthesia). These differences should not tempt us to treat those with more or fewer kinds of sensations as more or less human. Could we imagine a human with no sensations at all? Some philosophers have claimed this to be possible, and Bill’s discussion seems to support this claim. Having dealt with robot sensations, Bill’s major concern is how robots behave.

    There are two importantly different senses of “misbehaving” at play when Bill speaks of robots doing things they shouldn’t do. First there is the notion of working contrary to design function, indicating a malfunction in construction or operation. Second there is the notion of behaving contrary to established social norms. Only on the first sense, not the second, does Bill’s suggestion regarding a misbehaving robot that “we send it back to the factory” make sense. Here the proper analogy is with human illness interfering with proper biological function. But if a robot is misbehaving in the second sense, then, since the robot did not design itself, we should first take corrective measures with respect to the robot’s manufacturers. But should we blame the robots themselves?

    Bill claims that there are two different senses of blame: blaming someone for what they are versus blaming someone for what they do. The first does not apply to robots, but the second does. The moral that Bill draws from this is that the same applies to humans. While I think he’s right that what holds for robots probably also holds for humans, I am not sure whether blame for either humans or robots still makes any sense. But first, here’s a quibble. Some new-wave libertarians claim that humans, as purely material systems, can be “self-forming” in properly constructed complex situations involving quantum indeterminism. If so, then just as Bill has us imagine adding sensations to robots, perhaps we might also imagine adding the right kind of randomizers to robots to endow them with similar powers to form themselves in an appropriately deep sense. (Perhaps, then, a Fourth Generation of robots is on the horizon!)

    But, let’s not get carried away yet. Suppose neither humans nor robots self-form in any convincing sense. If so, the act of blame, as traditionally understood, is out of place. Just as we do not blame people for congenital medical conditions, so too is it odd to blame robots for what they do. We can explain the temptation to do so by noting how humans often forget how certain objects are constructed and rashly attribute to them special powers they lack. But, if no robot self-forms, then Bill’s attribution of robot blame involves something radically revisionary. Why so?

    In addition to Bill’s two senses of blame mentioned above, I think there are also two senses of blaming a human or robot for what they do: one traditional, and Bill’s alternative, which involves radical revisionism. The traditional sense of blame involves the notion of deserving reprobation, of being blame-worthy. If Bill is right, this sense of blame really has been eliminated. Bill’s own use of blame, on the other hand, involves public blaming activity as an integral part of the reinforcement mechanism, working through the robots’ senses of shame and pride, by means of which we effect changes in Third Generation robot behavior. We train and correctively re-train robots in the same way we train and re-train small children or pets. But let’s not call this activity blaming, as though it still had an important connection to blame-worthiness; call it blame-training, or, better, just training. So, I am ultimately not sure what I have learned about humans from Bill’s robots.

    Part of what makes me unsure involves the absence in his discussion of desire. Humans have lots of desires, but First Generation robots do not seem to have any. Second Generation robots seem to have just one desire—to avoid pain, and Third Generation robots seem to have a few additional desires—to maintain a sense of pride and to avoid remorse or discouragement. Some philosophers have argued that certain special desires are importantly involved in attributions of responsibility. This is a further point of contention to be worked out. In any event, before I can be confident that Third Generation robots have everything needed to ground attitudes similar to those we now have toward humans, I would need to learn more about their desires. If they are not included somehow in the ‘inner processors’, then perhaps this would require yet another generation of robots to accomplish!

  • Cara Spencer

    What I like about Bill’s essay is that it focuses our attention on a problem that is both immediate and very difficult to answer: What would it take to make a robot count as humanoid, that is, enough like a person to justify us in treating it as one? What specific attributes would that robot have to have? Bill’s answer is a brief sketch of a broad and ambitious program of spelling out these attributes in a way that makes it conceivable that robots could actually have them.

    Bill argues that if a robot is to count as sufficiently like a person to justify us in treating it like one, it should (1) understand language, (2) have a capacity for pain and other sensations, and (3) have at least some of our reactive attitudes, such as resentment, pride, and discouragement. If a robot could have all three of these traits, Bill says that we would in fact treat it like a human, and that it would seem “quite natural and proper” to do so.

    For pretty ordinary and familiar reasons, I have my doubts about whether a robot could have any of (1)-(3), but I think it’s worthwhile to sidestep that question and consider whether Bill is right to say that if a robot had those three traits, we would treat it like a human and it would seem natural and proper to do so.

    First, I think we should distinguish two questions:

    A. Would we in fact treat Bill’s humanoid robots just like other humans?
    B. Should we treat them just like other humans?

    Bill seems to think that the answer to the first question is “yes,” and that very fact is supposed to suggest that we would be justified in doing so, and that we should in fact treat them that way.

    I am less sanguine than Bill about what we would do when faced with a humanoid robot. Even if, in some circumstances, we would interact with these robots in the same ways we interact with other people, there would also be some likely differences. For instance, these robots would be produced for profit, and they would be bought and sold and perhaps discarded when the old model can no longer run the new operating system. People have of course treated other people in some of these ways (consider chattel slavery), but that is hardly consonant with treating them as human beings. Furthermore, animals quite obviously feel pain, yet our ordinary and pervasive treatment of animals differs markedly from what we would regard as justified ways of treating fellow human beings. It may be that we are justified in treating animals differently from people because animals lack traits (1) and (3). But if we want to know how we would treat humanoid robots who share our core mental capacities, how we in fact treat animals that we know to have at least some of those capacities is certainly apposite. I don’t know whether the question here is a philosophical one—it’s more of an issue about predicting what we would do in a novel situation. That said, it is less obvious to me than it seems to be to Bill that we really know how we would treat his hypothetical humanoid robots.

    Perhaps Bill is ultimately more interested in the second question: if robots had traits (1)-(3), what if anything could justify us in treating them differently from other human beings? I think this question is more interesting, but I don’t think we can rely so heavily on predictions about how we would treat humanoid robots to answer it.

  • This is a response to Thomas Kapitan’s comment of 6/12.

    You’ve put my feet to the fire on two issues. The first concerns the Chinese Room argument, the second concerns requirements for a responsible robot. I’ll try to respond to both.

    I can’t give a full justification here for my line on Searle, but I think I can make it clear, and invite anyone to go back to Searle’s paper and see if what I say isn’t so. (There’s more about this in my 1992 book, Computers, Minds and Robots.)

    After the Systems Reply, the original Searle has turned into a sort of mnemonic monster — let’s call him “Mnonster” for short — who writes good replies to Chinese questions without having to consult anything outside himself. Now, on the face of it, that gives some reason to think that Mnonster understands his words. So, Searle needs to give us a reason why Mnonster does *not* understand the questions, or the words he provides in his answers.

    And Searle gives one. Namely, while Mnonster has excellent connections between words and words (words in the questions to words in the answers), he has no connections between words and the world. Perceptions of objects don’t suggest Chinese words, and Chinese words don’t suggest actions. If the Chinese outside write “We’ve been at this a while, raise your left hand if you’d like a hamburger”, Mnonster will have no reason to raise his left hand, even if he’s ravenously hungry and loves hamburgers.

    So, Mnonster doesn’t understand words. So far so good. Searle then considers another reply, and eventually returns to the Robot Reply. He doesn’t say much in response to the Robot Reply; he writes as if he’s mostly already given the answer to it.

    But in fact, the Robot Reply removes the reason Searle gave for denying understanding to Mnonster. (I hope it’s clear that this name is mine, for brevity; Searle does not give the post-Systems-Reply entity a separate name.) For the robots he imagines *do* have word to world connections (in both input and output directions). So, as far as any reason Searle has given, *robots* can be understanders, even though mere computers cannot be.

    In short, what Searle should have said was that *computers* cannot understand the words they may manipulate (which is what he started out to show, against a claim made by Schank, if I recall rightly). He should have stopped while he was ahead: he was not entitled to extend his claim to robots.

    Your second part makes some very insightful points about what’s required for robot responsibility. For the most part, I think that if there could be Third Generation robots with abilities I attributed to them explicitly in the article, there could also be robots that had the further attributes you mention. But there’s an exception; and in some cases, I’m not sure about whether these further attributes are truly necessary for having responsibility.

    The attributes in question are these: (1) a sense of right and wrong; (2) a capacity to feel obliged to do or refrain from some particular action; (3) intentional behavior; (4) acquiring intentions; (5) character-forming intentions; (6) an antecedent sense of options; and (7) a feeling of uncertainty about what one will do.

    As to (1), a responsible robot will, of course, have to know the difference between right and wrong. The rest of a “sense” of right and wrong is, I think, the same as (2). I think (2) would be possessed by anything that can experience anxiety at the prospect of being found out not to have fulfilled an obligation or to have done something forbidden. I think that’s already part of being a Third Generation robot.

    As to (3), yes, certainly, a responsible robot must be able to act intentionally rather than accidentally or unthinkingly. However, I’m suspicious of the move from that to (4), a need to acquire intentions. I’m inclined (for reasons I really can’t go into now, but which are parallel to those offered about “occurrent beliefs” in Your Brain and You) to think that intentions are fishy, and come in only through an inadequate analysis of acting intentionally.

    I’m not sure about (5). As I indicated in a previous response, I think it’s rare that we explicitly deliberate about our character, although many things we do have effects on it. So, it’s not clear to me that if there were no such deliberations, that would remove a robot from the list of responsible entities. Still, if a relation to future ability to decide for the good were sufficiently evident, and within the cognitive capacities of the subject concerned, it might well be that taking that relation into account would be required for responsibility.

    (6) An antecedent sense of options seems to be a consequence of intelligence; e.g., one ordinarily knows that one doesn’t know of anything that rules out going to see a movie or that rules out not going to see it. But (7) is harder, for one might imagine a superintelligent robot to arrive at decisions without ever going through a sense of uncertainty. It seems, however, that in some hard cases, the attractiveness of an option we’ve ruled out reasserts itself when some ‘cognitive distance’ intervenes — i.e., when we have not just recently been dwelling on the drawbacks of some otherwise attractive option, the intensity of the downside may be forgotten, and we may have to go through our deliberation all over again. I’m not sure that this kind of cognitive distance effect is actually necessary for responsibility, but if it is, I think it might be able to be incorporated into a robot. That ability would, I agree, be additional to what’s implied by what I said in the article.

    These remarks are evidently the beginning rather than the end of reflections on a very stimulating set of comments.

  • This is a response to Susan Schneider’s comments of 6/13.

    There are indeed impressive recent developments in AI, and in subtlety of movement, especially facial expression. That’s done with many small motors — so many that, on a recent tour of his lab, Ishiguro himself couldn’t remember offhand how many were in his Geminoid. (It’s somewhere in the sixties.) A Telenoid of about toddler size brings elderly women to tears as they hug it and it coos and hugs back a little — even though it’s operated and voiced by an offstage person viewing a TV monitor. (BTW, the operator gets to feeling the Telenoid as part of her own body after a short while. One member of the tour turned the Telenoid upside down; the operator said that had made her feel slightly nauseous.)

    So, yes, artificial intelligence and our responsiveness to humanoid expression are important and increasingly realizable matters. It’s further true that intelligence does not require interests compatible with ours, nor does it require a form similar to ours. (Octopi are stunning examples.)

    However, there is a key worry of mine that runs through these comments. Namely, none of the projects Susan mentions is remotely in the business of trying to decipher and then build in the causes of sensations. Building in facial expressions that will elicit an emotional response from us is not even a beginning on the project of producing a robot that enjoys pleasant sensations or suffers pains. If we respond to any of these robots with solicitude, that will be a response based on an illusion.

    In short, I don’t want the interest and truth in Susan’s remarks to obscure one of the main points of the article: Entry into the moral sphere where we have genuine obligations to robots depends on building robots that can actually suffer. I am not saying this can’t be done — on the contrary, if we can figure out the causes of sensations, we may be able to build those causes into robots. But none of the present-day research programs are even trying to do that.

  • This is a response to Jacob Berger’s comments of 6/14.

    The first part of these comments is about sensations, and the issue is whether functionalism about sensations is correct. I can’t, of course, give a full accounting here of why I’m not a functionalist about sensations — one can get a lot more from my Understanding Phenomenal Consciousness. But I think I can indicate a main line of approach here.

    There is a possible *empirical* constraint. Namely, it could turn out that the only way to make a set of detectors that can make all the discriminations that we can make, in a time frame roughly like ours, would be to build a brain. In that case, it would be an empirical fact that anything that could (empirically could, not logically could) satisfy our functional description would have sensations. Of course, in that case, *robots* with sensations would be empirically impossible.

    A second possible *empirical* fact would be that there are only two ways of making a set of detectors that can make all our discriminations roughly in our time frame — one, a brain, and the other a device that instantiates the same patterns that our brains instantiate when we have sensations. Such a second device might be a robot, and would have a good claim to be a robot with sensations — it would have what a future neuroscience may tell us are the causes of our sensations.

    Now, suppose there is a third empirical possibility — a device that makes all our discriminations in roughly our time frame, but does not instantiate patterns that we have good reason to think are the causes of our sensations. That third device would *at least* have less of a claim to be a robot with sensations than the second device.

    In fact, I see no plausibility at all in the suggestion that the third device would have sensations. Where there are sensations, there are qualities — bodily feelings such as pain or nausea, colors, tastes, and so on, and emotional feelings like fear or anger. These feelings are not behaviors, and pointing to complexity in behavior does nothing to explain how they get into the world *unless* one assumes that the “third possibility” isn’t really possible (i.e., one of the first two possibilities is the case).

    I don’t agree that “my sensations of red are what enable me to discriminate red from other colors . . . .” As Fred Adams pointed out, I’m an epiphenomenalist. But that background aside, there must be a neural story that explains how I can react (verbally and nonverbally) to red as opposed to green, blue, etc. in terms of neural transactions and their eventual effects on muscles. If you take away the neural events and replace them with some other apparatus for detection, you also take away the reason for thinking that there are any causes of sensations, and so, the reason for thinking that there are sensations.

    One further point. Yes, normally if we have a sensation we can report it. It does not follow that if we can report on discriminated inputs, we have sensations.

    I doubt that this will convince Jacob, but I think we have to leave matters here for the present.

    The last part of Jacob’s comments is concerned with thoughts. He says “most would agree that meaningful speech acts are typically caused by thoughts.” He’s probably right, but I think that view of speech and thoughts is hopelessly wrong. I’m just going to have to refer to Your Brain and You for why.

    C3PO can deceive the agents of the empire. It doesn’t have to deceive itself to do that; it believes, e.g., that Luke is at one location while saying he’s at another. Believing something is being in a mental state, so yes, First Generation robots have mental states. But it’s a bad analysis to move from that to the view that mental states are causes of the behavior that is symptomatic of having them.

    Finally, in a nutshell: The believers we are familiar with — i.e., ourselves — also have sensations. They can suffer, so they are morally considerable. First Generation robots are believers, but they can’t feel a thing. They expect, but they do not fear, they can be damaged but they cannot suffer pain, fear poverty, or grieve. That’s why they are not morally considerable.

  • This is a response to Paula Droege’s comments on 6/14.

    Early on, these comments bring out the important point that appropriateness of behavior depends on the goals aimed at; almost any behavior could be appropriate to some goal.

    This observation leads to the question of how goals are acquired, and in the course of this discussion, Paula says that “If the robot is nothing more than a programmed system modified by causal inputs, then its actions are no more meaningful than the behavior of my robotic vacuum cleaner.”

    I don’t agree with this. Of course, if a robot’s behavior is *canned* — i.e., every eventuality has been anticipated, and a movement provided for each eventuality that comes up, then it is no more interesting than a vacuum cleaner. But to be “entirely programmed” is not the same thing as being canned. For programs generally don’t deal with specific combinations or sequences of inputs and provide outcomes for each one; instead, they provide rules for combining input elements, and reiteratively applying such rules until an output and a stop command are reached. If a program of this usual sort were good enough to produce flexible behavior (i.e., appropriate over a wide range of unexpected circumstances), it really would be a lot more interesting than a vacuum cleaner.

    Paula continues, “Robot action becomes meaningful when it interacts with the environment to determine which actions further its goals and which actions do not”. I’d agree, but note that this is what a program of the right kind will enable a robot to do. Paula further says “the robot must have some way to assess how effectively an action responds to a stimulus in relation to its goals.” Again, this seems right, but it’s in no way incompatible with robots’ working by running programs. Of course, the programs will have to be pretty interesting.

    When the issue turns to agency, I’m held to be in a contradiction, holding robots to be blameless, but being willing to blame them anyway. Paula suggests a resolution, but I deny that there’s a contradiction to be resolved. Evidently, I wasn’t clear enough: I hold that robots (like us) are blameless for *what they are* (and similarly, in our case, for who we are) when they (or we) act; but they are blameable *for the action*, if it’s an immoral one. There is no contradiction in that.

    We can now return to the question of the origins of goals. “Robinson seems to think of goals as the result of passive causal processes and so the possessor of them cannot be blamed.” Yes, that’s what I think. Even in the human case. We are born with some goals (e.g., getting fed, avoiding freezing and pain sources, etc.). We acquire others as we go along; but these begin only by our having some goals to begin with, and seeing the relation of new goal to old one(s). This last depends on cognitive capacities we cannot directly influence and on whether inputs beyond our control were fortunate or unfortunate. And goals develop by such causes as repetition (generating habits) and changes in our bodies.

    If we make a change inside ourselves by evaluating current goals, we have to remember that evaluation presupposes some goals fixed during the evaluation, and cognitive capacities and environmental inputs that we did not arrange for. Even the decision to undertake an evaluation of our goals is dependent on who we are at that moment; and we are not in a position to make a decision about that.

    Of course, I am not disagreeing that people or robots can ask whether trying to achieve a certain aim is or is not conducive to satisfying one or more other aims. So, I think Third Generation robots, if they could be built, would meet Paula’s criterion for agency.

    That responds to the focal points. On sensations, I’ll refer to my response to David Rosenthal. On moral concern for the Earth, I think Paula and I disagree. What’s wrong with trashing the Earth, I think, is only that it’s so stupid from our own point of view, and will cause suffering for Earth’s future inhabitants. But I won’t go into defending this view here.

  • This conversation continues in our Facebook group, On the Human, where Bill Robinson has posted additional responses to critics.