Comments on: Challenges for a Humanoid Robot
http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/
a project of the National Humanities Center

By: Jason King http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7826 Wed, 22 Jun 2011 13:35:22 +0000

This conversation continues in our Facebook group, where Bill Robinson has posted additional responses to critics. Please join the discussion there by logging in to your Facebook account and proceeding to our group: On the Human.

By: Bill Robinson http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7763 Thu, 16 Jun 2011 13:31:48 +0000

This is a response to Paula Droege’s comments on 6/14.

Early on, these comments bring out the important point that appropriateness of behavior depends on the goals aimed at; almost any behavior could be appropriate to some goal.

This observation leads to the question of how goals are acquired, and in the course of this discussion, Paula says that “If the robot is nothing more than a programmed system modified by causal inputs, then its actions are no more meaningful than the behavior of my robotic vacuum cleaner.”

I don’t agree with this. Of course, if a robot’s behavior is *canned* (i.e., every eventuality has been anticipated, and a movement provided for each one that comes up), then it is no more interesting than a vacuum cleaner. But to be “entirely programmed” is not the same thing as being canned. Programs generally don’t deal with specific combinations or sequences of inputs and provide an outcome for each one; instead, they provide rules for combining input elements, and those rules are applied reiteratively until an output and a stop command are reached. If a program of this usual sort were good enough to produce flexible behavior (i.e., behavior appropriate over a wide range of unexpected circumstances), it really would be a lot more interesting than a vacuum cleaner.
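To make the contrast concrete, here is a minimal sketch of the difference between a canned responder, which stores a fixed output for every anticipated eventuality, and a rule-based one, which applies general rules to whatever input elements arrive. The situations, rules, and outputs are invented for illustration and are not drawn from any actual robot’s software.

```python
# A "canned" system: every eventuality anticipated in advance.
CANNED_RESPONSES = {
    ("obstacle", "left"): "turn right",
    ("obstacle", "right"): "turn left",
    ("dirt", "ahead"): "vacuum",
}

def canned_behavior(situation):
    # Fails on anything its designers did not list.
    return CANNED_RESPONSES.get(situation, "halt")

# A rule-based system: general rules applied to input elements, so novel
# combinations of inputs can still yield appropriate behavior.
def rule_based_behavior(percepts):
    plan = []
    for kind, direction in percepts:       # combine input elements by rule...
        if kind == "obstacle":
            plan.append(f"steer away from {direction}")
        elif kind == "goal":
            plan.append(f"move toward {direction}")
        else:
            plan.append(f"inspect the {kind} to the {direction}")
    return plan                            # ...until an output is reached

print(canned_behavior(("obstacle", "behind")))   # unanticipated case: "halt"
print(rule_based_behavior([("obstacle", "left"), ("goal", "ahead"), ("puddle", "right")]))
```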

Paula continues, “Robot action becomes meaningful when it interacts with the environment to determine which actions further its goals and which actions do not”. I’d agree, but note that this is what a program of the right kind will enable a robot to do. Paula further says “the robot must have some way to assess how effectively an action responds to a stimulus in relation to its goals.” Again, this seems right, but it’s in no way incompatible with robots’ working by running programs. Of course, the programs will have to be pretty interesting.

When the issue turns to agency, I’m held to be in a contradiction: holding robots to be blameless, but being willing to blame them anyway. Paula suggests a resolution, but I deny that there’s a contradiction to be resolved. Evidently, I wasn’t clear enough: I hold that robots (like us) are blameless for *what they are* (and similarly, in our case, for who we are) when they (or we) act; but they are blameable *for the action*, if it’s an immoral one. There is no contradiction in that.

We can now return to the question of the origins of goals. “Robinson seems to think of goals as the result of passive causal processes and so the possessor of them cannot be blamed.” Yes, that’s what I think. Even in the human case. We are born with some goals (e.g., getting fed, avoiding freezing and pain sources, etc.). We acquire others as we go along; but that acquisition begins only with our having some goals already, and with seeing the relation of a new goal to old one(s). This last depends on cognitive capacities we cannot directly influence and on whether inputs beyond our control were fortunate or unfortunate. And goals develop by such causes as repetition (generating habits) and changes in our bodies.

If we make a change inside ourselves by evaluating current goals, we have to remember that evaluation presupposes some goals fixed during the evaluation, and cognitive capacities and environmental inputs that we did not arrange for. Even the decision to undertake an evaluation of our goals is dependent on who we are at that moment; and we are not in a position to make a decision about that.

Of course, I am not disagreeing that people or robots can ask whether trying to achieve a certain aim is or is not conducive to satisfying one or more other aims. So, I think Third Generation robots, if they could be built, would meet Paula’s criterion for agency.

That responds to the focal points. On sensations, I’ll refer to my response to David Rosenthal. On moral concern for the Earth, I think Paula and I disagree. What’s wrong with trashing the Earth, I think, is only that doing so is so stupid from our own point of view, and will cause suffering for Earth’s future inhabitants. But I won’t go into defending this view here.

By: Bill Robinson http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7758 Thu, 16 Jun 2011 03:34:17 +0000

This is a response to Jacob Berger’s comments of 6/14.

The first part of these comments is about sensations, and the issue is whether functionalism about sensations is correct. I can’t, of course, give a full accounting here of why I’m not a functionalist about sensations — one can get a lot more from my Understanding Phenomenal Consciousness. But I think I can indicate a main line of approach here.

There is a possible *empirical* constraint. Namely, it could turn out that the only way to make a set of detectors that can make all the discriminations that we can make, in a time frame roughly like ours, would be to build a brain. In that case, it would be an empirical fact that anything that could (empirically could, not logically could) satisfy our functional description would have sensations. Of course, in that case, *robots* with sensations would be empirically impossible.

A second possible *empirical* fact would be that there are only two ways of making a set of detectors that can make all our discriminations roughly in our time frame — one, a brain, and the other a device that instantiates the same patterns that our brains instantiate when we have sensations. Such a second device might be a robot, and would have a good claim to be a robot with sensations — it would have what a future neuroscience may tell us are the causes of our sensations.

Now, suppose there is a third empirical possibility — a device that makes all our discriminations in roughly our time frame, but does not instantiate patterns that we have good reason to think are the causes of our sensations. That third device would *at least* have less of a claim to be a robot with sensations than the second device.

In fact, I see no plausibility at all in the suggestion that the third device would have sensations. Where there are sensations, there are qualities — bodily feelings such as pain or nausea, colors, tastes, and so on, and emotional feelings like fear or anger. These feelings are not behaviors, and pointing to complexity in behavior does nothing to explain how they get into the world *unless* one assumes that the “third possibility” isn’t really possible (i.e., one of the first two possibilities is the case).

I don’t agree that “my sensations of red are what enable me to discriminate red from other colors . . . .” As Fred Adams pointed out, I’m an epiphenomenalist. But that background aside, there must be a neural story that explains how I can react (verbally and nonverbally) to red as opposed to green, blue, etc. in terms of neural transactions and their eventual effects on muscles. If you take away the neural events and replace them with some other apparatus for detection, you also take away the reason for thinking that there are any causes of sensations, and so, the reason for thinking that there are sensations.

One further point. Yes, normally if we have a sensation we can report it. It does not follow that if we can report on discriminated inputs, we have sensations.

I doubt that this will convince Jacob, but I think we have to leave matters here for the present.

The last part of Jacob’s comments is concerned with thoughts. He says “most would agree that meaningful speech acts are typically caused by thoughts.” He’s probably right that most would agree, but I think that view of speech and thoughts is hopelessly wrong. I’m just going to have to refer to Your Brain and You for why.

C3PO can deceive the agents of the empire. It doesn’t have to deceive itself to do that; it believes, e.g., that Luke is at one location while saying he’s at another. Believing something is being in a mental state, so yes, First Generation robots have mental states. But it’s a bad analysis to move from that to the view that mental states are causes of the behavior that is symptomatic of having them.

Finally, in a nutshell: The believers we are familiar with — i.e., ourselves — also have sensations. They can suffer, so they are morally considerable. First Generation robots are believers, but they can’t feel a thing. They expect, but they do not fear, they can be damaged but they cannot suffer pain, fear poverty, or grieve. That’s why they are not morally considerable.

By: Bill Robinson http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7755 Thu, 16 Jun 2011 02:32:26 +0000

This is a response to Susan Schneider’s comments of 6/13.

There are indeed impressive recent developments in AI, and in subtlety of movement, especially facial expression. That’s done with many small motors — so many that, on a recent tour of his lab, Ishiguro himself couldn’t remember offhand how many were in his Geminoid. (It’s somewhere in the sixties.) A Telenoid of about toddler size brings elderly women to tears as they hug it and it coos and hugs back a little — even though it’s operated and voiced by an offstage person viewing a TV monitor. (BTW, the operator gets to feeling the Telenoid as part of her own body after a short while. One member of the tour turned the Telenoid upside down; the operator said that had made her feel slightly nauseous.)

So, yes, artificial intelligence and our responsiveness to humanoid expression are important and increasingly realizable matters. It’s further true that intelligence does not require interests compatible with ours, nor does it require a form similar to ours. (Octopi are stunning examples.)

However, there is a key point that worries me and that runs through these comments. Namely, none of the projects Susan mentions are remotely in the business of trying to decipher and then build in the causes of sensations. Building in facial expressions that will elicit an emotional response from us is not even a beginning on the project of producing a robot that enjoys pleasant sensations or suffers pains. If we respond to any of these robots with solicitude, that will be a response based on an illusion.

In short, I don’t want the interest and truth in Susan’s remarks to obscure one of the main points of the article: Entry into the moral sphere where we have genuine obligations to robots depends on building robots that can actually suffer. I am not saying this can’t be done — on the contrary, if we can figure out the causes of sensations, we may be able to build those causes into robots. But none of the present-day research programs are even trying to do that.

By: Bill Robinson http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7753 Thu, 16 Jun 2011 01:36:57 +0000

This is a response to Thomas Kapitan’s comment of 6/12.

You’ve put my feet to the fire on two issues. The first concerns the Chinese Room argument, the second concerns requirements for a responsible robot. I’ll try to respond to both.

I can’t give a full justification here for my line on Searle, but I think I can make it clear, and invite anyone to go back to Searle’s paper and see if what I say isn’t so. (There’s more about this in my 1992 book, Computers, Minds and Robots.)

After the Systems Reply, the original Searle has turned into a sort of mnemonic monster — let’s call him “Mnonster” for short — who writes good replies to Chinese questions without having to consult anything outside himself. Now, on the face of it, that gives some reason to think that Mnonster understands his words. So, Searle needs to give us a reason why Mnonster does *not* understand the questions, or the words he provides in his answers.

And Searle gives one. Namely, while Mnonster has excellent connections between words and words (words in the questions to words in the answers), he has no connections between words and the world. Perceptions of objects don’t suggest Chinese words, and Chinese words don’t suggest actions. If the Chinese speakers outside write “We’ve been at this a while, raise your left hand if you’d like a hamburger”, Mnonster will have no reason to raise his left hand, even if he’s ravenously hungry and loves hamburgers.

So, Mnonster doesn’t understand words. So far so good. Searle then considers another reply, and eventually returns to the Robot Reply. He doesn’t say much in response to the Robot Reply; he writes as if he’s mostly already given the answer to it.

But in fact, the Robot Reply removes the reason Searle gave for denying understanding to Mnonster. (I hope it’s clear that this name is mine, for brevity; Searle does not give the post-Systems-Reply entity a separate name.) For the robots he imagines *do* have word to world connections (in both input and output directions). So, as far as any reason Searle has given, *robots* can be understanders, even though mere computers cannot be.

In short, what Searle should have said was that *computers* cannot understand the words they may manipulate (which is what he started out to show, against a claim made by Schank, if I recall rightly). He should have stopped while he was ahead: he was not entitled to extend his claim to robots.

Your second part makes some very insightful points about what’s required for robot responsibility. For the most part, I think that if there could be Third Generation robots with abilities I attributed to them explicitly in the article, there could also be robots that had the further attributes you mention. But there’s an exception; and in some cases, I’m not sure about whether these further attributes are truly necessary for having responsibility.

The attributes in question are these: (1) a sense of right and wrong; (2) a capacity to feel obliged to do or refrain from some particular action; (3) intentional behavior; (4) acquiring intentions; (5) character-forming intentions; (6) an antecedent sense of options; and (7) a feeling of uncertainty about what one will do.

As to (1), a responsible robot will, of course, have to know the difference between right and wrong. The rest of a “sense” of right and wrong is, I think, the same as (2). I think (2) would be possessed by anything that can experience anxiety at the prospect of being found out not to have fulfilled an obligation or to have done something forbidden. I think that’s already part of being a Third Generation robot.

As to (3), yes, certainly, a responsible robot must be able to act intentionally rather than accidentally or unthinkingly. However, I’m suspicious of the move from that to (4), a need to acquire intentions. I’m inclined (for reasons I really can’t go into now, but which are parallel to those offered about “occurrent beliefs” in Your Brain and You) to think that intentions are fishy, and come in only through an inadequate analysis of acting intentionally.

I’m not sure about (5). As I indicated in a previous response, I think it’s rare that we explicitly deliberate about our character, although many things we do have effects on it. So, it’s not clear to me that if there were no such deliberations, that would remove a robot from the list of responsible entities. Still, if a relation to future ability to decide for the good were sufficiently evident, and within the cognitive capacities of the subject concerned, it might well be that taking that relation into account would be required for responsibility.

(6) An antecedent sense of options seems to be a consequence of intelligence; e.g., one ordinarily knows that one doesn’t know of anything that rules out going to see a movie or that rules out not going to see it. But (7) is harder, for one might imagine a superintelligent robot to arrive at decisions without ever going through a sense of uncertainty. It seems, however, that in some hard cases, the attractiveness of an option we’ve ruled out reasserts itself when some ‘cognitive distance’ intervenes — i.e., when we have not just recently been dwelling on the drawbacks of some otherwise attractive option, the intensity of the downside may be forgotten, and we may have to go through our deliberation all over again. I’m not sure that this kind of cognitive distance effect is actually necessary for responsibility, but if it is, I think it could be incorporated into a robot. That ability would, I agree, be additional to what’s implied by what I said in the article.

These remarks are evidently the beginning rather than the end of reflections on a very stimulating set of comments.

By: Cara Spencer http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7746 Wed, 15 Jun 2011 04:30:27 +0000

What I like about Bill’s essay is that it focuses our attention on a problem that is both immediate and very difficult to answer: What would it take to make a robot count as humanoid, that is, enough like a person to justify us in treating it as one? What specific attributes would that robot have to have? Bill’s answer is a brief sketch of a broad and ambitious program of spelling out these attributes in a way that makes it conceivable that robots could actually have them.

Bill argues that if a robot is to count as sufficiently like a person to justify us in treating it like one, it should (1) understand language, (2) have a capacity for pain and other sensations, and (3) have at least some of our reactive attitudes, such as resentment, pride, and discouragement. If a robot could have all three of these traits, Bill says that we would in fact treat it like a human, and that it would seem “quite natural and proper” to do so.

For pretty ordinary and familiar reasons, I have my doubts about whether a robot could have any of (1)-(3), but I think it’s worthwhile to sidestep that question and consider whether Bill is right to say that if a robot had those three traits, we would treat it like a human and it would seem natural and proper to do so.

First, I think we should distinguish two questions:

A. Would we in fact treat Bill’s humanoid robots just like other humans?
B. Should we treat them just like other humans?

Bill seems to think that the answer to the first question is “yes,” and that very fact is supposed to suggest that we would be justified in doing so, and that we should in fact treat them that way.

I am less sanguine than Bill about what we would do when faced with a humanoid robot. Even if, in some circumstances, we would interact with these robots in the same ways we interact with other people, there would also be some likely differences. For instance, these robots would be produced for profit, and they would be bought and sold and perhaps discarded when the old model can no longer run the new operating system. People have of course treated other people in some of these ways (consider chattel slavery), but that is hardly consonant with treating them as human beings. Furthermore, animals quite obviously feel pain, yet our ordinary and pervasive treatment of animals differs markedly from what we would regard as justified ways of treating fellow human beings. It may be that we are justified in treating animals differently from people because animals lack traits (1) and (3). But if we want to know how we would treat humanoid robots who share our core mental capacities, how we in fact treat animals that we know to have at least some of those capacities is certainly apposite. I don’t know whether the question here is a philosophical one—it’s more of an issue about predicting what we would do in a novel situation. That said, it is less obvious to me than it seems to be to Bill that we really know how we would treat his hypothetical humanoid robots.

Perhaps Bill is ultimately more interested in the second question: if robots had traits (1)-(3), what if anything could justify us in treating them differently from other human beings? I think this question is more interesting, but I don’t think we can rely so heavily on predictions about how we would treat humanoid robots to answer it.

By: Eric Kraemer http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7744 Tue, 14 Jun 2011 22:56:31 +0000

Bill claims that Third Generation robots have everything needed to ground attitudes toward them similar to those we now have toward humans, and that as a consequence we can now “properly recognize both our humanity and our position in the natural world”. I think he’s right that we may be able to do a better job recognizing our humanity in its important variety and that his discussion helps us to do this; but it’s not clear to me how much better our understanding of our position in the natural world is, and I am not yet sure about the proper attitude to have towards Third Generation robots. Here’s why.

Bill’s distinction of three different generations of robots is useful for pointing out helpful differences between humans as well. Just as we can make sense of three different types of robots, so too can we find important differences between diverse kinds of humans. Some humans (such as the deaf or the blind) lack sensations most other humans have, and some humans have sensations that most other humans lack (such as professional ‘noses’ and those with perfect pitch or synesthesia). These differences should not tempt us to treat those with more or fewer kinds of sensations as more or less human. Could we imagine a human with no sensations at all? Some philosophers have claimed this to be possible, and Bill’s discussion seems to support this claim. Having dealt with robot sensations, Bill turns to his major concern: how robots behave.

There are two importantly different senses of “misbehaving” at play when Bill speaks of robots doing things they shouldn’t do. First there is the notion of working contrary to design function, indicating a malfunction in construction or operation. Second there is the notion of behaving contrary to established social norms. Only on the first, not the second, does Bill’s suggestion regarding a misbehaving robot that “we send it back to the factory” make sense. Here the proper analogy is with human illness interfering with proper biological function. But if a robot is misbehaving in the second sense, then, since the robot did not design itself, we should first take corrective measures with respect to the robot’s manufacturers. But should we blame the robots?

Bill claims that there are two different senses of blame, blaming someone for what they are versus blaming someone for what they do. The first does not apply to robots but the second does. The moral that Bill draws from this is that the same applies to humans. While I think he’s right that what holds for robots probably also holds for humans, I am not sure whether blame for either humans or robots still makes any sense. But first, here’s a quibble. Some new-wave libertarians claim that humans as purely material systems can be “self-forming” in properly constructed complex situations involving quantum indeterminism. If so, then just as Bill has us imagine adding sensations to robots, perhaps we might also imagine adding the right kind of randomizers to robots to endow them with similar powers to form themselves in an appropriately deep sense. (Perhaps, then, a Fourth Generation of robots is on the horizon!)

But let’s not get carried away yet. Suppose neither humans nor robots self-form in any convincing sense. If so, the act of blame, as traditionally understood, is out of place. Just as we do not blame people for congenital medical conditions, so too is it odd to blame robots for what they do. We explain the temptation to do so by noting how humans often forget how certain objects are constructed and rashly attribute to items special powers that they lack. But if no robot self-forms, then Bill’s attribution of robot blame involves something radically revisionary. Why so?

In addition to Bill’s two senses of blame mentioned above, I think there are also two senses of blaming a human or robot for what they do: one traditional, and Bill’s alternative, which involves radical revisionism. The traditional sense of blame involves the notion of deserving disapprobation, being blame-worthy. If Bill is right, this sense of blame really has been eliminated. Bill’s own use of blame, on the other hand, involves public blaming activity as an integral part of the reinforcement mechanism, working through robots’ senses of shame and pride, by means of which we effect changes in Third Generation robot behavior. We train and correctively re-train robots in the same way we train and re-train small children or pets. But let’s not call this activity blaming, as though it still had an important connection to blame-worthiness, but blame-training, or, better, just training. So, I am ultimately not sure what I have learned about humans from Bill’s robots.

Part of what makes me unsure involves the absence of desire from his discussion. Humans have lots of desires, but First Generation robots do not seem to have any. Second Generation robots seem to have just one desire—to avoid pain, and Third Generation robots seem to have a few additional desires—to maintain a sense of pride and to avoid remorse or discouragement. Some philosophers have argued that certain special desires are importantly involved in attributions of responsibility. This is a further point of contention to be worked out. In any event, before I can be confident that Third Generation robots have everything needed to ground attitudes similar to those we now have toward humans, I would need to learn more about their desires. If desires are not included somehow in the ‘inner processors’, then perhaps this would require yet another generation of robots to accomplish!

By: Paula Droege http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7743 Tue, 14 Jun 2011 18:01:24 +0000

TO ERR IS HUMAN

Bill Robinson makes some excellent points in assessing whether robots deserve moral consideration. By keeping in mind that humans are also machines of a sort, Robinson convincingly enumerates the ways robots duplicate humans in their causal interactions.

The most important way in which robotic machinery approximates human intelligence is its ability to act ‘appropriately’ in its environment. C3PO responds to novel problems with actions that effectively advance its goals. Descartes thought mathematics and language were the best markers of intelligence, but Robinson’s criterion of ‘appropriate behavior’ is evolutionarily sound and clearly demarcates clever beasts from behaviorally rigid animals.

Still, we need to ask what makes behavior ‘appropriate.’ We could imagine a robot consistently running into every wall in the vicinity, or smashing every cup it came across. This behavior seems completely inappropriate, but this intuition depends on our own standards of what counts as a good goal. If the goal of the robot is destruction, these might be perfectly appropriate actions. Similarly, if C3PO consistently called Luke ‘Hans’, we might reasonably assume that for it, ‘Hans’ means ‘Luke.’ This assumption is not simply a consequence of the Principle of Charity, as Davidson suggested; rather, because of the way C3PO uses the word to pick out Luke, we can see how the word functions in its repertoire of sensation-behavior.

The fundamental issue is how C3PO and other robots acquire goals and actions appropriate to them. Kapitan mentioned Searle’s objection that rule-based manipulation of symbols does not endow meaning. To expand on that point a bit, the problem is that when computations run by a robot are entirely programmed, robot intentionality is derived rather than intrinsic. One need not accept Searle’s own view that only biological consciousness endows intrinsic intentionality to be compelled by his objection. Second-hand meanings are not genuine; there must be some way for the robot to acquire meaning for itself.

If the robot is nothing more than a programmed system modified by causal input, then its actions are no more meaningful than the behavior of my robotic vacuum cleaner. Robot action becomes meaningful when it interacts with the environment to determine which actions further its goals and which actions do not. In other words, causation alone is inadequate to endow meaning; the robot must have some way to assess how effectively an action responds to a stimulus in relation to its goals. Only with this sort of ongoing evaluation process could the First Generation robot have the behavioral flexibility exhibited by C3PO.

This may sound cognitively sophisticated, but it need not be. If the goal is to get the morphine to sick-bay, then C3PO must have some way to determine whether a drug is morphine or not and when it has arrived at sick-bay rather than the helm. In its response to mistakes – oops, a wrong turn landed me at the helm – C3PO comes to understand the world for itself.
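A minimal sketch of this kind of evaluate-and-correct loop might look like the following. The map, the item, and the location names are invented for illustration; the point is only that the robot acts, checks the outcome against its goal, and corrects itself when an action misfires.

```python
# Toy goal-evaluation loop: act, evaluate the result against the goal, correct.
GOAL = {"item": "morphine", "destination": "sick-bay"}

# An invented map: from each location, the turns that are available.
ROUTES = {"storage": ["helm", "sick-bay"], "helm": ["sick-bay"]}

def deliver(item, location="storage"):
    path = [location]
    while location != GOAL["destination"]:
        location = ROUTES[location][0]          # act: take the first available turn
        path.append(location)
        if location != GOAL["destination"]:     # evaluate: did that further the goal?
            print(f"Oops, a wrong turn landed me at the {location}.")
            # correct: pick a turn from here that does lead toward the goal
            location = next(r for r in ROUTES[location] if r == GOAL["destination"])
            path.append(location)
    return item == GOAL["item"], path

right_drug, path = deliver("morphine")
print("Right drug delivered:", right_drug, "| route taken:", path)
```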

Likewise, in its response to mistaken goals, a robot comes to understand itself and its own agency. Several commenters have noted the importance of agency to moral responsibility, and I agree. Robinson thinks of Third Generation robots as agents, yet says that they are blameless for what they are. The result is an apparent contradiction: we accept that they are blameless, and we blame them anyway. A resolution to this contradiction lies in the capacity of agents to assess their goals and change them upon reflection. If a goal consistently causes pain to oneself or others, a moral agent would revise that goal. (Presumably. There are, of course, many competing theories of morality, but most take the infliction of pain to be immoral.) Failure to revise the offending goal would be grounds for blame.

Robinson seems to think of goals as the result of passive causal processes and so the possessor of them cannot be blamed. Robots, he says, “were made in a factory, and the changes that have occurred inside them since their manufacture have depended on how they were constructed to begin with, and what has affected their detectors subsequently.” In addition to these factors, though, agents have the capacity to make changes inside themselves by evaluating their goals in relation to one another and their social and physical environment.

It’s not clear whether conscious sensation is necessary for agency, although I suspect Rosenthal is right that sensation of some sort is prior to thought, and thought is undoubtedly prior to agency. Moreover, there are reasons to be morally concerned for things that are not agents or even capable of consciousness. In my view, objects such as the earth ought to be candidates for moral concern as well. So the connection between consciousness and morality is more complicated than one might think.

What is clear to me is that agency is necessary to ascribing to a robot the same moral consideration we ascribe to other human beings. When we have good reason to think robots are capable of assessing their goals in light of the consequences of their actions, we will have good reason to take them to be moral agents.

By: Jacob Berger http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7740 Tue, 14 Jun 2011 13:55:05 +0000

I want to thank Professor Robinson for his thoughtful essay. My question has to do with the difference Robinson sees between the attribution of meaningful speech to a machine and the attribution of sensations to a machine.

I agree with Robinson that the meaningfulness of verbal behavior depends on whether that behavior fits within a complex network of causes and effects. And as I understand this view, we are therefore justified in attributing meaningful speech to a machine if it engages in an appropriate kind of complex behavior. If C3PO emits sounds that play the same complex roles as meaningful human speech, then C3PO produces and understands meaningful speech.

By contrast, Robinson thinks that no kind of complex behavior justifies us in attributing sensations to a machine. Robinson observes that a simple machine such as a thermostat that detects changes in the environment does not have sensations. And Robinson concludes that we ought to attribute sensations to a machine only if it exhibits an artificial neural architecture that produces the same kind of activity as human brains.

But even if a machine didn’t exhibit such an artificial neural architecture, it seems to me that we are justified in attributing sensations to a machine if it engages in a suitable sort of complex behavior. I think it’s natural to hold that sensations are the states of creatures (and perhaps of future machines) in virtue of which they make sensory discriminations amongst perceptible stimuli. My sensations of red are what enable me to discriminate red from the other colors and to react differentially to those colors. So I would be eager for Robinson to say a bit more about this point.

I agree with Robinson that a thermostat does not register environmental changes in virtue of sensations. I would argue, however, that we do not attribute sensations to a thermostat because such a simple machine detects only a small repertoire of environmental changes and does not differentially react to them in complex ways. Consider, for instance, a simple thermostat that announces temperature changes. Even though its sounds are emitted at appropriate times, this thermostat does not exhibit meaningful speech. Its sounds are not suitably complex and integrated into the thermostat’s behavioral economy.

Similarly, we conclude that the thermostat detects temperature changes without sensations because of the paucity of its detection behavior. We don’t draw this conclusion because the thermostat lacks an appropriate artificial neural architecture. I wouldn’t think that the fact that a machine lacks an appropriate artificial neural architecture is a reason to deny that it has sensations. If a machine’s discriminatory behaviors were suitably complex and integrated into its behavioral economy, we would, and I think should, attribute sensations to it.

One might, however, resist attributing sensations to a machine even if it detects a large range of environmental changes. One might think that the states of the machine that enable such discriminations are, or at least could be, wholly nonconscious. But many assume, as Robinson does, that sensations are always conscious. So, again, it seems that a machine’s detection behavior cannot be evidence that it has sensations.

I myself think there’s substantial evidence for sensations that aren’t conscious such as cases of subliminal perception, but I understand that this is too fundamental an issue to settle here. If there are such states, then we are justified in attributing sensations to machines with suitably complex discriminatory behaviors.

But even if sensations are always conscious, most agree that a feature of conscious states is that one is able to report that one is in them. Indeed, it’s a hallmark of common sense and of experimental psychology that if one is in a conscious sensation of red then one can say that one sees red. A conscious sensation is therefore suitably integrated into something’s behavioral economy only if the thing can meaningfully say that it’s in that state.

Suppose, then, that C3PO were able to detect via its sensors the same huge range of stimuli that we do and react to them as we do. And, moreover, suppose that C3PO were able to meaningfully report that it’s in the states that enable it to make those sensory discriminations. Imagine, for instance, C3PO not only discriminates red things from green things, but also says “I see the red things,” thereby reporting that it’s in the state in virtue of which it makes the discrimination. Even if C3PO lacked an artificial neural architecture similar to ours, I’m not sure I see an independent reason to deny that C3PO has a conscious sensation of red. I’m very interested to hear Robinson’s thoughts on this.

I’ll close with some questions about what Robinson calls First Generation robots, which he describes as machines endowed with meaningful speech but not sensations. First, I wonder whether Robinson thinks First Generation robots have no mental states at all, or whether he thinks they lack sensations but have other sorts of mental activity. It seems to me that if something can speak meaningfully then it has mental states—in particular, thoughts. Robinson does claim that whether verbal behavior is meaningful depends on whether this behavior fits within a complex network of causes and effects. But most would agree that meaningful speech acts are typically caused by thoughts. If I say it’s raining, it’s typically because I think it’s raining and my thought caused me to say it. If this is right, then First Generation robots are machines that have thoughts but no sensations.

If First Generation robots are possible, what is their moral status? In his commentary, Andrew Melnyk quotes Robinson as claiming that “[i]f we think [some robot] cannot feel anything, we won’t have a certain kind of qualm about exposing it to danger.” But, as Melnyk notes, it is not clear why this is so. If a First Generation robot has thoughts, why doesn’t that alone qualify it as a thing of moral value, which we shouldn’t expose to danger?

By: Susan Schneider http://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/comment-page-1/#comment-7730 Mon, 13 Jun 2011 15:29:48 +0000

The question on the table is “Could humanoid machines progress to the point where they are entitled to moral status?” I agree with the gist of Robinson’s answer: if a humanoid robot has sensations and feelings, as well as the ability to react appropriately and in novel ways to its environment, humans will — indeed, should — view it as deserving moral status. Two comments:

(1) I’d like to underscore the pressing nature of these issues. As I emphasized in my book, Science Fiction and Philosophy (Wiley-Blackwell, 2009), science fiction is increasingly converging with science fact. Scientists are already working on humanoid robots. Consider, for instance, the very lifelike androids developed in Hiroshi Ishiguro’s lab at Osaka University, which you can view at these sites:

http://spectrum.ieee.org/automaton/robotics/humanoids/042010-geminoid-f-more-video-and-photos-of-the-female-android

http://www.is.sys.es.osaka-u.ac.jp/index.en.html

I recall speaking at a 2005 workshop called “Android Science” that was designed to discuss the projects of Ishiguro and his associates. At the time, the main purpose for developing the androids was to take care of the elderly. The researchers’ focus was solely on making the androids physically similar to humans. I was surprised by this, for physical similarity is only one element of a viable eldercare project — the androids must also be intelligent. After all, assuming that one is friendly to the idea of having an android take care of a loved one to begin with, it is surely more desirable that it be a creature that can respond appropriately to a variety of novel situations and be emotionally in sync with patients.

Here, you may suspect that it will prove too difficult to make androids this smart. The AI projects of the 80s and 90s are often laughable, to be sure. But nowadays, AI is serious business. Consider IBM’s Watson program, a natural language processing system that integrates information from various topic areas to such a degree that it outperforms Jeopardy grand champions. Now, we can imagine a more advanced version of Watson that is in an android body. This creature could possess a huge stock of possible actions, sophisticated sensory abilities, and so on. The point here is that sophisticated humanoids strike me as technologically feasible — my guess is that it could happen within the next 20 years or so.

Further, I doubt the development of humanoid robots would be limited to eldercare. Who wouldn’t want a personal assistant to entertain the kids when one is busy, clean the house, and run mundane errands, after all? It is not far-fetched to suspect that eventually, many people will have humanoid personal assistants.

Still, as attractive as a personal assistant may sound, anyone who has viewed the films I, Robot or A.I. knows we would be on shaky ethical terrain. If humanoids are intelligent enough to take care of Grandma or the kids, isn’t it akin to slavery to have them serve us? Indeed, if Robinson is correct that there are situations in which we would accord humanoids moral status, why should we create them in the first place? If you ask me, society is not even managing to fulfill its ethical obligations to the members of our own species, let alone to nonhuman animals.

I imagine that those who stand to benefit financially or otherwise from intelligent humanoids may be less inclined to agree that they have moral status — here again, parallels with slavery come to mind.

In sum, these are serious ethical issues, and they will be pressing upon us relatively soon.

(2) Turning to a different matter, notice that this question asks about humanoid robots — robots that are morphologically similar to humans. But it is worth noting that the ethical issues are not limited to humanoids. What about robots that look nothing like us at all but which exhibit a similar range of emotion and sensation, etc? While we may at first be less inclined to view such a creature as a moral agent than we would an android, there is nothing morally special about looking human. It is what is going on beneath the surface that matters.

Of course Robinson knows this. His focus is on the nature of the robot’s internal operations, stressing the import of a system’s emotional and sensory similarity to us. In the spirit of this, we might think that insofar as a creature exhibits a range of emotional reactions similar to ours, and exhibits similar responses to sensory experiences, we should consider it a moral agent. While I agree this qualifies one for moral agency, we shouldn’t view such characteristics as necessary.

For what about more radical departures from the human — creatures that do not have our characteristic ways of thinking or feeling? Here, transhumanists like James Hughes, Ray Kurzweil and Nick Bostrom have insights to offer. Transhumanism is a philosophical, cultural, and political movement that holds that the human species is now in a comparatively early phase and that its very evolution will be altered by developing technologies, such as ultrasophisticated AI. Future creatures, such as “superintelligent” AI and even enhanced humans will be very unlike their present-day incarnation in both physical and mental respects, and will in fact resemble certain persons depicted in science fiction stories.

So consider a superintelligence — a creature with the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. Would it qualify as a moral agent according to the above requirement? Its sensory processing may be radically different from ours — for instance, it may be a web-based system, having sensors that span large spatiotemporal regions at a single moment. It may have a radically different sort of embodiment, being a virtual being which can be multiply located at a single time and which can appear as a nanobot swarm or borrow an android body.

We might look for behaviors that we would expect such a being to engage in across a variety of situations. But behaviors intelligible to a superintelligent being may not be intelligible to us. Just as your cat lacks the concept of a photon and, overall, misses out on most of the concepts that ground your mental life, so too we may lack the conceptual abilities to make sense of the behavior of superintelligent AI. For if they don’t think or feel like us, why will they act like us? The challenge here is that we have reason to believe they are more intelligent than we are, but they do not think anything like we do. On what grounds will we accord them moral agency?

I suggest here a rough and ready principle: if creatures having sensory experience are capable of solving problems that our species has as of yet been unable to solve, we have reason to believe that they are sophisticated forms of intelligence. And sophisticated intelligences having sensations deserve moral status.

This is not to suggest that it is safe to build them — for I have not addressed whether we can be confident that they would remain benevolent. But this issue — the issue of what moral principles we might encode in superintelligent AI — is a separate issue from whether they deserve moral status. I leave it for another day.
