Comments on: Qualitative Experience in Machines
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/
A project of the National Humanities Center

By: Gary Comstock (Sat, 07 Nov 2009 01:34:04 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-496

We’re sorry, but this intriguing conversation has ended at this venue, and this thread in the Forum has been closed. However, conversation may continue, and is continuing, in our Facebook group: http://www.facebook.com/group.php?gid=52472677549

Please join us there.

By: Bill Lycan (Fri, 06 Nov 2009 15:54:15 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-494

Thanks to all the commentators. I shall begin with the methodological points.

Fictional characters and fantastical hypotheses. Haikonen observes that Harry is a merely imaginary being, and argues that Harry no more has qualitative experiences (or any other mental states) than does Homer Simpson, for the simple reason that like Homer he does not so much as exist. True, of course, but as Kerley points out, Haikonen has misconceived the dialectic. My opponent, the biochauvinist, began the debate by proclaiming that no matter how similar a robot or other nonbiologic system might be to a human being in terms of information processing etc., the robot could not have qualitative experiences. That’s a conceptual impossibility claim. To refute it, all I need is a coherent hypothetical case in which a nonbiologic system would have qualitative experiences. Hence Harry, and I’ve argued, though defeasibly, that he would have them.

Scientists are understandably annoyed by fantastical examples (which, compounding the offense, philosophers sometimes pretentiously call “thought experiments”). All I claim for mine is that it refutes the biochauvinist conceptual impossibility thesis. Contra Endicott, I do not assume, because I don’t believe, that “science fiction reveals what is important about the nature of consciousness.” If no one had made any a priori impossibility pronouncement, there would be no need for science fiction. Incidentally, I would expect any scientist to agree at least provisionally with my goose-gander view; there may indeed be contingent reasons why a sentient being could not be built from nonorganic electronic parts, but any such reasons would have to be discovered empirically.

Epistemology. I’ll now concede to Robinson and Spencer that knowledge of material similarity would add to the case for sentience. After all, it would remove what is for some people a major barrier; there must be some reason why biochauvinism persists. (Interestingly, small children are not even faintly tempted by the notion that computers think and genuinely converse with them, and their objection is precisely that computers are not alive.) I do continue to maintain that our evidence regarding interplanetary visitors would be good enough, so I don’t agree with Spencer that absent material similarity, behavioral evidence would lose much of its force.

I’m not sure why Robinson thinks I haven’t built in the relevant causes of Harry’s alleged experiences. I certainly have supposed that there are parallel causes. That the parallel causes are not organic just iterates the original issue. But I agree with Robinson and others that my Henrietta example does not settle anything.

Embodiment/embedding. Here too there may be empirical reasons why no one could just build an android that would have qualitative experience (Endicott, Edenbaum). And as a die-hard teleofunctionalist, I entirely agree that a sentient Harry would have to have whatever it takes for his internal states to have the right functions. I do not agree that the latter requires specifically evolutionary selection; conscious selection by designers would do as well.

Dualism. Jackson argues that a mind-body dualist will have special qualms about the goose-gander view: A dualist who grants causal interaction between qualitative experiences and bodily states must suppose that there are psychophysical laws, but could not presume to say whether those laws would relate the experiences to states functionally characterized or more specifically to neurophysiological or at any rate biologic states. That is indeed so, but I’m not sure why the same would not apply to us materialists.

Machines vs. humans. Nickel speculates that I have misdrawn the distinction. It’s reasonable to consider Henrietta still human despite her now prosthetic brain. But I am entirely unconvinced by the example of the machine that does not look at all like you or me. If that machine is functionally isomorphic to us, it will interact with us just as Harry would, though perhaps we would not be quite as quick to award it sentience. Nor do I at all see why, even if Nickel is right about his own machine, my argumentative strategy based on Harry “does not get off the ground.”

“Behavior” and psychological meaning. Long is quite right that I assume a univocal conception of behavior, and he is also right to question that assumption. (As he notes, we have already discussed the matter in print.) He holds that real behavior “is psychologically expressive because it arises out of the biological needs, interests, and concerns that develop naturally in living creatures”; Harry’s “behavior” looks like real behavior but does not express genuinely psychological states. At this point Long and I are at a stalemate. As a functionalist, I believe that Harry’s internal states are genuinely psychological, and I am unconvinced by arguments that biological life is required; but neither do I have any compelling argument against Long’s contrary view.

By: Matthew Haentschke (Fri, 06 Nov 2009 02:17:06 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-492

Haikonen, in his comparison of Harry and Homer Simpson, seems to ignore the Frame Problem, which Harry is stipulated to have solved.

The Frame Problem, as I understand it, is the issue of dynamically determining which things are not affected by an action without explicitly specifying what those things are – or, put another way, the problem of deriving from a finite program the effectively infinite set of non-effects that accompany every action. When Lycan claims that his Harry has solved the Frame Problem, he has described a machine that, with a static program, can handle an infinite number of logical actions and their non-effects. We humans have solved this problem too: with our fixed stock of neurons and brain matter, we are able to generate the set of unaffected conditions for any action.
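
To make the "finite program, infinite non-effects" idea concrete, here is a minimal sketch in the STRIPS style – my own illustration under assumed names, not anything from Lycan or Haikonen: each action lists only the facts it changes, and a single persistence default supplies every non-effect at once, so the infinite list of frame axioms never has to be written down.

    # Minimal STRIPS-style sketch of the persistence default (hypothetical example).
    def apply_action(state, action):
        """Return the successor state: drop deleted facts, add new ones.
        Every fact the action does not mention carries over unchanged."""
        return (state - action["delete"]) | action["add"]

    # The world state is a finite set of facts.
    state = {"robot_in_kitchen", "door_closed", "light_on"}

    # The action mentions only its effects; it says nothing about the light.
    open_door = {"add": {"door_open"}, "delete": {"door_closed"}}

    print(apply_action(state, open_door))
    # e.g. {'robot_in_kitchen', 'light_on', 'door_open'} (set order may vary).
    # 'light_on' persisted with no explicit "opening the door does not affect
    # the light" axiom: the one default rule stands in for the otherwise
    # infinite enumeration of non-effects.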

In Haikonen’s example there is an error in comparing Homer and Harry. Homer Simpson, a cartoon character, does not have the ability to solve the Frame Problem. Every action he takes has been predetermined by the cartoonist at work, as have the non-results of his actions (the background scene does not change as he walks through it). Each “frame” (in the animation sense) of his existence has been crafted by an external being, who has generated a finite series of non-results for Homer out of the creator’s own infinite series. Homer Simpson’s behavior is a subset of the actions of his creator.

Conversely, Harry is created through one additional step: from a creator’s infinite series of non-results comes a finite “organism” that can generate an infinite series of non-events for itself. Harry is therefore at least the equal of his creator in this respect.

In comparing Harry and Homer Simpson, Haikonen makes a weak claim that trades on the illusion that Homer Simpson is an actual being – which is exactly what the cartoonist wants us to believe!

By: Daniel Farrell (Fri, 06 Nov 2009 00:34:52 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-490

Haikonen’s rejection of Lycan’s reasoning rests entirely on his Homer Simpson claim, the only argument in his response, and that claim has no relevance to Lycan’s actual argument. Haikonen says:

“Homer Simpson does not have any mental states because he does not really exist. He is just a cartoon character and a figment of imagination. But alas, so is also Harry. Why then, would Harry have any qualitative experiences? He does [n]ot, because he does not really exist.”

This is obviously false. Harry could, and in this thought experiment does, exist in every sense of the word. You can reach out and touch Harry, you can talk to Harry, Harry has gravitational interactions with the rest of the universe, Harry moves through space and time. How could Harry possibly not exist? How could he be a figment of our imagination? Lycan’s Goose and Gander argument should be applied here: if Haikonen is going to make these assumptions about Harry, he is obligated to find a reason not to make them about humans. By his own words, Haikonen’s “reasoning is faulty because the extraction of real world facts from arbitrary figments of imagination does not really work.”

By: Cara Spencer (Wed, 04 Nov 2009 22:00:02 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-486

Here’s how I would put Lycan’s goose-gander thesis: there is no evidence that other people have qualitative experience that wouldn’t equally be evidence that certain possible machines also have it.

So what’s our reason for ascribing qualitative experience to others? Lycan says it’s their behavior. And he’s clearly right that a machine could exhibit humanlike behavior. I think he goes wrong at step one, when he says that behavior is our only source of evidence for qualitative experience in others.

Here is another source of evidence: I know that I have qualitative experience, and I can see that others are like me. So I can conclude that others have qualitative experience, too. When I look at a machine, I see that it isn’t like me. So I don’t have reason, in this case, to ascribe experience to it. And I’d argue that behavioral evidence about other people’s minds normally depends on this kind of evidence. If I don’t recognize that others are like me, then behavioral evidence isn’t probative in the normal way.

That’s not to say that machines don’t have any mental states, or that we could never have evidence that they did. There could be other types of evidence for the mental lives of machines. That’s also not to say that behavioral evidence would be irrelevant without the recognition of similarity. If we were really faced with an alien, perhaps we could conclude that it was conscious on the basis of its behavior. But we would need more behavioral evidence than we would need from a fellow human to draw that conclusion. My only point here is that we have some evidence for qualitative experience in others that doesn’t carry over to the case of machines.

Lycan says that behavioral reasoning doesn’t depend on the assumption of similarity, because if it did, children and naifs could never have evidence about other people’s beliefs. I would like to hear more about why he says so. It seems to me that children or naifs could easily recognize that another person is like them. You don’t have to know anything about the brain or how it works to make that basic recognition.

By: Ronald Endicott (Tue, 03 Nov 2009 21:48:18 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-483

William Lycan supports a “goose-gander” thesis according to which there is no good objection to machine consciousness (in the “qualitative experience” sense) that is not equally a good objection to human consciousness (in the same sense). But he does so by using thought experiments involving futuristic science fiction – an artificial information-processing system named “Harry” that is functionally isomorphic to a human being in all its cognitive processing, including the same rich behavioral repertoire. So, in my view, the real debate will arise over the subtext – things that Lycan does not explicitly defend but assumes for his argument.

In particular, Lycan must think that the science fiction reveals what is important about the nature of consciousness, given high-level abstract information-processing theories that would deem Harry to be conscious. But, as Lycan knows, that once-traditional view of mind science is now contested. Multi-level theories of “embodied and embedded cognition” are all the rage (the term was coined by Andy Clark), meaning theories that judge the details of bioengineering to be relevant to mental phenomena, as well as how that bioengineering arose, evolutionarily speaking. As a consequence, others might not think that Lycan’s science fiction reveals what is important about the nature of consciousness. For example, artificial Harry might lack the appropriate engineering.

According to this perspective, one could grant that if the artificial system Harry were functionally isomorphic to a human being in all its cognitive processing, then it would have qualitative conscious experience. But perhaps the antecedent is impossible – not unimaginable (on some accounts), or conceptually impossible (on some accounts), but at least physically impossible, since it might well be a matter of law that only systems with the right neurophysical embodiment will have the appropriate information-processing capacity to sustain conscious experience. Mere hypothetical cases can’t decide this issue, only evidence gleaned from the appropriate sciences. So the perhaps unexciting conclusion is that we must await the development of a cognitive science that discovers the laws governing conscious experience.

Or, finally, to turn from embodied to embedded features, artificial Harry might lack the appropriate evolutionary history that determines the mental functions. Indeed, Lycan once held a version of teleological functionalism whereby the mind’s subsystems are functionally characterized in evolutionary terms (Lycan, Consciousness, 1987, chap. 4). If Harry doesn’t have the right evolutionary history, one might argue, he doesn’t have the right mental functions either, including the functions of qualitative conscious experience. I find that these questions have no easy answers, and so I am not as confident as Lycan about machine consciousness.

By: seth edenbaum (Tue, 03 Nov 2009 17:59:33 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-482

I’m writing, obviously, as an amateur, and this will be my last unsolicited comment.

What exactly are qualitative states? The definition of qualia at the Stanford Encyclopedia of Philosophy begs the question. Perception is physical: experience, sandpaper, etc. When animals sense, we categorize things against the history of our perceptions (patterning as comfort). Our history is foggy, and facts and values are confused from the start. The machines we make do not have this complex, conflicted relation to the world; they’re not desirous or anxious. They have no sense of telos, not even a blind drive for survival.

It seems easier to want to ascribe qualitative states to man-made machines than to describe the mechanics of qualitative “experience” and “perception.” To a machine, the blueprint for a building and the building itself are identical, while animals require the presence of the building to understand the thing. And, as with the color red, in doing that we are understanding not the building or the color itself but our categorization of it, and all the details that we analogize in relation to what we’ve already stored away. We’re bombarded by perceptions and by evocations resulting from perceptions. But all of that can be described in quantitative terms. What’s private – as experience – is that each of us contextualizes the data according to our own history. Every animal has his or her own filing system and his or her own adaptive conditioning. Animals are drunken machines, each of us drunk in our own way.

The limit of conceptualism, it seems to me, lies in the unwillingness to mark the distinction between blueprints and buildings, between ideas and experience: because ideas are universally available while one’s experience of a building is private, experience is treated as secondary. But what this means is that the ability to communicate always-private experience atrophies, even though experience is still our primary relation to the world. The conversation above seems more about desire than about the world we will only ever know as experience, while shying away from real questions regarding our biological machinery.

By: Tria Metzler (Tue, 03 Nov 2009 15:15:35 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-481

In ‘the case of Henrietta,’ Lycan describes a puzzle that has stumped me for some time: at what point does organ transplant result in a change of identity? Some may think, for example, that once one’s heart is replaced, one is no longer human. But could this be right? On this principle, those who suffer heart failure and receive a transplant would no longer be themselves. That seems absurd. Would they instead be the person from whom they received the heart? I doubt many would argue for that view. It then seems to follow that hypothetically replacing one’s heart with an artificially created (but in all other ways equivalent) transplant should also not affect one’s personhood or identity, or his or her ability to have qualitative experiences. The heart is, after all, simply a muscular structure responsible for circulating blood throughout the body; it is imperative to survival, but it cannot truly define who we are as individuals.
Perhaps others may argue that once the brain is replaced with an artificial component, the individual is no longer himself or herself, or even human at all. This claim, I admit, is more difficult to defeat, as the brain is a much more complex organ and still not fully understood. Lycan, however, approaches this situation with the succinct explanation that although the entire central nervous system may be replaced, Henrietta herself has not changed; her desires, personality, and intelligence remain the same. In addition to this argument, I would like to contribute another thought: suppose Henrietta has been through the ordeal of having her central nervous system replaced with artificially created substitutes. Now suppose she realizes this may affect the opinions of others with respect to her personhood or ability to have qualitative experiences, and she is deeply distraught by this idea. Her greatest desire is simply still to be thought of as capable of qualitative experiences. I wonder whether this desire does not in itself contribute much to her capability of having qualitative experiences. If she is capable of having these desires or aspirations, then there must be some sort of mental capacity on Henrietta’s part; she must be able to assess a situation and realize that she would be happier if it were slightly altered.
If a humanoid machine were ever so advanced that it developed the desire to be treated as a human; that it desired to be deemed capable of qualitative experiences; and, maybe most importantly, that it desired to be something other than what it is (an artificially created being), would this by definition mean the machine is more than a machine? Simply put: does having the desire to be capable of qualitative experiences mean you have qualitative experiences? I believe it might.
Some individuals, Haikonen for example, believe this hypothetical situation is irrelevant to the question at hand. He argues that because ‘Harry’ is a hypothetical example, the argument loses all relevance, simply because the situation does NOT exist. I, however, am intrigued by Lycan’s example. Though the situation does not exist today, Lycan’s point is that this predicament has the potential to exist in the future. For that matter, an android Homer Simpson could be created in the future, and we could discuss artificial Homer’s humanity as well. The focus should not be on whether the situations presented within the argument actually exist; it should remain on the question: if they did, what would that mean? Who am I to guess what humankind will or will not be capable of decades from now? If every day scientific discovery gets us closer to the ability to create prosthetics from human tissue or stem cells, who am I to assume we could not one day create an entire being? Believing that creating an entire being may one day be possible, I think it imperative to begin discussing the implications of that ability now.

By: Bernhard Nickel (Tue, 03 Nov 2009 04:53:39 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-478

Much of the debate turns on what exactly we count as machines. Lycan’s suggestion:

Let us mean a kind of artifact, an information-processing device manufactured by people in a laboratory or workshop, out of physical materials that have been obtained without mystery or magic.

Lycan then points us to the possibility of creating what we might call artificial humans, paradigmatically Harry(2). Now the argument: given that artificial humans are definitely machines (by the criterion just mentioned), and given that we want to credit them with qualitative experience, we should accept that there’s no in-principle obstacle to the existence of machines with qualitative experience.

I think that what Lycan really shows is that we may have to rethink where we draw the line between humans and machines—specifically, we may well have to consider creatures like Harry(2) human. That is certainly a reasonable moral to draw given the thought experiment centering on Henrietta. But for all that, Lycan hasn’t really touched the question of whether machines, as contrasted with (artificial or “natural”) humans, can have qualitative states.

To make good on my second point, consider a machine that exhibits all manner of intelligent behavior, but that doesn’t look like you or me. Not even a little bit. Let it consist of many different parts, distributed in various rooms, connected by radio. Each part has information storage and retrieval capabilities, and each has a screen and keyboard with which it communicates. It has many different specialized appendages, and some of its parts can move around on tracks. Crucially, it can intelligently respond to commands, engage with its environment, and learn.

I do not find myself inclined to credit this machine with qualitative states in addition to intelligence, even if I admit that its intelligence far exceeds my own. That means that Lycan’s argumentative strategy does not get off the ground—if I’m not inclined to attribute qualitative states to the machine, the question of defeaters is moot.

By: Rachel Geiger (Tue, 03 Nov 2009 01:21:00 +0000)
http://nationalhumanitiescenter.org/on-the-human/2009/10/qualitative-experience-in-machines/comment-page-1/#comment-477

Haikonen presents an interesting objection to Lycan’s claim that machines can have qualitative experiences. While I agree that very few individuals would seriously attribute qualia to fictional characters such as Homer Simpson, I believe that we should not dismiss the idea that some fictional beings may possess qualia.

Alternative personalities created by individuals with Dissociative Identity Disorder (DID) behave in human-like ways. These personalities are creations of the human mind and are therefore, in a sense, fictional. Still, would we not attribute qualia to them? They possess organic bodies, experience a wide range of emotions, and display logic and reasoning. While being expressed, alternative personalities seem to be self-aware and can possess names different from that of the dominant personality. They also interact with their environment in all the ways a typical person can. Moreover, someone with DID could meet another individual for the first time while expressing an alternative personality without the other person ever noticing anything awry. Most importantly, alternative personalities have a real-world presence.

Robots as advanced as Harry may not be technologically feasible now, but I do not believe we can dismiss the possibility of a machine eventually being able to have qualitative experiences just yet.
