Qualitative Experience in Machines

Abstracted from ‘Qualitative experience in machines,’ The Digital Phoenix: How computers are changing philosophy.

1. Many people, perhaps most people, have the idea that, however problematic qualitative experience is for the case of human beings, it is a lot more so for that of machines constructed by human beings.  Few philosophers doubt that human beings’ experiences have qualitative characters, but many doubt or disbelieve outright that robots and computers (much less backhoes and can openers) could ever have qualitative experiences at all.  Often the latter denial is just evinced, as an “intuition,” though occasionally it has been argued.  There are even some philosophers who think that the big problems have been pretty well solved for human beings or can be solved without much further effort, but who also think that machines simply could not be conscious, have qualitative or subjective experiences, etc.; that is the most extreme version of the idea I am considering.

My purpose in this paper is to defend the goose-gander thesis that the disparity here is specious: There is no problem for or objection to qualitative experience in machines that is not equally a quandary for such experience in humans.  It is, I contend, mere human chauvinism or at best fallacy to suppose otherwise.

Just for the record, here are the leading problems regarding the phenomenal character of human experience: Leibniz’-Law objections; the immediacy of our access to qualia; essentialistic and other Kripkean (alleged) modal features of qualia; “zombie”- and “absent-qualia”-type puzzle cases; first-person/third-person asymmetries of several kinds and the perspectivalness of the mental; putative funny facts as claimed by Thomas Nagel and Frank Jackson; qualia in the strict sense, the introspectible monadic properties of apparent phenomenal individuals; the grainlessness or homogeneity of qualitative features, emphasized by Sellars; and Joseph Levine’s now celebrated “explanatory gap.”  That is an impressive array of difficulties for the materialist (1).  It is so impressive, in fact, that it immediately lends support to my goose-gander claim.  For if there is a problem about qualitative experience in machines that is not equally an objection to a materialist view of people, that problem must be additional even to the many and wide-ranging ones I have listed.  It must also be grounded in some substantive difference between machines and human beings.

2. For present purposes, then, we must mean by “machine” something that contrasts interestingly with “human being.”  (In one sense, uncontroversially, human beings are machines.)  Let us mean a kind of artifact, an information-processing device manufactured by people in a laboratory or workshop, out of physical materials that have been obtained without mystery or magic.  A paradigm case would be a robot driven by a present-day supercomputer.  But I want to allow technologically imaginable extensions of that paradigm; a machine need not have von Neumann architecture, or even digital architecture (whatever that means) at all.  And let us idealize a bit:  I shall assume that problems of information storage and retrieval, such as the notorious frame problem, are solved.  (A fairly outrageous assumption, true.  The reason I get to make it is that my chauvinist opponents do not think that their objection could be overcome even if the frame problem and its ilk could be; they think their obstacle arises no matter how good our machine might be at mere information storage and retrieval.)

What, then, are the most obvious differences between machines in the foregoing sense and natural-born human beings, that might support the chauvinist position?  Let us begin by abstracting away from the most obvious deficiency of actual, 1990s machines: that no such thing has a humanoid behavioral repertoire or anything remotely approaching it, because no present-day machine is anywhere nearly as complex as a human being or gifted with a biologic brain’s almost unthinkably vast information-processing capacity.  Here again, my opponents deny that more information processing (per se) would help; no further amount of the same, no matter how large, would convert a mere machine into a sentient creature capable of subjective, qualitative experience.

So let us help ourselves to some futuristic, science-fiction technology, and suppose that such resources have afforded us an expert human simulator.  Elsewhere I have introduced a character called Harry(2), who through amazing miniaturization and cosmetic art is an entirely lifelike android.  He is also a triumphant success of AI: his range of behavior-in-circumstances is equal to that of a fully acculturated and rather talented late 20th-century American adult.  No one would ever guess that he is not an ordinary person.  (Let us further suppose that his internal functional organization is very like ours; his total pattern of information flow is parallel to ours, even though it runs on considerably different hardware.)

But our question is: in the relevant sense, is Harry a person at all?  He is, remember, only a computer with limbs; his humanoid looks are only a brilliant makeup job.  Some philosophers will readily grant that he has beliefs or belief-like states; after all, he stores and deploys information about his environment and about the rest of the world.  But desires are a bit harder; hopes, embarrassments and other conative attitudes still harder.  Yet even those who would award Harry a full range of propositional attitudes might still balk at qualitative experience.  Even if in some sense he thinks, he does not feel in the most immediate sense in which we do–so says the chauvinist.

3. Before we go on to look at some further differences between Harry and the rest of us, let us note that there is a heavy presumption in favor of my egalitarian goose-gander claim (3).  First, how do we now tell that any familiar humanoid being is conscious?  Normatively pursued, this is just the Problem of Other Minds.  But we need not take a stand on the best solution to that problem in order to note its origin.  The problem begins with the fact that the ordinary person’s evidence for ascribing mental states, including qualitative states, to another human being is the latter’s behavior, broadly construed, in the circumstances, broadly construed.  How we justify the epistemic move from that behavior to the mental ascription is a topic of notorious controversy, but unless we succumb to global skepticism about other minds, we do not doubt that the mental ascription is justified by our observation of the behavior.  (Of course the justification is defeasible.)

Few readers will have failed to foresee my next move:  By hypothesis, Harry is a flawless human simulator and behaves, in any circumstance, just as a human being might.  So, over time, he provides his viewers with just the same sorts of behavioral evidence for mental ascriptions that you or I do–including ascriptions of qualitative experience.  So far as we have evidence for ascribing qualitative phenomenal states to each other, we have just as strong prima facie evidence for ascribing them to Harry.  And common sense, at least, counts that evidence as very strong, so strong that we rarely even entertain potential defeaters.

Notice further that in the case of human beings, such behavioral evidence does not require assumptions about the subject’s innards (4).  We mature and educated people do know that other human beings are biologic organisms and we presume that the others’ biology is like our own, but the standard tacit behavioral reasoning does not depend on that presumption.  1) A child or naïf who did not know those things would be just as well justified in her/his mental ascriptions, or at least very nearly as well justified, as we.  And 2) if we were watching a videotape of humanoid creatures which might be from another planet and might have a biology quite different from ours, then if those beings behaved just like humans, we would still be justified in imputing human mental states to them–indeed, I submit we would not even think about it, unless our philosophical guard were up.

The foregoing points, especially subargument 2), might be thought to beg the question against the chauvinist.  But they do not.  I have granted (and would insist) that the justification conferred by the behavioral evidence is in every case defeasible.  That leaves open the possibility that for machines, or even for aliens, the class of potential defeaters is wider than that which attends mental ascription to human beings, and I have not assumed otherwise.  My present point is only that powerful defeat is required in Harry’s case; the chauvinist is already one–a big one–down.

Here is my second argument for the same conclusion.  (Science fiction again:)  Suppose that Henrietta, a normal human being, requires neurosurgery; indeed her entire CNS is under attack by a virus that will gradually destroy it.  The surgeons start replacing it (and if you like, much of the rest of Henrietta) with prostheses.  First a few neurons are replaced by tiny electronic devices.  These micromachines so successfully duplicate the functions of the neurons they replace that Henrietta’s performance is entirely unimpaired.  Then a few more neurons are removed and substituted for; complete success again.

And so on until there is no wetware left–eventually, Henrietta’s behavior is controlled entirely by (micro)machinery, yet her intelligence, personality, poetic abilities, etc., and most importantly her perceptual acuity, sensory judgments and phenomenological reports remain just as always.  Now, a chauvinist must maintain that at some point during the sequence of operations, Henrietta ceased to have qualitative experiences; she has become cold and dead inside and is now no more sentient than a pocket calculator.  One can imagine a particularly boorish chauvinist asserting this to her face.  She would protest, of course, and tell him that her inner life is as rich and vivid as ever, describing it as lyrically as time and his rudeness allow.  It is hard to imagine how the boor, or any other chauvinist, would be able to draw a line and state with assurance that after the nth operation, Henrietta ceased to have a phenomenology (whatever she may think to the contrary).  It is a hard position to defend.

Here again, I do not want to beg the question against the chauvinist–or to commit a slippery slope fallacy, either.  For there may be a defeater that cuts in at some point and does override the behavioral evidence; and the “point” may be a vague one to boot (5).  As before, I am not asserting that no such defeater exists, but only emphasizing that the chauvinist bears the burden of coming up with one and that it is a considerably heavier burden than one might think.

4. What, then, are the defeaters specific to machinekind?  I can think of three possibilities.  First:  There is Harry’s origin.  He is an artifact; he was not of woman born, but was cobbled together on a workbench by a group of human beings for purposes of their own.  Perhaps a workshop is not a proper mother (imagine Dame Edith Evans enunciating, “A workshop?”).

I do not think any sound chauvinist argument can be based on that difference.  For suppose we were to synthesize billions of human cells and stick them together, making a biologic humanoid organism.  (We could either make a mature adult straightway or, what is technologically easier, make a fetus and nurture it.)  We might further suppose that the resulting pseudo-human–let us call him Hubert–is a molecular duplicate of a preëxisting human being.  There is little doubt that such a creature would have qualitative experience; at least, if he did not, that would probably not be simply because of his early history (6).  So artifactuality per se seems not to count against having phenomenal states.  Our first difference is no defeater (7).

Second: It may be said that Harry is not a living organism.  (Paul Ziff made such an appeal in his well-known article, “The Feelings of Robots” [8].)  If something is not an organism at all, properly speaking, then there does seem to be something odd about ascribing sensations and feelings to it.

Much depends on what is considered criterial for “living organism.”  We have already failed to find reason to think that artifactuality per se precludes qualitative experience.  Parallel reasoning would show that artifactuality per se does not preclude something’s being a living organism either, for surely our synthesized pseudo-human would count as a living organism.  Putting artifactuality aside, then, what constitutes living?  Automotion?  Autonomous growth and regulation of functions?  Reproduction or self-replication?  Metabolism?  Being made of protein?

Whatever.  Some of these things–the first three, at least–could be done by a machine, in which case the machine would be “alive” in the relevant sense and the objection’s minor premise goes false.  Others, very likely the last two, could not be done by machines; but in that case we should ask pointedly why they should be thought germane to consciousness, qualia and the rest.  E.g., why should a thing’s metabolizing or not bear on its psychological faculties in so basic a way as to decide the possibility of qualitative experience?  It is hard to see what the one has to do with the other, or to imagine a plausible argument leading from “no metabolism” to “no qualitative experience.”  And likewise for being made of protein.

Also, remember Henrietta.  She started out as a normal human being but was gradually turned into a machine.  Did she go from being a living organism to being non-living, inanimate?  In that case–if she had been alive and then ceased to live–she died, and obsequies are in order.  It would be both hard and easy to make her funeral arrangements:  Hard, because we would first have to persuade her that she was dead and that services should be held at all; she might resist that suggestion fairly indignantly, especially when we got around to the question of burial vs. cremation.  But then easier, because we would not have to guess posthumously at her wishes–we could just ask her what hymns she wanted, whether there should be a eulogy or a general sermon, and so forth, right up till the last minute.  I must say I think I would enjoy attending that funeral; I am not so sure that Henrietta would, herself.

Notes

1 As is perhaps surprising and certainly far from well enough known, every one of those problems is resolved in my books Consciousness (Cambridge, MA: Bradford Books / MIT Press, 1987) and Consciousness and Experience (Cambridge, MA: Bradford Books / MIT Press, 1996).

Incidentally, in this paper I shall concentrate only on “feels” in the sense of qualia.  But for explicit defense of the thesis that machines can have feelings in the sense of emotions, see (e.g.), A. Sloman and M. Croucher, “Why Robots will Have Emotions,” in the Proceedings of the 7th International Joint Conference on Artificial Intelligence (Vancouver, B.C., 1981), and N.H. Frijda, “Emotions in Robots,” in H.L. Roitblat and J.-A. Meyer (eds.), Comparative Approaches to Cognitive Science (Cambridge, MA: Bradford Books / MIT Press, 1995).

2 “Abortion and the Civil Rights of Machines,” in N. Potter and M. Timmons (eds.), Morality and Universality (Dordrecht: D. Reidel, 1985), pp. 141ff.; and the Appendix to Consciousness, loc. cit.

3 The following two arguments are reprised from the Appendix to Consciousness, loc. cit., pp. 125-26.  (Hereafter I am going to spare myself typing “loc. cit.” in references to my own works.)

4 This claim is contested by Christopher Hill, in Ch. 9 of Sensations (Cambridge University Press, 1991), and by Andrew Melnyk in “Inference to the Best Explanation and Other Minds,” Australasian Journal of Philosophy 72 (1994): 482-91.  What follows in this paragraph is in part a reply to their objections.

5 Ch. 2 of my Consciousness and Experience defends the claim that the notion of conscious awareness is vague and comes in degrees of richness or fullness.

6 My suggestion about molecular twinning is not meant to suggest that qualia–phenomenal properties–are “narrow” in the sense of supervening upon molecular constitution.  In Ch. 6 of Consciousness and Experience I argue that they are “wide” and do not.  But there is no reason to think that the external factors needed to determine qualitative character include the circumstances of one’s coming into existence.

7 In fact, I think that discrimination against Harry on the basis of his birthplace and/or his genesis would be almost literally a case of racism.

8 Analysis 19 (1959): 64-68.  In reply, see also J.J.C. Smart, “Professor Ziff on Robots,” Analysis 19 (1959): 117-18, and Hilary Putnam, “Robots: Machines or Artificially Created Life?,” Journal of Philosophy 61 (1964): 668-91.  Interestingly, I have found that young children uniformly resist the anthropomorphizing of computers on the grounds that computers are not alive.

Comments on Qualitative Experience in Machines

  • 1. It seems to me that there is a curious asymmetry in the overall argument. I’d be inclined to assert that consciousness requires an organic substrate. But, alas, I can’t specify exactly why a living substrate is necessary. So, you reject that line of argumentation. Fine.

    On the other hand, while you admit that there’s a lot we don’t know about building Harry or Henrietta, you ask that we put such ignorance aside and simply assume that we can build them. Now that they’ve been assumed into existence and are, by definition, behaviorally indistinguishable from real humans, you provide various arguments that Harry and Henrietta are conscious. OK.

    But, why should we grant you the possibility of building Harry and Henrietta? Why isn’t your inability to explain how to build them as damaging to your case as my inability to explain why living matter is necessary for consciousness?

    2. In a different direction, it is clear that there is an obvious physical difference between, say, the computational simulation of an atomic explosion and a real atomic explosion. No matter how fine the granularity of our simulation, no matter how much computing power we devote to it, that simulation is not going to result in the creation of a huge hole in the ground, etc. Similarly, the simulation of a huge thunderstorm is not going to make the ground wet. Doesn’t a similar difference obtain between the simulation of a mind-brain and a real mind-brain?

    There is a standard assertion that information is information regardless of substrate. If both minds and computers are just information processors, then I suppose there’s no reason to believe that qualia available, in principle, to one are not available, in principle, to the other. But just what is this information of which we speak? I don’t see that the commonsense notion is of much use here. So we need a technical account. The Shannon-Weaver explication is the most familiar technical account and is in wide use, certainly in computing, but elsewhere too. But if that’s the view we adopt, then we’re a long way from explaining human consciousness in terms of Shannon-Weaver information. We can assert that the brain is just an information processor, but there’s quite a bit of hand-waving and tap-dancing in that assertion.

    3. Finally, and doing a 180-degree turn, let’s grant that consciousness is possible, in principle, in a non-organic substrate. Must that substrate be organized to simulate the human mind in order to be conscious, or could that substrate be conscious on a different plan? Is the entire human plan necessary to consciousness, or only some element of it? If only some element, can that element be effectively embedded in different kinds of overall plan?

    Or: is this third case just meaningless gibberish?

  • Douglas C. Long

    Lycan’s “goose-gander” thesis rests on the assumption that simulations of a computer-controlled, nonliving autonomous robot could carry the same psychological meaning as the behavior of an animal or living human being. But designing an android to mimic the behavior characteristic of anger, joy, pain, or pleasure even in appropriate contexts does not thereby provide the underlying psychological motivations that we find in human beings. Animate behavior within a subject’s control is psychologically expressive because it arises out of the biological needs, interests, and concerns that develop naturally in living creatures. Awareness of this background allows us to understand the beliefs, desires, and purposes that motivate their behavior.

    Organisms naturally develop concerns and purposes of their own that arise from their very nature as forms of life. Food, water, and mates are attractions. Danger, injury, and hunger are threats. This background of natural needs and interests provides the underpinning for explaining the self-initiated and self-controlled behavior of living agents in psychological terms. In contrast, “autonomous” robot behavior is driven by mechanisms ultimately designed and constructed by living beings, and it is those living beings that have genuine intelligence, intentions, purposes, and desires. Reproducing human behavior by means of an electro-mechanical device is not the same as reproducing human psychology. This fact undercuts the legitimacy of attributing thoughts, feelings, and intentions to robots.

    Lycan stipulates that Harry is a flawless human simulator whose behavior “provides his viewers with just the same sorts of behavioral evidence for mental ascriptions that you or I do….” But whether Harry’s behavior is “the same” as ours is the very point at issue. We could make Harry act like a human in pain when burned and jabbed without succeeding in making this an expression of the robot’s pain. It might respond appropriately and rationally to its environment and social setting, producing useful and informative conversations, without its being moved by feelings and purposes of its own. A machine may search Mars for rocks that are of interest to geologists but not for its own interests. Resourceful as they are, we cannot expect engineers to build a robot in which the motivations for its performance arise from the needs and interests of the machine itself as opposed to those of its makers. [I develop this argument more fully in “Why Machines Can Neither Think Nor Feel,” in Language, Mind, and Art: Essays in Appreciation and Analysis, in Honor of Paul Ziff, edited by Dale Jamieson (Dordrecht, Holland: Kluwer Academic Publishers, 1994), 101-19.]

    Lycan’s second example, Henrietta, presents a more complex challenge. Miniature prostheses might be attached to the human brain in such a way as to take over some of the functions of natural nerve cells. But difficult questions arise as this process advances to the point where so much cortical tissue has been replaced that she counts as being “brain dead.” It is difficult to speculate confidently in advance about our attitudes toward such a medically unlikely hybrid. If the surgery replaced only her central nervous system, which would then control her living organs and limbs, she perhaps could still count as a living organism. Her human needs, interests, and concerns that support the literal ascription of mental states might be carried over from her original status as a fully biological creature.

    However, if organ replacement continued to the point where the flesh-and-blood Henrietta was replaced by a functional replica constructed of inorganic materials, the reasons we have offered for thinking that such behavior is not expressive of feelings would apply. In the later stages of the replacement process the robot might persist in voicing outdated information about Henrietta’s human motives and feelings, but they would become explanatorily irrelevant. The resulting machine might well be considered “cold and dead inside.”

  • Anglo-American philosophy in the 21st century is in something of an odd position. At one time philosophy covered all of human knowledge, including empirically-based knowledge. With Bertrand Russell and positivism the Anglo-American (and some other) traditions decided for some time that they were competent only to comment upon the language of science. Today philosophy seems to be swinging back again, but in quite a scholastic fashion; i.e., without much reference to the vast body of empirical evidence about topics like consciousness, including aspects of the problem that might have been familiar to Aristotle, Newton, and Descartes — all of whom made important empirical observations. As a result, we are entertaining arguments about putative paradoxes, most obviously, of course, the ancient mind-body debate.

    For a scientist this is somewhat uncomfortable. Our main example of consciousness is after all the human kind, with very important lines of evidence extending to and from other primates and mammals, and in recent years, with the genomic revolution, even C. elegans, fruit flies, and brain slice cultures. Those are more than remotely relevant. They tell us about the fundamental nature of neurons, which are after all the substrate of both conscious AND unconscious brain events.

    Some conventional philosophical debates strike me as prima facie absurd. I could obviously be wrong, but the fact that we cannot observe Mary’s consciousness directly is of no great concern to any scientist I know, with the exception of those who study comatose states and general anesthesia, where we have learned to our shock that people we thought were fully unconscious are in fact intermittently conscious an appalling percentage of the time. The misdiagnosis of brain death based on crude scalp EEG has been shocking. (See Steven Laureys and Niko Schiff’s various publications). An entirely new diagnostic category has been proposed and I trust it is being adopted, called “Minimally Conscious State” (MCS), because it turns out that patients diagnosed as irreversibly comatose were in fact conscious some of the time. The realization did not dawn from philosophical hair-splitting, but from simple, careful, videotaped studies of patients around the clock, and from new brain imaging methods like fMRI, high-density EEG, and intracranial recordings in humans and animals. No serious person I know has challenged the evidence.

    But notice that nothing has changed philosophically. We still cannot observe Mary’s consciousness directly. But we do not doubt in an everyday, sensible fashion, that it exists, if Mary shows all the standard physiological and behavioral features associated with consciousness. We can, if we choose, look at extreme cases where Mary may look conscious without being conscious, such as sleep walking or epileptic automatisms. But we believe that if we can do the proper brain recordings we will generally resolve our doubts.

    Which brings up a major difference in contemporary philosophical thought (quite different from scientist-philosophers before 1900 or so) — namely the fact that a single conceivable counter-example serves to falsify a hypothesis in logicist philosophy, but not in science, engineering, farming, or everyday life. Science is happy with probabilistic weight of evidence. Current philosophy demands all-or-none arguments. That is why Mary’s hypothetical state of mind doesn’t bother medical doctors or scientists, but evokes decades of debate in philosophical circles.

    Now there are historical cases where all-or-none thinking has indeed been fundamentally important. Those cases are in what we now call logic and mathematics, although fields like geometry were initially viewed as empirical disciplines, of course. In the history of mathematics we have paradoxes like Zeno’s famous paradox, which turned out not to be paradoxes at all — but their solution after some twenty centuries or longer was very important indeed. Roughly speaking, Zeno’s paradox had to do with the problem of infinitesimals and what later on turned out to be infinite series, either converging or diverging. Thinking about Zeno’s paradox in mathematics and logic was very important because it led to a clarification of the foundations of the infinitesimal calculus, with major ramifications elsewhere in mathematics and highly mathematical sciences.

    Zeno’s paradox was therefore arguably not a real paradox, but it was extremely productive over the very long term.

    Other proposed paradoxes turn out to be only apparent. Others may be real, like wave-particle duality, and also productive of important mathematical and scientific advances. Finally, some putative paradoxes may not be paradoxes at all, and they may turn out to be a waste of time, except for their entertainment value.

    What I do not know is which of those four categories applies to contemporary philosophy of mind. What we see (from outside of the field) is a great collection of putative paradoxes, some of which are said to be unsolvable as posed. As a scientist who is happy to carve out a single rock from the great Mount Everest of mind and brain, no matter what its shape or size turns out to be, I find myself rather baffled. I notice that vast domains of evidence about consciousness and the brain are never touched by our philosophical colleagues. But I have the most compelling sense that there’s gold in them thar Mount Everests. Without being able to prove it, I also have the sense that many philosophical questions may turn out to be unproductive.

    Zeno’s Paradox was only resolved after 20 centuries or so. It was a wonderful problem to pose, and the answer turned out to be fundamentally important. But if you were Zeno’s friend in the 5th century BCE, where would you be spending your time? On a putative paradox that does not stop you from your daily activities? Or on, let’s say, finding a way to triangulate the height of a mountain, a method that will help tunnel builders and architects and engineers every single day for the next 20 centuries?

    Where’s my straight edge and compass? I think I know what to do now!

    Yours,

    Bernard Baars

  • Pentti O Haikonen

    In his paper “Qualitative experience in machines” William G. Lycan discusses the topical problem of machine qualia. The experience of qualia is a major difference between humans and contemporary robots and it is understood that robots should eventually also have qualia in order to equal humans mentally. At this moment it is not clear how to make robots have qualia and engineers would welcome any practical ideas about this [1]. Unfortunately Lycan’s reasoning does not help here.

    Lycan proposes a robot, an artificial person, Harry, who is an android equalling a talented 20th-century American adult. Harry behaves just like a human being; therefore, would it be fair to assume that he also has qualitative experiences? Yes, answers Lycan. In fact, according to Lycan, to assume otherwise would be an act of chauvinism and racism. Thus it has been proven that machines can have qualitative experiences, or has it? What, if anything, is wrong with this line of argument?

    Homer Simpson is very human. He talks and acts like a human. He displays human emotions; he is sometimes happy and many times in great pain. Therefore, would it be fair to assume that Homer Simpson would also have qualitative experiences, qualia, or would denying this possibility be an act of racism? It should be obvious to everybody that this is a preposterous proposition; Homer Simpson does not have any mental states because he does not really exist. He is just a cartoon character and a figment of imagination. But alas, so too is Harry. Why, then, would Harry have any qualitative experiences? He does not, because he does not really exist. Lycan’s reasoning is faulty because the extraction of real-world facts from arbitrary figments of imagination does not really work.

    The question about the possibility of qualitative experiences in machines remains unanswered.

    1. P. O. Haikonen, Qualia and Conscious Machines. International Journal of Machine Consciousness, Vol. 1, No. 2, 2009 (In print).

    • Matthew Haentschke

      Haikonen, in his comparison of Harry and Homer Simpson, seems to ignore the Frame Problem, which Lycan has stipulated to be solved.

      The Frame Problem, as I understand it, is the issue of dynamically determining which things are not affected by an action without explicitly specifying exactly which things they are – or rather, the problem of generating an infinite series of logical actions, with concurrent side effects, from a finite series of non-side effects. When Lycan claims that his Harry has overcome the Frame Problem, he has created a machine that, with a static program, can generate an infinite number of logical actions. We, as humans, have overcome this problem. With our fixed number of neurons and amount of brain matter, we are able to generate a series of non-affected conditions for any action.

      In Haikonen’s example of Homer Simpson, there is an error in comparing Homer and Harry. Homer Simpson, a cartoon character, does not have the ability to overcome the Frame Problem.  Every action that he takes has been predetermined (as well as the non-results of his actions, i.e. the background scene does not change as he walks through it) by the cartoonist at work. Each “frame” (used in the animation sense) of his existence has been crafted by an external being. This being has generated a finite series of non-results for Homer from its infinite series of non-results. Homer Simpson is a sub-set of the actions of his creator.

      Conversely, Harry has been created through one additional step.  From an infinite series of non-results from a creator being there results a finite “organism” which can generate an infinite series of non-events for itself. Harry is therefore a set at least equal to that of his creator.

      Haikonen makes a weak claim comparing Harry and Homer Simpson that appeals to the illusion that Homer Simpson is an actual being, which is exactly what the cartoonist wants us to believe!

  • William S. Robinson

    If you want to make a robot that has qualitative experiences, it would be best to try to build in the causes of those experiences. What exactly these are in us is not known, but leading researchers look for them among the activation states of some sets of our neurons. Perhaps such activation states could be realized in non-organic materials. In that case, maybe we could build a robot with qualitative experiences. But if we can’t figure out how to produce the same (or very similar) activation states in non-organic hardware, we’d have every reason to doubt that our robots had any qualitative experiences.

    Lycan doesn’t say anything about activation states of Harry’s artificial constituents. So, he hasn’t implied that Harry has the causes of qualitative experiences, and therefore he hasn’t implied that Harry has any qualitative experiences. The behavior and information flows that *are* featured in Lycan’s account are not enough, because we don’t know that these cannot be produced in ways that leave out the activation states that cause qualitative experiences.

    Lycan represents “the ordinary person’s evidence” for other people’s qualitative experiences as behavioral. The argument, then, is that Harry exhibits the same behavior, and so it would be arbitrary to deny it qualitative experience. But this argument misrepresents the ordinary person’s evidence. It leaves out our knowledge that other people have the same construction as we do, and thus omits the important point that we can apply the ‘same causes yield same effects’ principle to other people (but not to Harry).

    Lycan anticipates this point, saying that “the standard behavioral reasoning does not depend” on assumptions about others’ innards. Yes and No. Yes, the standard 20th Century *philosophers’* (mis)representation of our reasons for attributing qualitative experience to other people appeals only to behavioral analogy. But No, we actually have a better reason for such attributions: Other people are made like us, so, very likely, when they get the same inputs, they experience the same effects.

    Lycan anticipates this point too, saying (1) that a child who did not know about our common biology would be just as well justified as we (who do know) in attributions of other people’s qualitative experiences. But that’s just asserted, and seems implausible to me. But note: what’s required for the child’s reasoning is not sophisticated biological concepts. “They’re made like me” is a premise for a good argument, and that argument gets better and better as we learn more about neural sensory systems.

    Lycan also suggests that (2) videotaped behavior of possible interplanetary visitors would justify us in attributing qualitative experiences to them. We wouldn’t even raise the issue unless our philosophical guard was up. I am not convinced. Octopi are clever, and they retract from and subsequently avoid contact with damaging objects. But I don’t know much about their brains, and suspect that their evolutionary history may have given them a neural constitution quite different from mine. I think it’s a serious question what, if anything, they feel when cut. If future research shows that their brains are, after all, quite like ours in those respects thought to be the neural causes of our conscious pains, I will be much more willing to think they have pains like mine. The same points would apply to creatures from Planet X.

    I agree with Lycan in rejecting artifactuality, not being alive, and not being made of protein as defeaters for claims for Harry’s qualitative experiences.  But the only relevant defeater would be lack of the causes of qualitative experiences.  We have been given no reason to think these causes are present in Harry.

    The Henrietta case can’t be settled on the basis of the description that Lycan supplies. If her neurons are replaced with hardware that gets into the same activity states, then she still has the causes of qualitative experiences, and so still has them. If, however, the replacement is with devices that cause the same behavior but without getting into the same activity states, then she lacks the causes of qualitative experiences, and she is a zombie too, no matter what she says.

  • Joshua Kerley

    First of all, let me say that this will be a challenge for me to write a reply to, because I have been analyzing and reanalyzing arguments that I would make against Lycan and Haikonen. For myself, I have not chosen a side in this argument. I feel like I have just been presented with a mountain of evidence and first have to get my head around the claims.

    Lycan makes a lot of suppositions, which is one of my main problems with his argument. Suppose we have a cyborg and suppose it isn’t any of the things Lycan rules out. It isn’t based on von Neumann architecture and it doesn’t have any problems. Let us suppose, that is, that it is perfect. Just as Lycan stipulates. My first intuition is that, well, we don’t have such an object. It doesn’t exist. It might never exist. But that does not change the fact that I feel like the only way to argue whether or not this Harry could be human is to believe that he does exist. This is the reason why Haikonen’s argument, to me, doesn’t hold any validity. Controversial arguments like this, dealing with classifying robots as humans, should be dealt with now before they enter our world, so that we are not caught by surprise.

    But, back to Lycan: Even after I pretend that Harry exists, my intuition is that he is still not a human. He is not the same internally. We don’t know how he analyzes different situations or what his logic is. Furthermore, his logic was created by other humans, and therefore can be controlled quite easily, just by a quick update of his software. I feel like true humans are a little more rooted in their logical capacities.

    When Lycan begins to discuss Henrietta I have a more difficult time articulating my objections. I have only heard arguments for or against humans who are having their organs replaced, such as the heart and other life-sustaining organs being replaced by computers. That argument seemed easy to me. The one that we are discussing, where you start replacing neurons and the parts of the brain that control personality, emotions, etc., becomes very much a gray area for me. It’s an issue, as I have said, about which I still haven’t fully decided where I stand.

  • I think Lycan is correct in his argument against consciousness chauvinism. There’s now evidence* that conscious qualitative states in humans are associated with certain higher-level cognitive, representational functions (e.g., those carried out in Baars’ “global workspace”), functions which permit flexible behavior and internal simulations. This suggests that a plausible basis for attributing such states to non-human systems is internally realized functionality and the flexible, self-maintaining and world-negotiating behavior such functionality makes possible. Absent proof that the human physical basis for such capacities (evolved neural wet-ware) is necessary for consciousness, any system that can do what we do, functionally and cognitively, is a plausible candidate for having qualitative states, whatever its origins. This doesn’t mean that consciousness might not accompany systems with very different functionalities and behavior, for instance an octopus or a stationary AI charged with controlling our economy (worrying itself to death, poor thing!), but at the very least we shouldn’t withhold ascriptions of consciousness merely on the basis of differing physical substrates.

    *for instance: Gaillard, R., Dehaene, S., Adam, C., Clemenceau, S., Hasboun, D., et al., “Converging intracranial markers of conscious access,” PLoS Biology, Vol. 7, No. 3, 2009.

  • William Kornahrens

    Professor Lycan presents a valid point, namely that we should investigate whether machines can have qualitative experiences despite differences in origin. The potential problem with Lycan’s hypothesis is that not everyone defines humans on a purely behavioral basis. I believe that Mr. Kerley and Mr. Long have both alluded to the same problem, which is the ownership of emotions, motivations, and beliefs. I think most people will agree that we have progressed technologically to the point where we can (partially) give machines the capability of having human conversations and even having human voices. My GPS tells me instantly when I have made a wrong turn. The GPS has successfully used logic to determine that I, for whatever reasons, ignored its instructions and that it must now determine how to fix my mistake. However, unless the voice is specifically recorded in a certain fashion, the voice telling me to make a U-turn does not express frustration at my failure to follow its instructions. The GPS does not have an opinion or an emotion related to my mistake, whereas my passengers could be irritated, amused, or angry with me.

    I think the general problem illustrated by this example can be summarized as the “lack of uniqueness” of the machine. I can easily picture a life-sized robot capable of simulating emotions and putting together sentences to describe the logic behind its decisions. The problem is the “simulating emotions” part. I don’t doubt that robots can use logic, but to suggest that they are able to independently experience the act of having emotions, beliefs, and personal motivations that were not preprogrammed is very difficult for me to conceptualize realistically, even within my own imagination. This may be because, unlike my example of the GPS using language and logic to communicate directions, there are not currently any examples of technology (to my knowledge) capable of giving a machine truly genuine feelings owned completely by the machine. The presence of such technology would be absolutely essential in the making of any robot that we could consider human. The alternative is that we have a clever imitation of a human, one that may be able to do everything that we can except be unique. If we can mass-produce such machines, with no blend of eccentricities or special characteristics that make up a unique personality, then there is no reason to consider them human. Note that I am not saying that uniqueness is the only criterion that we should use for establishing the difference between what is human and what is not. However, it does seem to be a good potential criterion for separating us from machines until such a line is crossed. Once this criterion is no longer valid due to significant technological advances, then Lycan’s hypothesis will have much more weight and relevance to our discussion on the human status of these machines.

    There are other potential problems that have not really been discussed yet. Specifically, I am wondering how we can still call an “adult” machine a human if it never grows old. Or even the converse: how can we call a “child” machine human? In both scenarios, we have robots that do not develop physically (unless their hardware is synthetic “wetware,” as I believe it is termed). The other issue is that they are not developing mentally. I suppose you could argue that machines can learn from making mistakes. After all, a computer brain could conceivably hold vast amounts of information and access it much more readily than a human mind can. But then doesn’t this mean that robots will easily outperform the original Homo sapiens, effectively separating them from us? To address this question, suppose that we were able to program machines with the capabilities of only a child. However, is the machine really that useful to us if it has programming comparable to that of a child? Such a question probes deeper into why we would even want to create Homo sapiens 2.0. Are we going to make them serve us as we currently do? How else are we going to ensure that our race is not made obsolete by the possibility of a superhuman? Such ethical issues and potential ramifications for our species cast serious doubt on whether we should even consider the possibility of trying to create such machines, even if we could.

  • Objections to machine consciousness are equally a quandary for conscious brains, William Lycan argues. I agree. And therein lies Lycan’s problem.

    AI/functionalism assumes neuronal firings and synaptic transmissions are fundamental bits in a computer-like brain. Lycan’s simulated human Harry and the unfortunate human Henrietta (whose brain is replaced neuron-by-neuron with tiny electronic micromachines) assume cognition and consciousness emerge from computation in networks of simple neurons. One problem is that consciousness depends on neurons fused into synchronized syncytia by electrical synapses called gap junctions. Replacing Henrietta’s neurons one-by-one would destroy the syncytial vehicle of consciousness.

    Another problem is Paramecium. A single-celled organism, it has no synaptic inputs and exists in no network. Yet Paramecium swims around, avoids obstacles and predators, learns, finds food and mates, and has sex. How? Paramecium relies on protein polymers called microtubules as its nervous system. (I’m not suggesting Paramecium is conscious, only that its cognitive functions are not trivial.) Brain neurons also have coherently excited microtubules with a potential information capacity of 10 to the 15th operations per second per neuron. This raises the bar for computational brain equivalence significantly, but doesn’t explain consciousness.

    In 1989 Roger Penrose wrote ‘The Emperor’s New Mind’, arguing for non-computable effects in human consciousness (not that consciousness doesn’t utilize computation, just that some other factor is also required). Penrose called for a type of quantum coherent computation in the brain influenced by Platonic values embedded in quantum gravity – the fundamental level of the universe.

    Penrose and I teamed to suggest neuronal microtubules were the brain’s quantum computers, connecting consciousness to the universe. Some (not Penrose) extrapolate our theory to a quantum spirituality, or soul. Recent evidence demonstrates biological quantum coherence in brain activities. Meanwhile, functionalist approaches to consciousness flail and flounder. AI should simulate a Paramecium before attempting a human brain.

  • Frank Jackson

    Comment on William G. Lycan on “Qualitative Experience in Machines”

    The issue about qualitative experience in machines turns on whether or not there are relevant differences between us and the machines. More precisely, it turns on whether or not there are relevant differences between us and possible machines of the future. It is obvious that I differ in relevant ways from my laptop. But do I differ in relevant ways from possible machines of the future that process information about their surroundings and use that information to interact with their surroundings much as I do? What I do with ‘wet’ organic innards, they do with ‘dry’ inorganic ones.

    Lycan says we don’t. There are no relevant differences. His main reason is that the (good) reasons we have for ascribing feelings to human beings – namely, behavior in circumstances – would apply equally to these machines of the future. Each is “a flawless human simulator”. I agree with him, but I do so only because I am (now) a materialist. For dualists the matter is not so clear-cut.

    Dualists hold that there are laws connecting bodily facts with distinct experience facts. If these laws connect functional bodily facts with experience facts, Lycan’s goose-gander argument succeeds. But maybe the laws link conjunctions of functional facts and, say, our neurological nature broadly conceived, with qualitative experiences. Maybe the causation of feelings is a joint effort by the functional organization and some aspect of our wet-ware. Given the anomalous nature of the dualists’ laws (something we materialists highlight), it is hard to see how dualists could be confident that functional facts alone matter for the causation of feelings. The remarkable behavioral capacities of human beings (and certain animals) give good reason to believe that what realizes their functional nature is of the same general kind, independently of knowing what that kind is. This means dualists need to allow as an open possibility that being of that general kind, whatever it is, matters for having feelings. (Of course if this is right, dualists should worry about brain operations that involve extensive replacement of wet-ware with hardware that performs the same functional role.)

  • If I’m running from a lion who’s pawed a gash in my leg, my body is communicating information by means of the qualia of “pain,” while a robot programmed for its own preservation will receive feedback in concrete quantitative terms. Biomechanical qualia are quanta: vast amounts of data known to us only in totality as sense.

    Biological machines are capable of reason, but are also programmed by conditioning; and reason and reflex can produce contradictory imperatives. If there’s a “choice” to be made, which mechanism is it that “makes” the choice?

    Consciousness is not complex calculation; it’s indecision. Create an indecisive computer, a neurotic computer, torn (having been given the imperative to survive) between the heuristics of conditioned response and calculation, and you’ll have a conscious non-biological machine.

    Mary the color scientist, seeing – sensing – color for the first time, will learn nothing new about color itself but will now give it a place among all the trillions of sense impressions over the course of her life which she has compartmentalized, characterized, and like as not narrativized into her personal logic. She will have a new understanding of color not as independent but in relation to herself as a form of experience within the totality of her imagining life.
    Mary will see, construct, and experience her red.

  • Bénédicte Veillet

    Lycan’s post seems to highlight the fact that we have conflicting intuitions about qualitative experience. On the one hand, we tend to think that what goes on inside doesn’t really matter: we would assume aliens have qualitative experiences based not on the structure of their brain (which we may know nothing about) but based on behavioral data (think of ET); and we would (initially at least) assume that Harry was sentient based on his behavior. On the other hand, we tend to think that what goes on inside *does* matter: when we learn about the inner workings of Harry and Henrietta, we feel tempted to change our verdict about their phenomenology.

    I think these intuitions reveal certain facts about how we *think about* phenomenology. It may be that our *concept* of qualitative experience is connected to our *concept* of life in such a way that we assume something needs to be alive to have qualitative experiences. This kind of conceptual connection would explain why learning about ET’s very different “brain matter” would not (I suspect) lead us to change our verdict about his phenomenology. Aliens are alive; machines are not. (If this is right, our way of thinking about living things becomes relevant, and unfortunately we may have conflicting (or unclear) intuitions about that.) On the other hand, our concept of qualitative experience may be connected to certain conceptions of behavior in a way that leads us to assume that if something behaves very much like us, it has qualitative experiences.

    Of course, the interesting question is now whether any of our intuitions about qualitative experience are in fact rationally defendable. Lycan simply seems to be pointing out that it is much harder to rationally defend the intuition that (non-living) machines cannot have qualitative experiences than most people realize. I think he’s right.

  • Rachel Geiger

    Haikonen presents an interesting objection to Lycan’s claim that machines can have qualitative experiences. While I agree that very few individuals would seriously attribute qualia to fictional characters such as Homer Simpson, I believe that we should not dismiss the idea that some fictional beings may possess qualia.

    Alternative personalities created by individuals with Dissociative Identity Disorder (DID) behave in human-like ways. Regardless, alternative personalities are creations of the human mind and are therefore fictional. Still, would we not attribute them qualia? They possess organic bodies, experience a wide range of emotions, and have the ability to display logic and reasoning. While being expressed, alternative personalities seem to be self-aware and can possess names different from that of the dominant personality. They also interact with their environment in all the ways that a normal person can. In addition, someone with DID could meet another individual for the first time, while expressing an alternative personality, without the other individual ever noticing anything awry. Most importantly, alternative personalities have a real-world presence.

    Robots as advanced as Harry may not be technologically feasible now, but I do not believe we can dismiss the possibility of a machine eventually being able to have qualitative experiences just yet.

  • Bernhard Nickel

    Much of the debate turns on what exactly we count as machines. Lycan’s suggestion:

    Let us mean a kind of artifact, an information-processing device manufactured by people in a laboratory or workshop, out of physical materials that have been obtained without mystery or magic.

    Lycan then points us to the possibility of creating what we might call artificial humans, paradigmatically Harry(2). Now the argument: given that artificial humans are definitely machines (by the criterion just mentioned), and given that we want to credit them with qualitative experience, we should accept that there’s no in-principle obstacle to the existence of machines with qualitative experience.

    I think that what Lycan really shows is that we may have to rethink where we draw the line between humans and machines—specifically, we may well have to consider creatures like Harry(2) human. That is certainly a reasonable moral to draw given the thought experiment centering on Henrietta. But for all that, Lycan hasn’t really touched the question of whether machines, as contrasted with (artificial or “natural”) humans, can have qualitative states.

    To make good on my second point, consider a machine that exhibits all manner of intelligent behavior, but that doesn’t look like you or me. Not even a little bit. Let it consist of many different parts, distributed in various rooms, connected by radio. Each part has information storage and retrieval capabilities, and each has a screen and keyboard with which it communicates. It has many different specialized appendages, and some of its parts can move around on tracks. Crucially, it can intelligently respond to commands, engage with its environment, and learn.

    I do not find myself inclined to credit this machine with qualitative states in addition to intelligence, even if I admit that its intelligence far exceeds my own. That means that Lycan’s argumentative strategy does not get off the ground—if I’m not inclined to attribute qualitative states to the machine, the question of defeaters is moot.

  • Tria Metzler

    In ‘the case of Henrietta,’ Lycan describes a scenario that has stumped me for some time. At what point does organ transplantation result in a change of identity? Some may think, for example, that once one’s heart is replaced, one is no longer human. But could this be right? According to this principle, those who suffer heart failure and receive a transplant would no longer be themselves. But that seems absurd. Would they now, instead, be the person from whom they received the heart? I doubt many would argue in favor of this perspective. It then seems to follow that hypothetically replacing one’s heart with an artificially created (but in all other ways equivalent) transplant should also not affect one’s personhood or identity, or one’s ability to have qualitative experiences. The heart is, after all, simply a muscular structure responsible for circulating blood throughout the body; it is imperative to survival, but it cannot truly define who we are as individuals.
    Perhaps others may argue that once the brain is replaced with an artificial component, the individual is no longer himself or herself, or even human at all. This claim, I admit, is more difficult to defeat, as the brain is a much more complex organ and still not fully understood. Lycan, however, approaches this situation with the succinct explanation that although the entire central nervous system may be replaced, Henrietta herself has not changed; her desires, personality, and intelligence remain the same. In addition to this argument, I would like to contribute another thought: suppose Henrietta has been through the ordeal of having her central nervous system replaced with artificially created substitutes. Now suppose she realizes this may affect the opinions of others with respect to her personhood or ability to have qualitative experiences, and she is deeply distraught by this idea. Her greatest desire is simply to still be thought of as capable of qualitative experiences. I wonder whether this desire does not in itself contribute much to her capability of having qualitative experiences. If she is capable of having these desires or aspirations, then there must be some sort of mental capacity on Henrietta’s part; she must be able to assess a situation and realize that she would be happier if it were slightly altered.
    If a humanoid machine were ever so advanced that it developed the desire to be treated as a human; that it desired to be deemed capable of qualitative experiences; and, maybe most importantly, that it desired to be something other than what it is (an artificially created being), would this by definition make the machine more than a machine? Simply put: does having the desire to be capable of qualitative experiences mean you have qualitative experiences? I believe it might.
    Some individuals, Haikonen for example, believe this hypothetical situation is irrelevant to the matter at hand. He argues that, because Lycan uses the hypothetical example of ‘Harry,’ all relevance of the argument is destroyed simply by the fact that this situation does NOT exist. I, however, am intrigued by Lycan’s example. Though the situation does not exist today, I believe Lycan’s point is that this predicament has the potential to exist in the future. For that matter, an android Homer Simpson could be created in the future, and we could discuss artificial Homer’s humanity as well. The focus of this argument should not be on whether or not the situations presented within it actually exist; the focus should remain on the question: if they did, what would that mean? Who am I to guess what humankind will or will not be capable of decades from now? If, every day, scientific discovery gets us closer to the ability to create prosthetics from human tissue or stem cells, who am I to assume we could not one day create an entire being? Thus, on the assumption that creating an entire being will one day be possible, I think it imperative to begin discussing the implications of this ability now.

  • I’m writing obviously as an amateur, and this will be my last unsolicited comment.

    What exactly are qualitative states? The definition of qualia in the Stanford Encyclopedia of Philosophy begs the question. Perception is physical: experience, sandpaper, etc. When animals sense, we categorize things within the history of our perceptions (patterning as comfort). Our history is foggy, and facts and values are confused from the start. The machines we make do not have this complex, conflicted relation to the world; they are not desirous or anxious. They have no sense of telos, not even a blind drive for survival.

    It seems easier to want to ascribe qualitative states to man-made machines than to describe the mechanics of qualitative “experience” and “perception.” To a machine, the blueprint for a building and the building itself are identical, while animals require the presence of the building to understand the thing. And, as with the color red, in doing that we’re not understanding the building or the color but our categorization of it, and all the details that we analogize in relation to what we’ve already stored away. We’re bombarded by perceptions and by evocations resulting from perceptions. But all of that can be described in quantitative terms. What’s private, as experience, is that each of us contextualizes the data according to our own history. Every animal has its own filing system and its own adaptive conditioning. Animals are drunken machines, each of us drunk in our own way.

    The limit of conceptualism, it seems to me, lies in the unwillingness to mark the distinction between blueprints and buildings, between ideas and experience, because ideas are universally available while one’s experience of a building is private and therefore treated as secondary. But what this means is that the ability to communicate always-private experience atrophies, while experience is still our primary relation to the world. The conversation above seems more about desire than about the world we will always only know as experience, while shying away from real questions regarding our biological machinery.

  • William Lycan supports a “goose-gander” thesis according to which there is no good objection to machine consciousness (in the “qualitative experience” sense) that is not equally a good objection to human consciousness (in the same sense). But he does so by using thought experiments involving futuristic science fiction – an artificial information-processing system named “Harry” that is functionally isomorphic to a human being in all its cognitive processing, including the same rich behavioral repertoire, and so on. So, in my view, the real debate will arise over the subtext – things that Lycan does not explicitly defend but assumes for his argument. In particular, Lycan must think that the science fiction reveals what is important about the nature of consciousness, given high-level abstract information-processing theories that would deem Harry to be conscious.

    But, as Lycan knows, that once-traditional view of mind science is now contested. Multi-level theories of “embodied and embedded cognition” are all the rage (the term was coined by Andy Clark), meaning theories that judge the details of bioengineering to be relevant to mental phenomena, as well as how that bioengineering arose, evolutionarily speaking. As a consequence, others might not think that Lycan’s science fiction reveals what is important about the nature of consciousness. For example, artificial Harry might lack the appropriate engineering. According to this perspective, one could grant that if artificial system Harry were functionally isomorphic to a human being in all its cognitive processing, then it would have qualitative conscious experience. But perhaps the antecedent is impossible – not unimaginable (on some accounts), nor conceptually impossible (on some accounts), but at least physically impossible, since it might well be a matter of law that only systems with the right neurophysical embodiment will have the appropriate information-processing capacity to sustain conscious experience. Mere hypothetical cases can’t decide this issue, only evidence gleaned from the appropriate sciences. So the perhaps unexciting conclusion is that we must await the development of cognitive science, which discovers the laws governing conscious experience.

    Or, finally, to turn from embodied to embedded features, artificial Harry might lack the appropriate evolutionary history that determines the mental functions. Indeed, Lycan once held a version of teleological functionalism whereby the mind’s subsystems are functionally characterized in evolutionary terms (Lycan, Consciousness, 1987, chap. 4). If Harry doesn’t have the right evolutionary history, one might argue, he doesn’t have the right mental functions either, including the functions of qualitative conscious experience. I find that these questions have no easy answers, and so I am not as confident as Lycan about machine consciousness.

  • Cara Spencer

    Here’s how I would put Lycan’s goose-gander thesis: there is no evidence that other people have qualitative experience that wouldn’t equally be evidence that certain possible machines also have it.

    So what’s our reason for ascribing qualitative experience to others? Lycan says it’s their behavior. And he’s clearly right that a machine could exhibit humanlike behavior. I think he goes wrong at step one, when he says that behavior is our only source of evidence for qualitative experience in others.

    Here is another source of evidence: I know that I have qualitative experience, and I can see that others are like me. So I can conclude that others have qualitative experience, too. When I look at a machine, I see that it isn’t like me. So I don’t have reason, in this case, to ascribe experience to it. And I’d argue that behavioral evidence about other people’s minds normally depends on this kind of evidence. If I don’t recognize that others are like me, then behavioral evidence isn’t probative in the normal way.

    That’s not to say that machines don’t have any mental states, or that we could never have evidence that they did. There could be other types of evidence for the mental lives of machines. That’s also not to say that behavioral evidence would be irrelevant without the recognition of similarity. If we were really faced with an alien, perhaps we could conclude that it was conscious on the basis of its behavior. But we would need more behavioral evidence than we would need from a fellow human to draw that conclusion. My only point here is that we have some evidence for qualitative experience in others that doesn’t carry over to the case of machines.

    Lycan says that behavioral reasoning doesn’t depend on the assumption of similarity, because if it did, children and naifs could never have evidence about other people’s beliefs. I would like to hear more about why he says so. It seems to me that children or naifs could easily recognize that another person is like them. You don’t have to know anything about the brain or how it works to make that basic recognition.

  • Daniel Farrell

    Haikonen’s rejection of Lycan’s reasoning, based on his Homer Simpson claim (the only argument in his response), has no relevance to Lycan’s actual argument. Haikonen says:

    “Homer Simpson does not have any mental states because he does not really exist. He is just a cartoon character and a figment of imagination. But alas, so is also Harry. Why then, would Harry have any qualitative experiences? He does [n]ot, because he does not really exist.”

    This is obviously false. Harry could, and in this thought experiment does, exist in every sense of the word. You can reach out and touch Harry, you can talk to Harry, Harry has gravitational interactions with the rest of the universe, and Harry moves through space and time. How could Harry possibly not exist? How could he be a figment of our imagination? Lycan’s goose-gander argument should be applied here: if Haikonen is going to make these assumptions about Harry, he is obligated to find a reason not to make them against humans. By his own words, it is Haikonen’s “reasoning [that] is faulty because the extraction of real world facts from arbitrary figments of imagination does not really work.”

  • Thanks to all the commentators. I shall begin with the methodological points.

    Fictional characters and fantastical hypotheses. Haikonen observes that Harry is a merely imaginary being, and argues that Harry no more has qualitative experiences (or any other mental states) than does Homer Simpson, for the simple reason that like Homer he does not so much as exist. True, of course, but as Kerley points out, Haikonen has misconceived the dialectic. My opponent, the biochauvinist, began the debate by proclaiming that no matter how similar a robot or other nonbiologic system might be to a human being in terms of information processing etc., the robot could not have qualitative experiences. That’s a conceptual impossibility claim. To refute it, all I need is a coherent hypothetical case in which a nonbiologic system would have qualitative experiences. Hence Harry, and I’ve argued, though defeasibly, that he would have them.

    Scientists are understandably annoyed by fantastical examples (which, compounding the offense, philosophers sometimes pretentiously call “thought experiments”). All I claim for mine is that it refutes the biochauvinist conceptual impossibility thesis. Contra Endicott, I do not assume, because I don’t believe, that “science fiction reveals what is important about the nature of consciousness.” If no one had made any a priori impossibility pronouncement, there would be no need for science fiction. Incidentally, I would expect any scientist to agree at least provisionally with my goose-gander view; there may indeed be contingent reasons why a sentient being could not be built from nonorganic electronic parts, but any such reasons would have to be discovered empirically.

    Epistemology. I’ll now concede to Robinson and Spencer that knowledge of material similarity would add to the case for sentience. After all, it would remove what is for some people a major barrier; there must be some reason why biochauvinism persists. (Interestingly, small children are not even faintly tempted by the notion that computers think and genuinely converse with them, and their objection is precisely that computers are not alive.) I do continue to maintain that our evidence regarding interplanetary visitors would be good enough, so I don’t agree with Spencer that absent material similarity, behavioral evidence would lose much of its force.

    I’m not sure why Robinson thinks I haven’t built in the relevant causes of Harry’s alleged experiences. I certainly have supposed that there are parallel causes. That the parallel causes are not organic just iterates the original issue. But I agree with Robinson and others that my Henrietta example does not settle anything.

    Embodiment/embedding. Here too there may be empirical reasons why no one could just build an android that would have qualitative experience (Endicott, Edenbaum). And as a die-hard teleofunctionalist, I entirely agree that a sentient Harry would have to have whatever it takes for his internal states to have the right functions. I do not agree that the latter requires specifically evolutionary selection; conscious selection by designers would do as well.

    Dualism. Jackson argues that a mind-body dualist will have special qualms about the goose-gander view: A dualist who grants causal interaction between qualitative experiences and bodily states must suppose that there are psychophysical laws, but could not presume to say whether those laws would relate the experiences to states functionally characterized or more specifically to neurophysiological or at any rate biologic states. That is indeed so, but I’m not sure why the same would not apply to us materialists.

    Machines vs. humans. Nickel speculates that I have misdrawn the distinction. It’s reasonable to consider Henrietta still human despite her now prosthetic brain. But I am entirely unconvinced by the example of the machine that does not look at all like you or me. If that machine is functionally isomorphic to us, it will interact with us just as Harry would, though perhaps we would not be quite as quick to award it sentience. Nor do I at all see why, even if Nickel is right about his own machine, my argumentative strategy based on Harry “does not get off the ground.”

    “Behavior” and psychological meaning. Long is quite right that I assume a univocal conception of behavior, and he is also right to question that assumption. (As he notes, we have already discussed the matter in print.) He holds that real behavior “is psychologically expressive because it arises out of the biological needs, interests, and concerns that develop naturally in living creatures”; Harry’s “behavior” looks like real behavior but does not express genuinely psychological states. At this point Long and I are at a stalemate. As a functionalist, I believe that Harry’s internal states are genuinely psychological, and I am unconvinced by arguments that biological life is required; but neither do I have any compelling argument against Long’s contrary view.
