Distributing/Disturbing the Chinese Room

Twenty-five years ago, John Searle posed a challenge to “strong” artificial intelligence (the program to create, in an artificial medium, intelligence comparable to that of humans).  He confidently proclaimed that his challenge would withstand the test of time, including any possible advances in computer speed, memory, and robotic appliances.  His challenge, the so-called Chinese Room thought experiment, attracted considerable interest in its heyday, second in controversial appeal only to Alan Turing’s famous “Turing Test,” whose themes it reflected as if in a funhouse mirror.  Today the piece has become something of a historical curiosity.  The “strong” AI program is now widely acknowledged as a failure (although for somewhat different reasons than Searle argued), so it would seem that resurrecting Searle’s rhetorical tour de force would be akin to applying electrical shocks to assembled cadaver parts best left in peace.  As the metaphor suggests, however, the Chinese Room is not so much dead as undead; while its ostensible purpose is moribund, the presuppositions and unconscious assumptions that inform it are still very much alive.  I think the Chinese Room is worth a second look not for the force of its argument but for what it reveals about contemporary ideas on what constitutes the essence of the human, especially intelligence, consciousness, and meaning.  Excavating these ideas and juxtaposing them with current controversies over the boundaries of the human will enable us to see what has changed, why it has changed, and what the change signifies in the quarter century that has passed since Searle delivered the coup de grace that failed to deliver.

Intel Celeron CPU, via Uwe Hermann’s Flickr photostream

The Chinese Room experiment is easily explained.  Suppose, Searle writes, that “you are locked in a room, and in this room are several baskets full of Chinese symbols” (32).  You do not understand Chinese, but you have a rule book (in English) that tells you how to manipulate the symbols.  “So the rule might say: ‘Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two’” (32).  “Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room” (32).  The instructions have been cleverly constructed so that an onlooker would think that after questions are passed into the room, appropriate answers issue forth.  This perception, however, is an illusion, for on “the basis of the situation as I have described it there is no way you could learn any Chinese simply by manipulating these formal symbols” (32).  The point he hastens to drive home is that this is just what a so-called “intelligent” computer program does.  “If going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese” (33).   Why?  “Because no digital computer, just by virtue of running a program, has anything that you don’t have.  All that the computer has, as you have, is a formal program for manipulating uninterpreted Chinese symbols. . . a computer has a syntax, but no semantics” (33).
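Seen computationally, the procedure Searle describes is nothing more than table lookup. A minimal sketch in Python may make this concrete (the rule strings and the single-lookup rule book are my illustrative assumptions; Searle supplies no actual rules):

```python
# A minimal sketch of the room's procedure as pure table lookup.
# The symbol strings are hypothetical stand-ins, opaque to the code
# that handles them, just as the ideograms are opaque to "you."

RULE_BOOK = {
    # incoming string -> outgoing string
    "squiggle squoggle": "squoggle squiggle",
    "squoggle squoggle": "squiggle squiggle squiggle",
}

def pass_through_slot(incoming: str) -> str:
    # The "operator" matches shapes and returns whatever the rules
    # dictate; the lookup succeeds or fails on form alone.
    return RULE_BOOK.get(incoming, "")

print(pass_through_slot("squiggle squoggle"))  # -> "squoggle squiggle"
```

Nothing in the sketch has access to what the strings mean; it runs entirely on form, which is exactly the sense in which Searle says the room has syntax but no semantics.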

We begin our interrogation by asking what role Chinese plays in this scenario.  Why not French, Swedish, or Thai?  Presumably Chinese appears because, as a non-alphabetic, non-Indo-European language, it will defeat any attempt by “you” (an English speaker) to guess at word meanings through cognates or common roots.  Chinese is also presumed to be so foreign to “you” that Searle’s English-speaking audience will immediately sympathize with his description of Chinese ideograms as “squiggle-squiggles” and “squoggle-squoggles.”  Chinese thus functions as the inscrutable other, that which stands outside and apart from a reader’s cultural context and experience.  Furthermore, this alienness is presumed to be absolute.  No amount of “manipulating symbols” will ever give “you” any hint about what words mean through their association with one another (which symbol strings might function as nouns, for example, and which as verbs).  The Orientalist function of Chinese thus reinforces an absolutely crucial point for Searle:  that no bridge can ever be created between syntax and semantics on the basis of associating and manipulating symbols.  Put another way, no bridge can be constructed between formal symbol manipulation and meaning.  Genuine (human) intelligence, Searle insists, has more than syntax; it also has content.  Thoughts and feelings are about something, and it is this “aboutness” (or intentionality, the term philosophers use to denote the “aboutness” quality) that marks the meaning-seeking drive essential to humans.

Given this narrative, one might object that, with enough time, “you” may be quite likely to make associations and hence to find meaning, however nascent, in the symbols “you” manipulate.  The division between syntax and semantics can remain inviolable only if no context can ever be created that would bridge the two; yet the simple act of mechanically matching symbols would be the first step in building such a context.  Also in play is the locus of human meaning-making.  By placing a human in a locked room and inviting readerly identification by using the second person, Searle performs an act of literary violence upon “you,” reducing “your” capacity for human understanding to a rote mechanical act.  Nothing in the room is represented as extending your own cognitive capacities, which remain so severely stunted that “you” can function only as a non-comprehending automaton.  This painful reduction of human capacity is then equated with a digital computer, a construction that has the effect of making the computer signify as an extremely inadequate person, an idiot savant incapable of ever attaining a properly full and rich human understanding.  Only so does the odd construction make sense in which Searle writes that if you cannot understand Chinese, then no other digital computer can either.  The formula goes like this:  reduce the human to an automaton; equate the human automaton with the digital computer; imply that this reduction of human capacity is the natural (and only) state the computer occupies.

MRI of brain, via Liz Henry’s Flickr photostream

What does this construction suppress or make difficult to see?  Perhaps most importantly, it implies that the computer’s cognitive capacities reside solely in the CPU, the Central Processing Unit for which the encapsulated human stands.  But of course a computer is more than this, including memory, data storage and retrieval mechanisms, user interface, and so on.  Searle attempts to respond to this objection by saying that even if one includes the Chinese characters, baskets, rule books, etc. as part of the computational system, the divide between semantics and syntax still remains absolute.  He reasons that since none of these artifacts understand Chinese any more than “you” do, nothing essential changes.

Here his assumptions contrast starkly with contemporary theories of emergence, in which systems exhibit behaviors that are more than the sum of their parts.  In particular, his assumptions are refuted by the idea of an extended cognitive system, a model that Andy Clark in Supersizing the Mind: Embodiment, Action, and Cognitive Extension refers to as EXTENDED.  Departing from fellow travelers such as Edwin Hutchins, who argue that extended cognitive systems serve as scaffolding for human cognition, Clark performs the radically heuristic move of considering them as part of human cognition, a conclusion in direct contradiction to the model he calls BRAINBOUND.  Although Searle has a more capacious view of cognition than many BRAINBOUND theorists, in that he considers feelings as well as thoughts to be part of mind, he participates in a BRAINBOUND worldview in many ways, for example by focusing his attention on the human automaton as the CPU while relegating the rest of the room’s artifacts to non-cognitive status.

If, on the contrary, we adopt the EXTENDED view that everything in the room is part of “your” extended cognitive system, the room can be said to “know” Chinese, at least in a behavioral sense.  As this qualification suggests, EXTENDED shifts the meaning of key terms, including “know,” “think,” and “understand.”  Once non-human cognizers are admitted as part of the system, self-aware consciousness can no longer be an adequate measure of what it means for the system to “know” Chinese.  The challenge posed by EXTENDED to Searle’s experiment is to question his assumptions that such terms must be understood in the context of self-aware consciousness, or at least embodied thought as represented by a human mind.

Another key term is meaning.  To see how it shifts in EXTENDED, consider the intimate relation between meaning and context.  As Othello discovers to his horror, meanings of words are notoriously context-dependent.  The context of full human life, at once evoked and reduced in the figure of “you” locked in the room, becomes in EXTENDED a cascading series of contexts.  The rule book, for example, “knows” which symbols match with which; the basket “knows” which symbols are incoming and outgoing; and so forth.  The EXTENDED model implies that context is not self-identical or solely human-centered but rather a chain of partially overlapping, partially discrete contexts interacting with each other as different cognizers within the system coordinate their activities.  In particular, meaning is tied in with the contexts in which information flows are processed.  As Edward Fredkin succinctly observes, “The meaning of information is given by the processes that interpret it.”  For a cell, these processes would include, for example, the flow of nutrients in and out of the cell walls, its metabolic activities, and its expulsion of waste.  To acknowledge the non-conscious nature of such activities, the EXTENDED model typically replaces consciousness with the broader term “cognition.”  As I am using the term here, cognition requires at a minimum an information flow, embodied processes that interpret the flow, and contexts that support and extend the interpretive activities.  In the Chinese Room, the basket counts as a cognizer, the rule book as another.  Even the door slot through which the strings are passed can be considered a cognizer, for it receives information in the form of incoming characters, interprets these characters through processes that allow them to pass from outside to inside and from inside to outside, and constructs a context through its shape and position.
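The room so described can be sketched in code as a set of minimal cognizers, each handling its own slice of the information flow. The rendering below is my own toy illustration of the definition just given (the class names and division of labor are assumptions for the sketch, not anything in Clark's account):

```python
# The room as an extended cognitive system: each component receives an
# information flow, interprets it through its own process, and supplies
# part of the context for the others.

class Slot:
    def receive(self, message: str) -> str:
        # Interprets by shape and position: admits character strings
        # from outside to inside, and nothing else.
        return message.strip()

class Basket:
    def __init__(self, symbols):
        self.symbols = set(symbols)
    def holds(self, symbol: str) -> bool:
        # "Knows" which symbols it contains, and only that.
        return symbol in self.symbols

class RuleBook:
    def __init__(self, rules):
        self.rules = rules
    def match(self, incoming: str) -> str:
        # "Knows" which strings pair with which, and only that.
        return self.rules.get(incoming, "")

class Room:
    # No single part understands Chinese; the behavioral "knowing"
    # belongs to the coordination of the whole.
    def __init__(self, slot, basket, rule_book):
        self.slot, self.basket, self.rule_book = slot, basket, rule_book
    def answer(self, question: str) -> str:
        symbol = self.slot.receive(question)
        reply = self.rule_book.match(symbol)
        in_stock = all(self.basket.holds(s) for s in reply.split())
        return reply if in_stock else ""

room = Room(Slot(), Basket({"squiggle", "squoggle"}),
            RuleBook({"squiggle": "squoggle squoggle"}))
print(room.answer(" squiggle "))  # -> "squoggle squoggle"
```

The point of the sketch is only that "knowing" Chinese appears nowhere in any single class; it is a property of their coordination.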

How does this view of cognition impact the absolute separation between syntax and semantics that Searle decrees?  As we have seen, Searle associates syntax with mechanical ordering, whereas semantics implies that thoughts and feelings have contents, and moreover that these contents are crucial components of understanding and knowledge.  If we agree with Searle that mind somehow emerges from brain, how does mind get from the brain’s non-conscious firing of neurons, mechanical operations of neurochemical gradients, and other non-conscious activities to visions of God or mathematical proofs?  Contemporary answers to this age-old question, although still incomplete and controversial, nearly always involve the recursive feedback and feedforward loops widely acknowledged as necessary for emergence to occur.  A range of cognitive models as diverse as Maturana and Varela’s autopoiesis, Churchland et al.’s neural nets, Edelman’s neuronal group selection and re-entry, and Hofstadter’s fluid-analogy computer programs incorporate recursive loops as central features.  The loops are crucial because they provide the means to bootstrap from relatively simple components to interactions complex enough to generate emergence.
Emergence does an end-run around Searle’s absolute distinction between syntax and semantics, for it implies that syntactical moves, if combined in structures making use of recursive loops and employing complex dynamics, can indeed bootstrap into semantics.
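A standard illustration of this bootstrapping, simpler than any of the models just cited and offered here only as an analogy, is Conway's Game of Life, in which a purely formal local rule, iterated recursively, gives rise to higher-level entities the rule itself never mentions:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.

    'live' is a set of (x, y) cells. The rule is purely syntactic:
    count neighbors, then apply birth-on-3, survive-on-2-or-3.
    """
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "glider," an emergent entity: after four mechanical iterations
# the pattern reappears intact, shifted one cell diagonally. Coherent
# "motion" arises although the rule mentions neither patterns nor motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

The glider is not, of course, semantics; but it shows in miniature how iterated formal rules can generate stable higher-level entities that demand description in a new vocabulary, which is the form of the claim emergence theorists make about mind.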

Although Searle positions his thought experiment as a refutation of strong AI, he shares with it certain key assumptions, particularly the emphasis on individual human thought.  Strong AI took as its model the single person thinking; this was the phenomenon researchers sought to duplicate in artificial media.  It is not surprising, then, that the strong AI literature is replete with scenarios in which intelligent computers compete with or supersede intelligent humans (Searle rehearses some of these in his argument), for in this view computers and humans seek to occupy the same ecological niche.  In the EXTENDED view, the emphasis on the individual is transformed by recognizing that the boundaries separating person from environment are permeable in both directions, inner to outer and outer to inner.  Always already a collective, the individual human is less a self-evident entity than the result of a certain focus of attention.  Shift the focus, and the scene modulates from “you” locked in a room to an extended cognitive system in which “knowing” Chinese is a systemic property rather than something inside “your” head.

Would this construction be likely to satisfy Searle?  Probably not, for it involves re-thinking and re-positioning terms he takes for granted.  From my point of view, this is precisely the point.  The value of re-visiting his thought experiment is not to argue, once again, whether it is right or wrong but to use it as a touchstone to gauge how far contemporary models have moved from his assumptions:  from cognition in the head to cognition distributed throughout a system; from the individual as the privileged unit of analysis to the complex dynamics of interacting components; from the language-specific unitary context of a culturally-bound and situated person (“you”) to a cascading series of overlapping contexts operating at macro- and micro-scale embodied specificities; from the human as a self-contained and self-determined person defined by its contrast to an automaton to the human as an assemblage containing both biological and non-biological parts, some of which may be automatons considered in isolation but which are capable of emergence when enrolled in an extended cognitive system.  In brief, the shift is from a person defined by individual consciousness to a collective defined by emergence.

This is the transformation through which our society is currently living.  Many would still identify with Searle’s assumptions, but EXTENDED poses a strong challenge that, at the very least, invites us to reconsider what constitutes the essence of the human.   Just as individual consciousness was the lynchpin of the liberal humanist subject, so Deleuzian assemblages, cognitive collectives, and dispersed subjectivities are the hallmarks of an age when globalization is blurring the boundaries of nationhood, transnational economies are transforming socioeconomic relations, and computer technologies are creating networks that make global communication an everyday fact of life.  As Gilles Deleuze observes in “Postscript on the Societies of Control,” the question is not whether the current configuration is better or worse than liberal humanism but rather what opportunities, challenges, and resistances are specific to the new models.  The first step in answering these questions is to recognize what those specificities are.  For that, we could do worse than to re-visit the Chinese room, excavating its assumptions as a measure of where we have come from so as better to decide where we want to go.

Endnotes

The “strong” AI proponents that Searle cites include Herbert Simon, Allen Newell, Freeman Dyson, Marvin Minsky, and John McCarthy.

Works Cited

Churchland, Paul.  1995.  The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain.  Cambridge:  MIT Press.

Clark, Andy.  2008.  Supersizing the Mind: Embodiment, Action, and Cognitive Extension.  New York: Oxford University Press.

Deleuze, Gilles. 1992.   “Postscript on the Societies of Control,” October 59 (Winter), 3-7.

Edelman, Gerald M. 1993.  Bright Air, Brilliant Fire:  On the Matter of the Mind.  New York: Basic Books.

Fredkin, Edward.  2007.  “Informatics and Information Processing versus Mathematics and Physics.”  Presentation at the Institute for Creative Technologies, Marina del Rey, May 25.

Hofstadter, Douglas. 1995.  Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought.  New York: Basic Books.

Hutchins, Edwin.  1996.  Cognition in the Wild. Cambridge:  MIT Press.

Maturana, Humberto and Francisco Varela.  1973.  Autopoiesis and Cognition: The Realization of the Living.  Robert S. Cohen and Marx W. Wartofsky (Eds). Boston Studies in the Philosophy of Science 42.  Dordrecht:  D. Reidel Publishing Co.

Searle, John.  1984.  Minds, Brains, and Science.  Cambridge:  Harvard University Press.

Turing, Alan.  1950.  “Computing Machinery and Intelligence,” Mind:  A Quarterly Review of Psychology and Philosophy, 59.236: 433-460.

25 comments to Distributing/Disturbing the Chinese Room

  • I’m surprised you didn’t refer to Dan Dennett at all – it seems that he’s made very similar points to these a long time ago.

  • Katherine Hayles

    Thanks for bringing up Dennett. In fact, he was much on my mind while writing these comments, especially his argument in Kinds of Minds, where he talks about how higher functions such as consciousness must emerge from lower-level processes and also considers that sub-cortical or non-cortical processes and entities might also be said to have minds.

  • Hayles poses the question of consciousness as a problem in the history of Artificial Intelligence; more generally, she revisits a moment in the development from a first- to a second-order cybernetics, from a formal concern with syntax to more situated concerns with meaning, embodiment, and a new, emergent, reflexivity that goes beyond individual consciousness. What concerns the Humanities, and for current purposes, ‘the human,’ in this presentation might be the way such reformulations open the field of inquiry to a range of subjects unavailable to a humanistic field that developed, after all, in a post-WWII context. How culturally narrow Searle sounds, when he sounds out those Chinese characters. How very Cold-War and pre-globalization.

    Hayles presents for consideration “the idea of an extended cognitive system,” where cognition is larger than consciousness and the reflective individual (the Delphic imperative to “know oneself” cited by Ian Hacking) is no longer the primary source of understanding. This cognitive expansion is consistent with the vastly extended range of subjects available today to humanistic study. Nobody will deny the richness of this topical expansion; many have found, and will find, liberatory potential in cultural exchanges among nationalities, language groups, and genders. No doubt, given economic imperatives, more students in the U.S. are actually learning Chinese. But there’s also a loss of coherence in the proliferation of /things/ that humanists are now expected to cover. Over time, the past generation’s cultural limitations and perceptual blindnesses become visible and available for critique; stories become smaller and smaller, ever more situated but also, literally, endless. When one looks at the literary works that Hayles has written about elsewhere (most recently in Electronic Literature: What Is It?), one notices just how resistant such exercises are to /narrative/. We have, in such work, numerous formal constraints and medial specificities – the software, platforms, and writing spaces (many of them quickly obsolescent); the mixture of written and coded languages; image and text; music, sound, and noise. We have, in the current environment of discourse, a laboratory for the kind of extended cognition that Hayles asks humanists to consider, as a way to resituate the human in electronic environments.

    Surely, Hayles is pointing us in the right direction. But I also notice, reading this entry, just how powerful classical self-knowledge, provincial cultures, and perceptual blindness have been as means of narrative selection, and how powerful the idea of the self-contained individual consciousness has been as a narrative focalizer. I’m not sure how, or whether, these necessary narrative elements can or will survive in the current media environment, which (like extended human cognition) is filled with automatic, non-verbal, variously embodied, and potentially emergent processes. I should have a better idea, by the end of the summer, when I will have finished reading some 300 works of electronic literature and the critical writing that tries to contextualize this work, for an archival project commissioned by the Library of Congress in collaboration with http://www.archiveit.org. (see http://eliterature.org/wiki/index.php/Main_Page).

  • William Rasch

    Kate Hayles frames her discussion of the move from BRAINBOUND to EXTENDED views of cognition as an ongoing debate about what “constitutes the essence of the human.” We move, on her account, from a ghost-(‘consciousness,’ ‘mind,’ ‘soul’)-in-the-machine model to the machine or non-trivial system itself, in which all components are physical (or at least “visible”) and of equal value. This EXTENDED, context-bound model is of a piece with the age, as her citation of Deleuze attests. Whatever it is that constitutes the essence of the human can now have seemingly innumerable functional equivalents – rule books, baskets, and door slots, for instance.

    Yet another component seems equally necessary, namely the (deceived or undeceived) “onlooker” who is asked to pass a verdict on outcomes, or the even cleverer observer who withholds its verdict by expanding the context of meaning-generation and thus delays the “essence” question. This observer itself need not be “human,” but somehow must multiply levels of operation. Hayles herself models such ordering of levels. In re-opening the Chinese Room, Hayles is careful not to enter it through the same door as its initial contestants. The original argument is now moot; what isn’t are the presuppositions that are now being challenged by more recent discourses regarding intelligence and meaning. Thus, the original claims of the debate are not contested; rather, the original debate itself is. Hayles is not on the same floor as the Chinese Room, but (at least) one floor above or at one remove from the original scene.

    So, my question is: Is this facility to multiply and arrange levels what “constitutes the essence of the human,” or the essence of meaning? Put another way: Is there a distinction between operating and describing (acknowledging, of course, that description itself is also an operation)? Hayles cites Fredkin on meaning – “The meaning of information is given by the processes that interpret it” – and goes on to attribute meaning-generation to cells (much as Luhmann attributed “observation” to cells). The human machine is made up of cells (among other “things,” depending on how EXTENDED one wishes to be) and thus partakes of this level of nutritional, metabolic, and waste-expulsion meaning-generation. But the human machine extends itself also by way of symbols (the English language, say, in which these texts have been written) and can “describe” (operate differently than only on the previously described level) not just the meaning-generation of cells but also, reflexively, of symbols – just as, metaphorically speaking, Hayles escaped the Chinese Room to reminisce about it from a conceptual distance.

    So, my second question is: Have I, by “observing” the above, inadvertently yanked the ghost back into the machine? Perhaps another way of asking this is to wonder in what way (if any) reflexivity differs from recursivity.

  • Geof Winthrop-Young

    Between philosophy, literature and the sciences there lies a strange zone inhabited by odd creatures. They are quirky hybrids, the fugitive results of thought experiments that—much like the conceptual red-light district they populate—serve to connect as well as to separate the neighboring disciplines. Some resemble animals (Uexküll’s tick, Nagel’s bat, Maturana’s frog), some are spectral beings (Maxwell’s Demon), some are almost human and, inevitably, trapped either in closed rooms or in equally confined decision-making processes: the fooled human interrogator of the Turing Test, the Prisoners working out their Dilemma, and that most bizarre of all oddities, Rational Man, condemned to always making informed decisions even when buying very dumb stuff. And then there is the sinologically challenged inmate of the Chinese Room. As Kate Hayles points out, many of these creatures are undead rather than plain dead, given that some of the presuppositions that inform them are still very much alive. Kate, incidentally, knows the undead: equipped with the background of Professor Van Helsing and the panache of Buffy the Vampire Slayer, she has challenged (and sometimes impaled) Maxwell’s Demon, the Prisoner’s Dilemma, and many others.

    Kate, my problem is that I do not have a problem, and even less expertise, so I will only voice some very amateurish, intestinally located inquiries. As you know, the Chinese Room has sparked so much discussion that Pat Hayes once jokingly defined cognitive science as the ongoing research program of trying to show that Searle’s Chinese Room is false. Now, some of the critique is similar to yours. My first question, therefore, is: What do you consider the basic difference between your approach and the established systems reply to Searle, which argues that while the person in the room may not learn Chinese, the room itself—i.e., the whole system—can?

    The reason for my question is the following. If deconstruction itself took on human shape and started to have wet dreams, my guess is it would come up with something like the Chinese Room. It is its object of desire: an argument or thought experiment intended to showcase logocentric rationality at its finest that, however, is clearly based on suppressed presuppositions and hidden supplementations that make the argument work in the first place. You have summarized it very well: In order to show that humans are not computers, one performs a scene in which humans are like computers. Those CPU-centered computers, in turn, are projections of what humans are supposed to be. In short, establish implicit identity to establish explicit difference, which is always already based on projections.

    Now, it is difficult to read your analysis without asking oneself: What presuppositions are at work here? The systems reply sometimes presented the room as a brain and, subsequently, argued that to concentrate on the cognitive faculties of the person inside is like assessing the brain by focusing exclusively on the hippocampus. The room becomes a brain capable of learning Chinese. And while the EXTENDED argument clearly involves other parts of the body as well as items outside of the body, the question remains whether this extended assemblage is not a product of our reconceptualization of the brain itself as a heterogeneous assembly (a swarm inside the skull, as it were). In that case the EXTENDED paradigm would no doubt be more sophisticated than Searle’s approach (kudos, incidentally, for never talking about intellectual progress), but ultimately it would rest on a similarly cranio-centric presupposition.

  • Kate Marshall

    What I find so compelling about Kate Hayles’s location of the Searle challenge as a historical vantage point through which to view the becoming-collective of contemporary personhood in the control society is the status of Searle’s piece as, in Hayles’s words, “a rhetorical tour de force.” It seems entirely appropriate to approach and indeed register the shift “from a person defined by individual consciousness to a collective defined by emergence” by way of a locked-room mystery. For Searle can posit an inviolable separation between syntax and semantics only through the kind of literary game-playing that would make that distinction impossible to observe from the outside, even if that outside is a fictional one.

    If we are to recognize, as Hayles suggests, the Deleuzean narrative that spaces of enclosure become self-deforming modulations, and that the liberal humanist individuals who formerly occupied these analogical spaces are becoming “dividuals,” assemblages, and “dispersed subjectivities,” then the enclosure and the individual comprising Searle’s experiment appear to produce the very distinction that locks them in reference to one another.

    The locked room itself, however permeated or systematized, remains a fictional construct that can be taken seriously as such, and which makes a difference for the medial specificity of its preoccupations with syntax and semantics. It invites, perhaps, something other than a more conventional distrust of sophistry. To Hayles’s call to return to the Chinese room as “a measure of where we have come from so as better to decide where we want to go” I would add another historical layer, and view the early signs of the kinds of persons she describes through its blind spots. What Deleuze recognizes in Kafka, and what also appears in a range of fictional testing grounds for the status of personhood in modernity throughout the earlier portion of the twentieth century, is precisely this person, who observes and constitutes herself as a version of collectivity.

  • Katherine Hayles points to crucial assumptions buried in Searle’s argument that might otherwise be overlooked in such a “historical curiosity.” Besides assumptions about the meaning of “meaning” or “context,” the BRAINBOUND and EXTENDED models that Hayles invokes also require us to rethink Searle’s assumption that language is intrinsic, or natural, to human beings. If language was “natural” to the “person defined by individual consciousness,” what is it to the “collective [i.e. humans and technology] defined by emergence?” Language is both—it is a naturalized technology. Hayles and I take up this claim at greater length in a forthcoming article.

    The Chinese Room experiment strips the befuddled translator of his/her natural relationship with language. Instead of internalizing Chinese through rich contexts (e.g. art, music, personal conversations, etc.), the translator has only slips of paper, a codebook, and, presumably, plenty of time. So, allow me to modify Searle’s experiment slightly: what if the translator could internalize the codebook? That is, if the rules could be rewritten as neural networks in the “translator’s” brain? Could we then say, he/she “understands” Chinese?

    How we answer depends upon whether we think language is built up from innate rules somewhere in the human brain or whether it exists only as a cultural artifact that is imposed on the brain. At one extreme lies Chomsky’s generative grammar—the brain is pre-wired for language—and at the other the cultural (contextual) constructionist—learning a different language necessarily means learning to think differently. The former sees language as BRAINBOUND and the latter as EXTENDED. Each is only partially right. Language seems to be both a human prosthesis and something intrinsically human (after all, we have not been able to teach our closest genetic relatives to use symbolic language).

    Terrence Deacon’s argument in The Symbolic Species is helpful here. Language does not exclusively shape the brain, nor does the brain solely determine the structure of language. The two co-evolve. A change in one feeds back, or feeds forward, into the other. Language can take hold because early hominid brains had some capacity for language. At the same time, the acquisition of language changes the plastic structures of the brain. The brain in turn adapts language to better suit its capabilities. The spiral continues as those brains and languages that are better learners and communicators survive to pass on their genes and their syntax/semantics.

  • Nathan Brown

    In the latter half of her post, Hayles turns to the concept of “emergence” in order to articulate the stakes of a systems reply to Searle’s Chinese Room. “In brief,” she writes, “the shift is from a person defined by individual consciousness to a collective defined by emergence.”

    This section of Hayles’s post might usefully be read in tandem with Steven Shaviro’s most recent entry on his influential blog, “The Pinocchio Theory”: http://www.shaviro.com/Blog/?p=756 In a post titled “Against Self-Organization,” Shaviro questions the dominantly positive normative value attributed to concepts of self-organization, complexity, and emergence in a range of contemporary discourses, and he closes his piece by calling for “an aesthetics of decision, instead of our current metaphysics of emergence.”

    Hayles is careful to make clear that, for her, “the question is not whether the current configuration is better or worse than liberal humanism but rather what opportunities, challenges, and resistances are specific to the new models.” But Shaviro’s post is nonetheless pertinent to our conversation here insofar as it raises the problem of how the concept of emergence can provide us with a properly critical perspective upon the contemporary political and economic situations to which Hayles alludes in the final paragraph of her remarks. If, as Hayles notes, one first step in arriving at such a critical perspective is to recognize the specificities of different organizational models, then the difficulty with theories of emergence as a means to that end is encapsulated by the title of Harold Morowitz’s book on the subject: _The Emergence of Everything: How the World Became Complex_. When the concept of “emergence”—derived from the self-organizing properties of collectively autocatalytic sets of molecules, or from the behavior of cellular automata in computer simulations—is applied to everything from coevolution in local ecosystems, to the problem of free will, to the business models of our “emerging global civilization” (as it is in the work of Stuart Kauffman), then one might wonder how much specificity that concept really has to offer.

    The danger of relying upon the concept of emergence in order to think the present, and particularly to think the subject of that present, is that it implies a natural order of things in which what Kauffman calls “the laws of complexity” are absolute—even if they are laws OF indetermination, and regardless of whether they are bottom-up rather than top-down. If one primary problem with both liberal humanism and “emergence” is that they a-critically correspond with the economic logic of early and late capitalism, then perhaps it would not be too grandiose to say that the task of contemporary critical theory in relation to theories of emergence is akin to the task taken up by Marx in relation to Hegel: to extract, from the mystical shell, the rational kernel of an entirely different thought of the collective.

  • Abe Geil

    If thought experiments about A.I. are a philosophical sub-genre unto themselves, Kate Hayles has surely established, as Geof Winthrop-Young already suggests, her own sub-genre of criticism: a practice which involves what I hope Kate doesn’t mind my calling a “symptomatic reading” of these thought experiments. By “symptomatic,” I mean to refer of course to the old method that Louis Althusser detected in Marx’s reading of 18th century political economy and which Althusser and company in turn brought to their reading of Marx’s Capital. That is, the procedure of reading a work for answers it provides to the questions it does not itself know how to pose. Anyone who has read Kate’s discussion of the “Turing test” in her prologue to How We Became Posthuman (which manages to be at once piercingly critical and moving) will immediately recognize the family resemblance to her discussion of Searle’s “Chinese Room” here. In the case of her reading of the Turing test, Kate locates in the terms of the experiment itself clues to the very questions of embodiment that Turing sought so tortuously to evacuate. Here again Kate enters into the terms of Searle’s thought experiment and inverts the very answer he wants them to yield. In a picture that Searle poses as the ultimate limit of non-cognition, Kate finds a rich and complex image of extended cognition.

    I’m seriously out of my depth with respect to the philosophy of mind tradition Searle is working in and equally so with the recent models of extended cognition and emergence. From this admittedly naïve position, I’d like to pose three questions to Kate (and to whomever else on this forum is interested).

    1) In what sense is cognition a “broader” term than consciousness? It’s clear from the minimal definition Kate supplies (i.e. information flow, embodied interpretation, interpretive context) that cognition is broader than consciousness in the sense that it can be ascribed to a far larger range of entities, including tissue cells and door slots. But this range of ascription is purchased by a radical reduction of what counts as the activity of cognizing, or, in a different idiom, what counts as thought. I wonder if the term consciousness might be retained—even within a model of extended cognition—as a name for what used to be called the emergence of “mind” from “brain,” that is, as a complex phenomenon that cannot be deduced in advance from the elements that give rise to it but is nonetheless intelligible, if only in retrospect, by reference to those elements. In other words, does extended cognition replace the need to talk about consciousness at all, or does it, rather, enrich our discussion of the phenomenon we call consciousness? It’s not so much that I think the concept of consciousness should be defended at all costs. It’s more that I want to push back against the claim that “consciousness” is inextricably bound to the “BRAINBOUND” model. Consider, for example, the phenomenological maxim of intentionality (a close cousin to the “intentionality” Kate attributes to Searle’s model of “genuine human intelligence”). According to its Heideggerian variant, intentionality holds that consciousness is not encapsulated in the head but is in the first instance to be found already in things outside the skull. Take this passage from the first Division of Being and Time:

    “When Dasein directs itself towards something and grasps it, it does not somehow first get out of an inner sphere in which it has been proximally encapsulated, but its primary kind of Being is such that it is always ‘outside’ alongside entities which it encounters … [a]nd furthermore, the perceiving of what is known is not a process of returning with one’s booty to the ‘cabinet’ of consciousness after one has gone out and grasped it.” (89)

    2) What happens to the concept of subjectivity in the EXTENDED view of cognition? With Deleuze and Guattari’s What Is Philosophy? (published one year after the “control societies” piece), we might ask this question with respect to philosophy (the creation of concepts) and art (the creation of affects). On Deleuze and Guattari’s account, these activities require—if not a subject exactly—at least the persona of a creator, of a “creator’s signature.” At the same time, subjectivity need not be aligned exclusively with individual experience. One can certainly posit a collective subject. In this way, extended cognition appears at least compatible with this notion. But it nonetheless seems to me that subjectivity (or agency) requires something in addition to the minimal criteria for cognition offered here. It requires, in the realm of politics for instance, something in the name of which to act. It is important to note that in the Deleuze article Kate cites, his dismissal of the need to judge whether “control societies” are better or worse than “disciplinary societies” is followed immediately by the sentence: “There is no need to fear or hope, but only to look for new weapons.” The shift he marks is not between competing models of cognition but rather from one regime of Capitalism to another. What’s at stake for Deleuze is not cognition per se but the specificities of emancipatory struggle, the ways within each regime that “liberating and enslaving forces confront one another”(4). I might, then, focus the question about subjectivity in the following way: what, if any, transitivity is there between EXTENDED cognition and politics? Or, for that matter, does it even make sense to posit transitivity between any model of cognition and politics? Or ought politics to be understood as fundamentally extrinsic to such models, informed by them perhaps, but not developing immanently from them?

    3) Finally, what has changed in the 25 years since Searle proposed his “Chinese Room”? On the one hand, Kate is clear that she sees Searle’s thought experiment as a “touchstone” for changes in how we model cognition: “from cognition in the head to cognition distributed throughout a system.” In that sense, as she points out, the joke is on Searle: both “strong A.I.” and his challenge to it are equally moribund. But, on the other hand, Kate insists quite strongly that this is not merely a shift in models. “This is the transformation,” she writes, “through which our society is currently living.” Now, I realize that to render intelligible the levels of mediation between models of cognition and radical social transformation would be far, far beyond the scope of this forum. Nonetheless, I find myself having to ask the naïve question: is it that cognition itself has changed along with all the world-historical changes Kate names? Or is it that these changes have forced us to thematize for ourselves a broader picture of cognition as EXTENDED? Have these changes taken the blinkers off, so to speak? We might be inclined to scoff at Searle’s claim that his thought experiment would be time-proof (or timeless), but surely models of extended cognition aspire to a similar condition of universalizability. The specificities by which cognition is extended may be historically conditioned, cognition may even be less or more extended (“super-sized”), but the nature of cognition itself could never have been anything other than extended. In other words, if the BRAINBOUND model is now a false picture that’s because it was always false. But what this suggests to me is that questions of what, for instance, Kate names “the liberal humanist subject,” cannot be handled by reference to models of cognition but rather require a form of historical thinking that is (perhaps like politics?) extrinsic, even alien, to the sort of thought that aspires to produce such models. (I’m not sure whether I really want to commit to this last point, but I put it polemically for the sake of clarifying a question I genuinely have.)

  • Abe Geil

    I wish I’d read Nathan Brown’s post before writing mine. It offers a more informed (and more concise!) articulation of some things I was clumsily trying to get at in the second part of my response.

  • katehayles

    As I indicated earlier in my response to Geoffrey Winthrop-Young, Searle did imagine exactly the situation that James suggests here, of the man in the room who memorizes the rule book. Searle argues that this changes nothing, that the matching of symbols is still mechanical and non-interpretive. One school of response to Searle’s argument suggests not just that the entire room understands Chinese (the systems-theory approach), but that the man’s cognitive system plus all the accompaniments in the room allow the construction of a “virtual” mind, on analogy presumably to some techniques in Artificial Life where a “virtual” computer is created to run inside the actual computer through simulation techniques. While the man’s consciousness may not understand Chinese, the “virtual” mind running alongside his consciousness does. The ambiguity of this proposal lies, of course, in the relation of the virtual mind to the conscious mind. Two configurations are possible: the virtual mind runs inside consciousness, or consciousness is encapsulated within the larger virtual mind. In some ways, this ambiguity mirrors the ambiguity James discusses of where language is positioned, inside or outside the mind. One way to think about the feedback loops that connect syntax to semantics is as the co-evolution of the virtual mind and conscious mind together, just as language and the brain co-evolved together. Incidentally, the co-authored article to which James refers is forthcoming in a special issue of the journal History of the Human Sciences edited by Michel Ferrari (copies available on request).

  • katehayles

    Thanks to Abe Geil for his insightful questions and probings. To start with the last question, I think it is not a question of saying that BRAINBOUND is false and that EXTENDED is true, for both models exist not in isolation but embedded within complex historical configurations within which they have certain kinds of resonances and affiliations. As I argued, BRAINBOUND is part of a historical formation that emerged and evolved over the several centuries it took for liberal humanist subjectivity to become the reigning cultural model of personhood, a development that spiraled out into almost every aspect of society and culture, including economics (rational actor theory, for example) and politics. It would be more appropriate to ask what function BRAINBOUND serves, and what other aspects of cultural configurations it reinforces and extends. The same is true of EXTENDED. While it reveals certain shortcomings in earlier models such as BRAINBOUND, without a doubt it has its own limitations and blind spots, which will probably become more visible as it gains dominance and is challenged by yet newer models.

    As to consciousness and cognition: cognition is the broader term in the sense that consciousness is associated with the neocortex (although with inputs from other parts of the brain and indeed other parts of the body), while cognition includes consciousness but also other brain systems (the limbic system, for example), the Central Nervous System (CNS), the peripheral nervous system, the viscera (with their complex feedback loops crucial for the operation of emotions, as Antonio Damasio points out), and other organ and nerve systems, as well as (in EXTENDED) objects in the environment.
    Abe makes a good point when he questions whether consciousness now loses its cachet. In the excitement over EXTENDED and its cousins, consciousness is indeed now regarded in some quarters as an epiphenomenon, suggesting that its blind spots make it a free rider on the processes that really count. In my view, this discounting of consciousness is as much an error as thinking consciousness is the be-all and end-all. Surely consciousness adds amazing possibilities for human evolution (as I think William Rasch makes clear in his second post above), including not only symbol formation but the manipulation of symbols and then of symbols that stand for groups of symbols, etc. In short, for theoretical and abstract thought.

  • Empirically, it’s easy enough to dismiss arguments for ‘the end of books’: I agree that printed books, like just about everything in this era of overproduction and ecological destruction, continue to flourish even as the discussion of such works becomes more distributed, less concentrated in classrooms or university press publications. What interests me, though, is the emergence of ‘end-of-books’ discourse along with the cognitive and electronic medial developments of the past few decades. Those developments are so capacious, they bring so many /things/ to consciousness (all those baskets, doors, rooms adjoining the Chinese room where Western humanists were once able to have a quiet, focused discussion): this new openness, this expansion of levels, while opening possibilities and avenues of exploration, tends to frustrate the conclusiveness and coherence that readers have sought in narrative.

    With the determined endlessness of cognitive discourse comes a desire to end something – maybe not ‘books,’ but perhaps the canon of print literature (including the institutions that supported a literary canon) is what we should regard as closed. At least, I’m prepared to take this as a working proposition: the closure of the print canon in the present, as a precondition to a literary emergence (quite unlike and perhaps distinct from the print legacy) in electronic, cognitive environments.

  • Flip this Chinese Room –

    Katherine Hayles’s piece may be the Chinese Room on another floor (as William Rasch has suggested), yet it does quite a bit more than just re-arrange the furniture; it is a functional remodel. The piece offers a fine opportunity for us and “you” to assess this room once more – to notice that you can eat and sleep here, compose poems, talk to friends, play cricket. In short, if the room is the mind altogether (and EXTENDED as well), then anything that happens, happens here. The salient points of this home improvement package have been pointed out by Joe Tabbi, James Pulizzi, and others on the forum – and their observations reveal some formerly hidden assumptions that made Searle’s room such an uncomfortable place in which to sit.

    In early AI theory, Marvin Minsky (in The Society of Mind) advanced the notion that the mind consisted of a huge aggregation of mini-minds, severely limited in their ability, that have evolved to perform specific tasks. In Kate’s Extensive Remodel, “you” can sort baskets of Chinese symbols or cook spaghetti, lay the table, set out candles – all with your inherent set of mini-minds that don’t really know how to do any of the above. The brain simulates the mind.
    To “put out the light” is not necessarily to be left in the dark.

  • katehayles

    Margie brings up an interesting question about the relation between EXTENDED and Minsky’s “Society of Mind” (lots of small agents running their individual programs). I think the two are probably compatible with one another, but EXTENDED does put more emphasis on the articulation of consciousness with other bodily and external interactions. In Minsky’s model, by contrast, consciousness is strictly an emergent property (an epiphenomenon) and the emphasis falls instead on what kinds of programs the agents are running, and what kind of emergent properties come out of their interactions.

  • John Searle’s Chinese room “thought experiment” not only fails as a decisive argument against strong AI, but also illustrates one philosopher’s unquestioning adherence to a perspective that presumes the universality – or at least the necessary centrality – of the liberal humanist ego and attendant notions of intentionality, consciousness, and meaning. In her book How We Became Posthuman, Katherine Hayles carefully demonstrates how and why we have moved away from this perspective toward one characterized by instances of distributed cognition, enabled primarily by (but not restricted to) computer-mediated communication and its new forms of agency. Revisiting Searle’s argument here, she makes the case once again, simply and compellingly, this time mobilizing Andy Clark’s EXTENDED (as opposed to the BRAINBOUND) model of human cognition. Clark’s model, which draws on contemporary theories of emergence, provides a global, collective, and systemic understanding of cognition — in stark contrast to Searle’s individual consciousness-centered view. This contrast allows Hayles to mark the distance separating Searle’s claims from contemporary approaches to cognition, which are still a “transformation we are living through,” she notes. Indeed, as she concedes in a very apt metaphor, Searle’s Chinese Room is hardly dead; rather, and because of “the presuppositions and unconscious assumptions that inform it,” it remains “undead.” In what follows I want to consider what may well be an underlying aspect of this undead state, which is the ambiguous status of biologically-based cognitive systems in contemporary AI: they remain essential as models to learn from and mimic, and yet AI’s ultimate objective is to build alternatives that transcend them.

    In my recently published book, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, I also revisited Searle’s position, basically siding with the distributed view (i.e., the room “knows” Chinese) as argued by Douglas Hofstadter, Daniel Dennett, and David Chalmers.1 But I also bring out one of Searle’s less acknowledged critical points. Let me quickly summarize this discussion. In responding to Searle’s Chinese Room argument, Hofstadter et al. advance the “systems reply”–which is very similar to Hayles’s–that although the man in the room does not understand Chinese, the room itself does. Basically, they assert that all the things in the room—the man, the pieces of paper, the code and instruction books, and so forth—constitute a distributed system that exhibits intelligent behavior. Searle himself rejects this argument out of hand, since for him understanding (which he silently substitutes for intelligent behavior) requires conscious mental states, which cannot reasonably be attributed to a room. However, in The Conscious Mind, David Chalmers retorts with a more elaborate counterargument, involving the step-by-step substitution of the man’s neurons with computational demons and then with a single demon in which the neuron-level organization of the human brain is completely duplicated.2 With these substitutions, the room becomes a dynamical system in which neuronal states are determined by the rules and manipulations of the symbols, and the system as a whole is sufficiently complex to have “the conscious experiences [including qualia] of the original system” (325), that is, of a human being. This rendering of the room as a dynamical system is necessary because, as Chalmers argues, the slips of paper are not merely a pile of formal symbols but “a concrete dynamical system with a causal organization that corresponds directly to the original brain,” and it is “the concrete dynamics among the pieces of paper that gives rise to conscious experience” (325). With his fiction of neuronal substitution and demons, Chalmers thus makes fully visible what was submerged in Searle’s version: the full systemic complexity of the computational room. Chalmers also attacks Searle precisely on the issue of how the computer in the room (or the computational process that is the room) is implemented. He agrees with Searle’s assertion that a computer program is purely syntactical but points out that the program must be implemented physically in a system with “causal dynamics.” In more fully accounting for the causal dynamics of the activities in the Chinese room, albeit by means of a simulation accomplished with artificial neurons organized like the brain, Chalmers argues that the system would be capable of having conscious states. He thereby joins the ranks of contemporary philosophers who share the view that “the outlook for machine consciousness [and strong AI] is good in principle” (331).

    It should be noted that Searle himself never denies the possibility that “a machine could think.”3 To the contrary, we are machines and we can certainly think, he asserts. But we are biological machines, and intentionality (or consciousness) is a biological phenomenon. Thus his argument really falls into two parts. The first part, illustrated by the Chinese room thought experiment, asserts a negative: that no program running on a digital computer is capable of intentionality (i.e., consciousness or thought). This also means that “the computational properties of the brain are simply not enough to explain its functioning to produce mental states” (40). The second part, which is under-developed and therefore usually ignored, argues that thinking, or consciousness, is essentially biological in nature and therefore cannot be reproduced without a causal, material system equivalent in complexity to our own biochemical system. It means that thinking requires a body located within—and which would be a part of—the physical world. While the first part of Searle’s argument was (correctly) understood to be a hostile critique of the operational and functionalist approach of early AI, the second now finds wide agreement among contemporary neuroscientists and those with a biologically-inspired approach to the building of intelligent machines.4

    However, whether revisiting the piece or encountering it for the first time, readers of Searle’s thought experiment may legitimately wonder how and where human consciousness and the experiential self fit into the contemporary perspective that has displaced Searle’s. The most original and exciting theories seem to be emerging from neurophilosophy and the convergence of neuroscience with empirically based philosophical studies of consciousness; that is, from studies that proceed from the double perspective of the neural correlates of consciousness and the rich phenomenology of our variegated experience, which includes focalized attention, sensory perception, memory, feelings and thoughts, dreams, the experience of phantom limbs, and out-of-body experiences. The perspective is functionalist and evolutionary, and thus directed toward explaining not only consciousness itself but also its adaptive benefits. A striking example is Thomas Metzinger’s recent book, The Ego Tunnel: The Science of the Mind and the Myth of the Self. Those in the humanities will recognize it as a version of constructivism, but the construction here is performed not by culture but by the human brain. Indeed, the thoroughness with which Metzinger argues that the world as we experience it and the experiencing self at its center are two parts of the same functional construct is daunting and not a little unsettling, inasmuch as these constructs of the brain (the world-model and the self-model) are immediate and transparent, while the neural machinery that produces them remains experientially invisible and inaccessible. Our naively realist belief in the solidity of the world we experience and in the seeming continuity of the self dissolves, however, as Metzinger shows how these constructs enable the brain to integrate multi-leveled systems of information-processing efficiently, in ways that make us flexible and highly adaptive.

    Whether or not Metzinger’s thesis turns out to be correct, there is little doubt among scientists today that our experience of ourselves as conscious individual egos looking out upon and acting within a larger world is a biologically evolved and highly functional endowment, one that gives us hugely beneficial adaptive capacities, which we are not really capable of rejecting, although we can acquire a sense of their limits and contingency. This biologically evolved sense of a conscious self is of course taken up, shaped, relayed, extended, and valued for itself in a variety of ways by human culture and technology. But to what extent can it also be mimicked or re-created artificially by the information-processing machines that we are now learning how to build and evolve? For Metzinger, the possibility of constructing an artificial Ego Machine (of transforming an information-processing system into a subject of experience) may soon be within our grasp. But the real question for him is the ethical one of whether we should do it. In contrast, then, to those in “geek rapture” over the prospect of the Technological Singularity (i.e., the possibility of our initiating a runaway escalation in which human-level intelligent machines build or evolve super-human intelligent machines), Metzinger argues that there are ethical issues here that we must soon face and resolve, since our initial efforts are likely to result in stunted machines that are nevertheless capable of suffering, and the spread of suffering in the world is something that, as ethical beings, we should never allow ourselves to cause. At the same time, Metzinger is not immune to the mystique of the Singularity, and in a staged dialogue between the First Postbiotic Philosopher and a Human Being he has the former point out the deficits that stem from our biochemical embodiment: not only our primate brain, violent emotions, and terrible monkey body, but also our ineradicable egotism, endless ability to suffer, and deep existential fear of our individual deaths. Although our Postbiotic successors will have transcended these deficits, they will still find us of considerable research interest, the Philosopher concludes. And here, perhaps without knowing it, Metzinger echoes a theme repeated by a number of contemporary sci-fi writers such as Rudy Rucker and Paul Di Filippo: that biological life, and particularly human emotional life, is “incredibly information-deep and information-rich” (see in particular the latter’s story, “Distributed mind”). Indeed, perhaps one secondary benefit of the quest for AI will be to make us newly appreciative of the special richness and complexity of our biological existence.

  • Davin Heckman

    First, I must confess to being well outside my comfort zone in this comment. So I apologize if my remarks fail to take into account the breadth of available scholarship on this topic. I have always appreciated Hubert Dreyfus’ critique of AI. In particular, Dreyfus’ keen understanding of Heidegger’s account of being is incredibly useful for discussions such as these, particularly when we debate the relationship between BRAINBOUND and EXTENDED notions of cognition (and, along with them, individual and collective notions of personhood).

    In my opinion, where the discussion gets mired is in the distinctions that we draw (for example, brainbound vs. extended and individual vs. collective). For Heidegger, being is all of the above. It is when, for instance, the tools of the workshop become integrated into the consciousness of the individual at the center of the region. At once, being is radically individual, in that it can accrue a vast region of things within its narcissistic grasp, and radically anti-individual, in that this grasp happens unconsciously, disturbing notions of the individual as contained within a particular body. This ties being to the forgetting of being. In terms of artificial intelligence, in terms of programming a machine to pursue such a practice as a matter of rule, I think it would be very difficult to do much more than simulate such a mode of being, one which is as ordered as it is disordered, as forgetful as it is mindful.

    A more recent take on Heidegger can be found in the works of Bernard Stiegler. Stiegler’s view of the human is based on the idea that to be human is to be supplemented, to be in default, to be forgetful. To be human is to have techniques and technology. From computers to the many metaphors we use to understand human cognition, we are using tools to supplement our state of being. This view would certainly disturb many Enlightenment and theistic notions of “the human,” but it is equally disturbing to the notion of the “posthuman,” for such a view remains, basically, human-centered. In spite of compromising historical notions of the human, it still positions human consciousness at the center of worldly experience. (In a certain sense, the only real challenge to “the human” in this scheme would be those systems which actively intervene against human agency in a large, systemic way.)

    Stiegler’s take on the “I” and the “We” (from Acting Out) would seem useful in this discussion, particularly because it dispenses with the individual vs. collective divide, arguing that the collective is the means through which individuality is experienced and that the individual is what allows the experience of the collective. Again, in terms of artificial intelligence, I think it attests partially to the value of distributed cognition as necessary for “consciousness”… but it also allows this consciousness to be situated in an individual being. Being human requires us to be self-aware, but only some of the time, and to forget ourselves precisely at those points when we are engulfed in the fullness of being. At times, we are even caught up by the desires of others.

    The things that we create, from our tools to our ideas, are supplements to the daily business of being, originating in active contemplation and effort and then being “taken for granted.” As such, they are always contained within the domain of consciousness. Even when hypothetical AI functions autonomously, it does not function of its own accord; it functions for us. We might figuratively break out of this anthropocentric worldview, if it helps us to think about something, but even this cognitive framework is an instrument more than it is the truth about human consciousness. At the end of the day, we are still in the best position to relate to the contents of our own mind, body, social circle, and region of influence.

    This isn’t to say that a computer cannot theoretically do such things, but I do have to wonder if there might not be some merit in Searle’s overwrought conclusion. Being able to carry out some functions of intelligence is very different from actually being conscious. An obvious point, but the science of AI is more ambitious than this, even if it would require evolutionary steps along the way. Presumably, the point of AI is not a computer that could mimic the thought processes of the flatworm but computers that could parallel and eventually exceed human thought processes. The critical point would not be the machine’s hardware and software; it would be, as Dr. Hayles notes, where and when and how the machine would bootstrap itself into the realm of semantics.
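    What “bootstrapping into the realm of semantics” might minimally involve is often framed in AI as the symbol-grounding problem. A toy sketch (my illustration, not a proposal from Hayles or anyone in this discussion; the tokens and features are hypothetical) shows the usual first step, correlating symbols with sensed features, and also why that step settles nothing:

```python
# A toy sketch of symbol "grounding" by co-occurrence statistics.
# The tokens and sensed features below are hypothetical examples.
from collections import defaultdict

# Pairs of (token heard, feature sensed at the same moment).
experience = [
    ("red", ("wavelength", "long")),
    ("red", ("wavelength", "long")),
    ("blue", ("wavelength", "short")),
]

# Count how often each token co-occurs with each feature.
grounding = defaultdict(lambda: defaultdict(int))
for token, feature in experience:
    grounding[token][feature] += 1

# "red" is now statistically tied to long wavelengths...
best = max(grounding["red"], key=grounding["red"].get)
print(best)  # ('wavelength', 'long')
# ...but whether such correlations ever make the machine's symbols
# *about* anything is exactly the point in dispute.
```

    The hedge is the point: correlation gives a machine a statistics of use, not, on Searle’s terms, semantics.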

    ****

    From here on down, I am just getting speculative…

    To disclose my own biases, I have a hard time imagining that someone would create a machine that was so functional yet free to ignore or reject or forget its rules. Such a machine would be so materially different in design, origin, and purpose from the “human being” that it would have no peers unless we created them. Its operating system would not be a perfect duplicate of ours, but even if it were, the story of its creation would be radically different. Even if we fed it a mythology of origin, its purpose would be basically “to prove a point.” If it were truly self-aware, it would develop its own psychology, one which might very well be pathological in our view (unless we placed restrictions on the things it was allowed to think about). In any case, such a machine would inhabit an entirely different narrative region from our own.

    Here, I think, is where the rubber meets the road. What “meaning” could a machine provide for events? If a machine were capable of selecting and forgetting data to create a narrative that was indistinguishable from a human narrative, it would have ARTIFICIAL (simulated) human intelligence. But to have a being that resembled our own (in function rather than form), this intelligence would have to be relevant to the machine from its own perspective, a perspective it could relate to its fellow machines. It would have to be able to develop a system of technics by which it could mediate its relationship to itself, its community, and its environment. And, at the very least, it would have to be aware of the fact that machines and software are currently being used, manipulated, and controlled by humans.

    One view is that human being is tied to the existential question of freedom, and that meaning-making is the expression of the thought processes surrounding decisions: we might not necessarily be able to do whatever we want in any situation, but we are capable of thinking about what it is that we do in a variety of different ways. We supply meaning to contexts. Narrativity presumes that the “why” and “how” are as important as, if not more important than, the “what.” More than statements of fact, narratives are expressions of will. To free up a machine to this variability of thought seems like it would (a) be a tremendous feat of programming and (b) be a real challenge given the technical orientation of the field’s research culture.

    Could a researcher with an ethical commitment to “rights” and “dignity,” based in the very freedom and autonomy that allows such research, create a solitary creature with such rights and dignity but without a capacity to exercise them in a meaningful way? Wouldn’t programming such priorities in a meaningful way be a prerequisite to creating a recognizable artificial intelligence? Would simulating “freedom” and “dignity” in such a way that an intelligent machine could believe in them prove or disprove the basis of human rights? If this view is an error, would a conscious machine be free to believe in such an error, or would it be bound by a certain set of rules to refute it? If we are not equipped to judge the soundness of the machine’s thought, except through tightly bound sets of rules we programmed into the machine, can we know whether the machine is intelligent or not? In the end, the success or failure of “artificial intelligence” relies upon the definition of the experiment offered at the outset. A limited definition, which offers a stripped-down, empirical view of human cognition, is easier to prove. A more baroque definition, which takes into account a great number of unfalsifiable views (with which consciousness seems obsessed), is unprovable by design.

  • In a way, there’s little to be said. Little to be said because the current thinking (as in John Johnston’s helpful pointer to Metzinger, etc.) renders Searle a “historical curiosity” (as Kate notes at the beginning of her essay). At the same time, little to be said because anything said recapitulates the positions and responses Searle already set out around the experiment, positions and responses that continue to be persuasive and provocative, as the mere fact of this discussion shows. I imagine Searle would continue to respond to any theory of systems, emergence, extension, etc. that we might propose. Don’t get me wrong: it is useful to arrive at complex and detailed accounts of the “extended” (etc.), yet such accounts do not resolve the continued fascination of the experiment.
    Why does it fascinate? I marvel at how it lingers (undead), and in doing so points to something about life. I would say that Searle continues to persuade us of something, no matter how moribund the argument. Persuades us of what? I would say of individuation or identity rather than intelligence. One obscurity of the experiment is that it may say nothing about intelligence and rather more about the problematic of individuation. I don’t think it’s enough to connect this lingering problematic (problematic of lingering) to the residual force of the liberal humanist subject; or rather, for me the most crucial discussions here ran along lines set out by Nathan Brown or Abe Geil. Abe states: “questions of what, for instance, Kate names ‘the liberal humanist subject,’ cannot be handled by reference to models of cognition but rather require a form of historical thinking that is (perhaps like politics?) extrinsic, even alien, to the sort of thought that aspires to produce such models.” Kate zeroes in on what is at stake here in her reply to Abe: “It would be more appropriate to ask what function BRAINBOUND serves, and what other aspects of cultural configurations it reinforces and extends. The same is true of EXTENDED.”

    This is where Kate’s “disturbing” title is as important as her “distributing.” The point, I take it, is to describe a new assemblage around the problematic of individuation (thus the reference to Deleuze).
    What are the parts of the assemblage? First, to what degree does the assemblage require both the person and the room? (I think of Margie’s comment here.) Are both needed and modulated within this space of enclosure? The assemblage allies person and room to produce a subject of understanding. What is this alliance? It is cultural. As Kate points out, Chinese is crucial to the experiment. The subject of the experiment is a subject of a country, of a national language, of a set of exchanges between nationalities and histories, and so on. The alliance is productive through this exchange process. The alliance is also artifactual. It requires built spaces, means of entry and exit (as Kate Marshall’s incisive comment shows), and knowledge of writing artifacts and skills (reading, collation, editing, filing, etc.). It requires community (the texts are received and transmitted) and labor (the whole setup is a translation project administered under prison-like conditions).

    The alliance is also bodily. It constrains and bounds a body to produce a unique writing event. The person in the room can be said to “sign” the texts, since the issue is whether the person understands the texts; that is, it is an issue of the singular experience of the person in the room. With this, the experiment is bound to the responsibilities surrounding the signature (legal, political, etc.). (Here I think Davin Heckman’s comments on rights and dignity are important.) The alliance is also narrative (as Joe Tabbi and others point out). It involves the ability to tell a spatially and temporally bound story (bound both to the room and to the experience of the person in it).

    Kate points out that the choice of the Chinese language emphasizes cultural distance and exoticness, as if to say that even the most “extreme” translation is possible and still does not count as understanding. (Why not Martian?) Does the experiment work the other way? Let’s say I am in the room and the texts are in English, not Chinese. Searle deals with this, writing: “suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker.” So, I understand English. And yet there must always be questions that require me to use the rulebooks, to look up a word or a usage, and so on. No doubt my eventual translations of English questions will be even more flawless than the already flawless Chinese translations. Still, I inhabit my native language not because of more complete rulebooks, as it were, but precisely because of understanding, in Searle’s sense. And yet I cannot be said to understand every English text by default (or at least this seems to be the rationale for English departments). Is the answer a more specific experiment? Can I imagine that the texts are in English as spoken on the south shore of Boston (that is, a sort of regional “mother tongue” in my case; you fill in your own locale)? Of course, the same problems arise: I am still following a program and not “understanding.” I ask: can there be any version of the test involving the manipulation of texts and symbols that would be called understanding? How do we conceive of writing and texts in this way? Does this not require imagining a “Sandy Baldwin” set of texts in order to collapse the distinction between the syntactic manipulation of the program and semantic understanding? What would these texts be? Would they not disturb the conditions of alliance suggested above? (No one else could understand them. They would be singularly tied to the intentionality of my mind. There would be little point in passing the messages in and out of the room.) I would say that this question is the question of literature at the core of the experiment. There is an absurdity to addressing the Chinese Room in this way, of course, but it points to the mind as real and epiphenomenal (as Searle argues); it points to the modulation of the political and cultural assemblage involved; it returns to the problematic of individuation I began with; and, finally, it points toward literature as a problematic that cuts across this discussion.