Comments on: Distributing/Disturbing the Chinese Room
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/
a project of the National Humanities Center

By: Sandy Baldwin
Wed, 17 Jun 2009 09:20:58 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-106

In a way, there’s little to be said. Little to be said because the current thinking (as in John Johnston’s helpful pointer to Metzinger, etc.) renders Searle a “historical curiosity” (as Kate notes at the beginning of her essay). At the same time, little to be said since anything said recapitulates the positions and responses Searle already set out around the experiment, positions and responses which continue to be persuasive and provocative, as the mere fact of this discussion shows. I imagine Searle would continue to respond to any theory of systems, emergence, extension, etc. that we might propose. Don’t get me wrong: it is useful to arrive at complex and detailed accounts of the “extended” mind and its kin, yet such accounts do not resolve the continued fascination of the experiment.
Why does it fascinate? I marvel at how it lingers (undead), and in doing so points to something about life. I would say that Searle continues to persuade us of something, no matter how moribund. Persuades us of what? I would say of individuation or identity rather than of intelligence. One obscurity of the experiment is that it may say less about intelligence than about the problematic of individuation. I don’t think it’s enough to connect this lingering problematic (problematic of lingering) to the residual force of the liberal humanist subject; or rather, for me the most crucial discussions here ran along lines set out by Nathan Brown or Abe Geil. Abe states: “questions of what, for instance, Kate names ‘the liberal humanist subject,’ cannot be handled by reference to models of cognition but rather require a form of historical thinking that is (perhaps like politics?) extrinsic, even alien, to the sort of thought that aspires to produce such models.” Kate zeroes in on what is at stake here in her reply to Abe: “It would be more appropriate to ask what function BRAINBOUND serves, and what other aspects of cultural configurations it reinforces and extends. The same is true of EXTENDED.”

This is where Kate’s “disturbing” title is as important as “distributing.” The point, I take it, is to describe a new assemblage around the problematic of individuation (thus the reference to Deleuze).
What are the parts of the assemblage? First, to what degree does the assemblage require both the person and the room? (I think of Margie’s comment here.) Are both needed and modulated within this space of enclosure? The assemblage allies person and room to produce a subject of understanding. What is this alliance? It is cultural. As Kate points out, Chinese is crucial to the experiment. The subject of the experiment is a subject of a country, of a national language, of a set of exchanges between nationalities and histories, and so on. The alliance is productive through this exchange process. The alliance is also artifactual. It requires built spaces, means of entry and exit (as Kate Marshall’s incisive comment shows), and knowledge of writing artifacts and skills (reading, collation, editing, filing, etc.). It requires community – the texts are received and transmitted – and labor (the whole setup is a translation project administered under prison-like conditions).

The alliance is also bodily. It constrains and bounds a body to produce a unique writing event. The person in the room can be said to “sign” the texts, since the issue is whether the person understands the texts; that is, it is an issue of the singular experience of the person in the room. With this, the experiment is bound to the responsibilities surrounding the signature (legal, political, etc.). (Here I think Davin Heckman’s comments on rights and dignity are important.) The alliance is also narrative (as Joe Tabbi and others point out). It involves the ability to tell a spatially and temporally bound story (bound both to the room and to the experience of the person in it).

Kate points out that the choice of the Chinese language emphasizes cultural distance and exoticness, as if to say that even the most “extreme” translation is possible and still does not count as understanding. (Why not Martian?) Does the experiment work the other way? Let’s say I am in the room and the texts are in English, not Chinese. Searle deals with this, writing: “suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker.” So, I understand English. And yet there must always be questions that require me to use the rulebooks, to look up a word or a usage, and so on. No doubt my eventual translations of English questions will be even more flawless than the already flawless Chinese translations. Still, I inhabit my native language not because of more complete rulebooks, as it were, but precisely because of understanding, in Searle’s sense. And yet I cannot be said to understand every English text by default (or at least this seems to be the rationale for English Departments). Is the answer a more specific experiment? Can I imagine that the texts are in English as spoken on the south shore of Boston (that is, a sort of regional “mother tongue” in my case; you fill in your own locale)? Of course, the same problems arise: I am still following a program and not “understanding.” I ask: can there be any version of the test involving the manipulation of texts and symbols that would be called understanding? How do we conceive of writing and texts in this way? Does this not require imagining a “Sandy Baldwin” set of texts in order to collapse the distinction between syntactic manipulation of the program and semantic understanding? What would these texts be? Would they not disturb the conditions of alliance suggested above? (No one else could understand them. They would be singularly tied to the intentionality of my mind. There would be little point passing the messages in and out of the room.) I would say that this question is the question of literature at the core of the experiment. There is an absurdity to addressing the Chinese Room in this way, of course, but: it points to the mind as both real and epiphenomenal (as Searle argues); it points to the modulation of the political and cultural assemblage involved; it returns again to the problematic of individuation I began with; finally, it points towards literature as a problematic that cuts across this discussion.

By: Davin Heckman
Fri, 05 Jun 2009 18:27:07 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-105

First, I must confess to being well outside of my comfort zone on this comment. So, I apologize if my comments fail to take into account the breadth of available scholarship on this topic. I have always appreciated Hubert Dreyfus’ critique of AI. In particular, Dreyfus’ keen understanding of Heidegger’s account of being is incredibly useful for discussions such as these, particularly when we debate the relationship between BRAINBOUND and EXTENDED notions of cognition (and, along with it, individual and collective notions of personhood).

In my opinion, where the discussion gets mired down is in the distinctions that we draw (for example, brainbound vs. extended and individual vs. collective). For Heidegger, being is all of the above. It is when, for instance, the tools of the workshop become integrated into the individual consciousness at the center of the region. At once, being is radically individual, in that it can accrue a vast region of things within its narcissistic grasp, and radically anti-individual, in that this grasp happens unconsciously, disturbing notions of the individual as contained within a particular body. This ties being to the forgetting of being. In terms of artificial intelligence, in terms of programming a machine to pursue such a practice as a matter of rule, I think it would be very difficult to do much more than simulate such a mode of being, which is as disordered as it is ordered, as forgetful as it is mindful.

A more recent take on Heidegger can be found in the works of Bernard Stiegler. Stiegler’s view of the human is based on this idea that to be human is to be supplemented, to be in default, to be forgetful. To be human is to have techniques and technology. From computers to the many metaphors we use to understand human cognition, we are using tools to supplement our state of being. This view certainly would disturb many of the Enlightenment and theistic notions of “the human,” but it is equally disturbing to the notion of the “posthuman.” For such a view remains, basically, human-centered. In spite of compromising historical notions of the human, it still positions human consciousness at the center of worldly experience. (In a certain sense, the only real challenge to “the human” in this scheme would be those systems which actively intervene against human agency in a large, systemic way.)

Stiegler’s take on the “I” and the “We” (from Acting Out) would seem useful in this discussion, particularly because it dispenses with the individual vs. collective divide, arguing that the collective is the means through which individuality is experienced, and that the individual is what allows the experience of the collective. Again, in terms of artificial intelligence, I think it attests partially to the value of distributed cognition as necessary for “consciousness”… but it also allows this consciousness to be situated in an individual being. On the one hand, being human requires us to be self-aware only some of the time, and to forget ourselves precisely at those points when we are engulfed in the fullness of being. At times, we are even caught up by the desires of others.

The things that we create, from our tools to our ideas, are supplements to the daily business of being, originating in active contemplation and effort and then being “taken for granted.” As such, they are always contained within the domain of consciousness. Even when hypothetical AI functions autonomously, it does not function of its own accord; it functions for us. We might figuratively break out of this anthropocentric worldview, if it helps us to think about something, but even this cognitive framework is an instrument more than it is the truth about human consciousness. At the end of the day, we are still in the best position to relate with the contents of our own mind, body, social circle, and region of influence.

This isn’t to say that a computer cannot theoretically do such things, but I do have to wonder if there might not be some merit in Searle’s overwrought conclusion. Being able to carry out some functions of intelligence is very different from actually being conscious. An obvious point, but the science of AI is more ambitious than this, even if it would require evolutionary steps along the way. Presumably, the point of AI is not a computer that could mimic the thought processes of the flatworm; it would be computers that could parallel and eventually exceed human thought processes. The critical point would not be its hardware and software; it would be, as Dr. Hayles notes, where and when and how this machine were to bootstrap itself into the realm of semantics.

****

From here on down, I am just getting speculative…

To disclose my own biases, I have a hard time imagining the possibility that someone would create a machine that was so functional but free to ignore or reject or forget its rules. Such a machine would be so materially different in design, origin, and purpose from the “human being,” that it would have no peers unless we created them. Its operating system would not be a perfect duplicate of ours, but if it were, the story of its creation would be radically different. Even if we fed it a mythology of origin, its purpose would be basically “to prove a point.” If it were truly self-aware, it would develop its own psychology, one which may very well be pathological in our view (unless we placed restrictions on the things it was allowed to think about). In any case, such a machine would inhabit an entirely different narrative region from our own.

Here, I think, is where the rubber hits the road. What “meaning” could a machine provide for events? If a machine were capable of selecting and forgetting data to create a narrative that was indistinguishable from a human narrative, it would have ARTIFICIAL (simulated) human intelligence. But to have a being that resembled our own (in function rather than form), this intelligence would have to be relevant to the machine from its own perspective, a perspective it could relate to its fellow machines. It would have to be able to develop a system of technics by which it could mediate its relationship to itself, its community, and its environment. And, at the very least, it would have to be aware of the fact that machines and software are currently being used, manipulated, and controlled by humans.

One view is that human being is tied to the existential question of freedom, and that meaning-making is the expression of the thought processes surrounding decisions—we might not necessarily be able to do whatever we want in any situation, but we are capable of thinking about what it is that we do in a variety of different ways. We supply meaning to contexts. Narrativity presumes that the “why” and “how” are as important, if not more important, than the “what.” More than statements of fact, narratives are expressions of will. To free up a machine to this variability of thought seems like it would a) be a tremendous feat of programming and b) be a real challenge given the technical orientation of the research culture of the field.

Could a researcher with an ethical commitment to “rights” and “dignity” based in the very freedom and autonomy that allows such research create a solitary creature with such rights and dignity, but without a capacity to exercise them in a meaningful way? Wouldn’t programming such priorities in a meaningful way be a prerequisite to creating a recognizable artificial intelligence? Would simulating “freedom” and “dignity” in such a way that an intelligent machine could believe it prove or disprove the basis of human rights? If this view is an error, would a conscious machine be free to believe in such an error, or would it be bound by a certain set of rules to refute it? If we are not equipped to judge the soundness of the machine’s thought, except through tightly bound sets of rules we programmed into the machine, can we know if the machine is intelligent or not? In the end, the success or failure of “artificial intelligence” relies upon the definition of the experiment being offered at the outset. A limited definition which offers a stripped-down, empirical view of human cognition is easier to prove. A more baroque definition which takes into account a great number of unfalsifiable views (which consciousness seems obsessed with) is unprovable by design.

By: John Johnston
Wed, 03 Jun 2009 20:05:25 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-104

John Searle’s Chinese Room “thought experiment” not only fails as a decisive argument against strong AI, but also illustrates one philosopher’s unquestioning adherence to a perspective that presumes the universality, or at least the necessary centrality, of the liberal humanist ego and attendant notions of intentionality, consciousness, and meaning. In her book How We Became Posthuman, Katherine Hayles carefully demonstrates how and why we have moved away from this perspective toward one characterized by instances of distributed cognition, enabled primarily by (but not restricted to) computer-mediated communication and its new forms of agency. Revisiting Searle’s argument here, she makes the case once again, simply and compellingly, this time mobilizing Andy Clark’s EXTENDED (as opposed to BRAINBOUND) model of human cognition. Clark’s model, which draws on contemporary theories of emergence, provides a global, collective, and systemic understanding of cognition, in stark contrast to Searle’s individual consciousness-centered view. This contrast allows Hayles to mark the distance separating Searle’s claims from contemporary approaches to cognition, which are still part of a “transformation we are living through,” she notes. Indeed, as she concedes in a very apt metaphor, Searle’s Chinese Room is hardly dead; rather, and because of “the presuppositions and unconscious assumptions that inform it,” it remains “undead.” In what follows I want to consider what may well be an underlying aspect of this undead state, which is the ambiguous status of biologically based cognitive systems in contemporary AI: they remain essential as models to learn from and mimic, and yet AI’s ultimate objective is to build alternatives that transcend them.

In my recently published book, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, I also revisited Searle’s position, basically siding with the distributed view (i.e., the room “knows” Chinese) as argued by Douglas Hofstadter, Daniel Dennett, and David Chalmers.[1] But I also bring out one of Searle’s less acknowledged critical points. Let me quickly summarize this discussion. In responding to Searle’s Chinese Room argument, Hofstadter et al. advance the “systems reply” – which is very similar to Hayles’s – that although the man in the room does not understand Chinese, the room itself does. Basically, they assert that all the things in the room—the man, the pieces of paper, the code and instruction books, and so forth—constitute a distributed system that exhibits intelligent behavior. Searle himself rejects this argument out of hand, since for him understanding (which he silently substitutes for intelligent behavior) requires conscious mental states, which cannot reasonably be attributed to a room. However, in The Conscious Mind, David Chalmers retorts with a more elaborate counterargument, involving the step-by-step substitution of the man’s neurons with computational demons and then with a single demon in which the neuron-level organization of the human brain is completely duplicated.[2] With these substitutions, the room becomes a dynamical system in which neuronal states are determined by the rules and manipulations of the symbols, and the system as a whole is sufficiently complex to have “the conscious experiences [including qualia] of the original system” (325), that is, of a human being. This rendering of the room as a dynamical system is necessary because, as Chalmers argues, the slips of paper are not merely a pile of formal symbols but “a concrete dynamical system with a causal organization that corresponds directly to the original brain,” and it is “the concrete dynamics among the pieces of paper that gives rise to conscious experience” (325). With his fiction of neuronal substitution and demons, Chalmers thus makes fully visible what was submerged in Searle’s version: the full systemic complexity of the computational room. Chalmers also attacks Searle precisely on the issue of how the computer in the room (or the computational process that is the room) is implemented. He agrees with Searle’s assertion that a computer program is purely syntactical but points out that the program must be implemented physically in a system with “causal dynamics.” In more fully accounting for the causal dynamics of the activities in the Chinese Room, albeit by means of a simulation accomplished with artificial neurons organized like the brain, Chalmers argues that the system would be capable of having conscious states. He thereby joins the ranks of contemporary philosophers who share the view that “the outlook for machine consciousness [and strong AI] is good in principle” (331).

It should be noted that Searle himself never denies the possibility that “a machine could think.”[3] To the contrary, we are machines and we can certainly think, he asserts. But we are biological machines, and intentionality (or consciousness) is a biological phenomenon. Thus his argument really falls into two parts. The first part, illustrated by the Chinese Room thought experiment, asserts a negative: that no program running on a digital computer is capable of intentionality (i.e., consciousness or thought). This also means that “the computational properties of the brain are simply not enough to explain its functioning to produce mental states” (40). The second part, which is under-developed and therefore usually ignored, argues that thinking, or consciousness, is essentially biological in nature and therefore cannot be reproduced without a causal, material system equivalent in complexity to our own biochemical system. It means that thinking requires a body located within—and which would be a part of—the physical world. While the first part of Searle’s argument was (correctly) understood to be a hostile critique of the operational and functionalist approach of early AI, the second now finds wide agreement among contemporary neuroscientists and those with a biologically inspired approach to the building of intelligent machines.[4]

However, whether revisiting it or encountering it for the first time, readers of Searle’s thought experiment may legitimately wonder how and where human consciousness and the experiential self fit into the contemporary perspective that has displaced Searle’s. The most original and exciting theories seem to be emerging from neurophilosophy and the convergence of neuroscience with empirically based philosophical studies of consciousness; that is, from studies that proceed from the double perspective of the neural correlates of consciousness and the rich phenomenology of our variegated experience, which includes focalized attention, sensory perception, memory, feelings and thoughts, dreams, the experience of phantom limbs, and out-of-body experiences. The perspective is functionalist and evolutionary, and thus directed toward explaining not only consciousness but also its adaptational benefits. A striking example is Thomas Metzinger’s recent book, The Ego Tunnel: The Science of the Mind and the Myth of the Self. Those in the Humanities will recognize it as a version of constructivism, but the construction here is performed not by culture but by the human brain. Indeed, the thoroughness with which Metzinger argues that not only the world as we experience it but the experiencing self at its center are two parts of the same functional construct is daunting and not a little unsettling, inasmuch as these constructs of the brain – the world-model and the self-model – are immediate and transparent, while the neural machinery that produces them remains experientially invisible and inaccessible. Our naively realist belief in the solidity of the world we experience and in the seeming continuity of the self dissolves, however, as Metzinger shows how these constructs enable the brain to efficiently integrate multi-leveled systems of information-processing in ways that make us flexible and highly adaptive.

Whether Metzinger’s thesis turns out to be correct or not, there is little doubt among scientists today that our experience of ourselves as conscious individual egos looking out upon and acting within a larger world is a biologically evolved and highly functional endowment that gives us hugely beneficial adaptive capacities, which we are not really capable of rejecting, although we can acquire a sense of their limits and contingency. This biologically evolved sense of a conscious self is of course taken up, shaped, relayed, extended, and valued for itself in a variety of ways by human culture and technology. But to what extent can it also be mimicked or re-created artificially by the information-processing machines that we are now learning how to build and evolve? For Metzinger, the possibility of constructing an artificial Ego Machine – of transforming an information-processing system into a subject of experience – may soon be within our grasp. But the real question for him is the ethical one of whether we should do it or not. In contrast, then, to those in “geek rapture” over the prospect of the Technological Singularity (i.e., the possibility of our initiating a runaway escalation in which human-level intelligent machines build or evolve super-human intelligent machines), Metzinger argues that there are ethical issues here that we must soon face and resolve, since our initial efforts are likely to result in stunted machines that are nevertheless capable of suffering, and the spread of suffering in the world is something that, as ethical beings, we should never allow. At the same time, Metzinger is not immune to the mystique of the Singularity, and in a staged dialogue between the First Postbiotic Philosopher and a Human Being he has the former point out the deficits that stem from our biochemical embodiment: not only our primate brain, violent emotions, and terrible monkey body, but also our ineradicable egotism, endless ability to suffer, and deep existential fear of our individual deaths. Although our Postbiotic successors will have transcended these deficits, they will still find us of considerable research interest, the Philosopher concludes. And here, perhaps without knowing it, Metzinger echoes a theme repeated by a number of contemporary sci-fi writers such as Rudy Rucker and Paul Di Filippo: that biological life, and particularly human emotional life, is “incredibly information-deep and information-rich” (see in particular the latter’s story, “Distributed mind”). Indeed, perhaps one secondary benefit of the quest for AI will be to make us newly appreciative of the special richness and complexity of our biological existence.

By: katehayles
Mon, 01 Jun 2009 14:01:17 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-103

Margie brings up an interesting question about the relation between EXTENDED and Minsky’s “Society of Mind” (lots of small agents running their individual programs). I think the two are probably compatible with one another, but EXTENDED does put more emphasis on the articulation of consciousness with other bodily and external interactions. In Minsky’s model, by contrast, consciousness is strictly an emergent property (an epiphenomenon) and the emphasis falls instead on what kinds of programs the agents are running, and what kinds of emergent properties come out of their interactions.
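To make the picture concrete, here is a minimal sketch of the “Society of Mind” idea in Python, assuming a toy stimulus-response scheme invented for illustration (the Agent class and the voting rule are not Minsky’s own formalism). No single agent understands the task; the response exists only at the level of the tally across many trivial programs.

    class Agent:
        """One mini-mind: it knows a single stimulus-response rule and nothing else."""
        def __init__(self, trigger, action):
            self.trigger = trigger  # the feature this agent watches for
            self.action = action    # the vote it casts when that feature is present

        def react(self, stimulus):
            return self.action if self.trigger in stimulus else None

    def society_response(agents, stimulus):
        """No agent decides; the behavior is tallied from whichever agents fire."""
        votes = [v for v in (a.react(stimulus) for a in agents) if v is not None]
        # The aggregate response is an emergent property of many partial reactions.
        return max(set(votes), key=votes.count) if votes else "no response"

    # A toy society: individually dumb, collectively coherent.
    agents = [Agent("hot", "withdraw"), Agent("hot", "withdraw"),
              Agent("food", "approach"), Agent("hot", "orient")]
    print(society_response(agents, {"hot"}))  # -> "withdraw"

The point of the sketch is the asymmetry of levels: “withdraw” is a fact about the society, not about any agent in it.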

By: Marjorie Coverley Luesebrink
Sat, 30 May 2009 13:25:43 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-102

Flip this Chinese Room –

Katherine Hayles’s piece may be the Chinese Room on another floor (as William Rasch has suggested), yet it does quite a bit more than just re-arrange the furniture; it is a functional remodel. The piece offers a fine opportunity for us and “you” to assess this room once more – to notice that you can eat and sleep here, compose poems, talk to friends, play cricket. In short, if the room is the mind altogether (and EXTENDED as well), then anything that happens, happens here. The salient points of this home improvement package have been pointed out by Joe Tabbi, James Pulizzi, and others on the forum – and their observations reveal some formerly hidden assumptions that made Searle’s room such an uncomfortable place in which to sit.

In early AI theory, Marvin Minsky (in The Society of Mind) advanced the notion that the mind consisted of a huge aggregation of mini-minds, severely limited in their ability, that have evolved to perform specific tasks. In Kate’s Extensive Remodel, “you” can sort baskets of Chinese symbols or cook spaghetti, lay the table, set out candles – all with your inherent set of mini-minds that don’t really know how to do any of the above. The brain simulates the mind.
To “put out the light” is not necessarily to be left in the dark.

By: joseph tabbi
Sat, 30 May 2009 07:36:55 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-101

Empirically, it’s easy enough to dismiss arguments for ‘the end of books’: I agree that printed books, like just about everything in this era of overproduction and ecological destruction, continue to flourish even as the discussion of such works becomes more distributed, less concentrated in classrooms or university press publications. What interests me, though, is the emergence of ‘end-of-books’ discourse along with the cognitive and electronic medial developments of the past few decades. Those developments are so capacious, they bring so many /things/ to consciousness (all those baskets, doors, and rooms adjoining the Chinese room where Western humanists were once able to have a quiet, focused discussion): this new openness, this expansion of levels, while opening possibilities and avenues of exploration, tends to frustrate the conclusiveness and coherence that readers have sought in narrative.

With the determined endlessness of cognitive discourse comes a desire to end something – maybe not ‘books,’ but perhaps the canon of print literature (including the institutions that supported a literary canon) is what we should regard as closed. At least, I’m prepared to take this as a working proposition: the closure of the print canon in the present, as a precondition to a literary emergence (quite unlike and perhaps distinct from the print legacy) in electronic, cognitive environments.

By: katehayles
Fri, 29 May 2009 18:05:40 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-100

Thanks to Abe Geil for his insightful questions and probings. To start with the last question, I think it is not a question of saying that BRAINBOUND is false and that EXTENDED is true, for both models exist not in isolation but embedded within complex historical configurations within which they have certain kinds of resonances and affiliations. As I argued, BRAINBOUND is part of a historical formation that emerged and evolved over the several centuries it took for liberal humanist subjectivity to become the reigning cultural model of personhood, a development that spiraled out into almost every aspect of society and culture, including economics (rational actor theory, for example) and politics. It would be more appropriate to ask what function BRAINBOUND serves, and what other aspects of cultural configurations it reinforces and extends. The same is true of EXTENDED. While it reveals certain shortcomings in earlier models such as BRAINBOUND, without a doubt it has its own limitations and blind spots, which will probably become more visible as it gains dominance and is challenged by yet newer models.

As to consciousness and cognition: cognition is the broader term, in the sense that consciousness is associated with the neocortex (although with inputs from other parts of the brain and indeed other parts of the body), while cognition includes consciousness but also other brain systems (such as the limbic system, for example), the central nervous system (CNS), the peripheral nervous system, the viscera (with their complex feedback loops crucial for the operation of emotions, as Antonio Damasio points out), and other organ and nerve systems, as well as (in EXTENDED) objects in the environment.

Abe makes a good point when he questions whether consciousness now loses its cachet. In the excitement over EXTENDED and its cousins, consciousness is indeed now regarded in some quarters as an epiphenomenon, suggesting that its blind spots make it a free rider on the processes that really count. In my view, this discounting of consciousness is as much an error as thinking consciousness is the be-all and end-all. Surely consciousness adds amazing possibilities for human evolution (as I think William Rasch makes clear in his second post above), including not only symbol formation but the manipulation of symbols and then of symbols that stand for groups of symbols, etc. – in short, theoretical and abstract thought.

By: katehayles
Fri, 29 May 2009 17:51:40 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-99

Nathan’s astute comments here point to the need to clarify what emergence can and cannot offer. The potential of emergence, and also its limitation, is the possibility that the systemic dynamics will come up with new possibilities that humans have not (yet) thought up on their own. Once the system is set in motion, humans do not determine its output beyond having set its initial conditions and the parameters within which the simulation will proceed. There are compelling examples in the field of artificial life where the emergent results of relatively simple systems have indeed been astonishing and, in some cases, practically useful. But nothing dictates that this will be the case; as with evolution, there are successes and failures. Generally speaking, even the successes require far more tweaking than AL proponents like to admit to be genuinely useful in real-world applications.

So emergence, while in my view a powerful and pervasive account of how systems evolve and change, is not the be-all and end-all. A more productive model seems to be a system in which human desire, decision, and choice are joined with the evolutionary dynamics of emergence. For example, in Karl Sims’ art installation “Galapagos” (http://www.karlsims.com/galapagos/index.html), evolved patterns were displayed on twelve computer screens, in front of which were pressure pads. Visitors literally voted with their feet; those patterns that attracted the most visitors for the longest periods were chosen as the parents of the next generation of evolved patterns. The emergent dynamics of the system produced the possibilities, thus taking advantage of emergence as a way to generate surprising or unanticipated results, but the sophisticated aesthetic sense of human observers provided the selection process for what counted as “fit” in that environment. This kind of model partially answers Nathan’s observation about emergence being the “natural” order of things. Emergent processes are indeed pervasive in nature (which, incidentally, does not mean, as Nathan seems to imply, that they lack specificity), but for human purposes the results may or may not be useful or beneficial. All kinds of results are “natural” (including swine flu, tsunamis, earthquakes, wildfires, etc.), but that doesn’t mean they are desirable; they may even be disastrous from a human point of view.
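The division of labor described here – blind variation supplied by the system, selection supplied by human attention – can be sketched in a few lines of Python. This is an illustrative toy only, assuming an invented genome, mutation scheme, and a fake dwell-time function standing in for the visitors on the pressure pads; it is not the code of Sims’s installation.

    import random

    def mutate(genome, rate=0.1):
        """Blind variation: the system proposes forms no one designed directly."""
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    def dwell_time(genome):
        """Stand-in for visitors on the pressure pads: longer attention = fitter.
        Here we fake an aesthetic preference for values near 5.0."""
        return -sum((g - 5.0) ** 2 for g in genome)

    # Twelve "screens," each showing a pattern encoded as a short genome.
    population = [[random.uniform(0, 10) for _ in range(4)] for _ in range(12)]

    for generation in range(20):
        # Human selection: the most-attended patterns become the next parents.
        parents = sorted(population, key=dwell_time, reverse=True)[:4]
        # Emergent variation: offspring are mutated copies of the chosen parents.
        population = [mutate(random.choice(parents)) for _ in range(12)]

    print(max(population, key=dwell_time))  # drifts toward the humanly "preferred" forms

Neither side suffices alone: remove mutate() and nothing surprising appears; remove dwell_time() and nothing counts as fit.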

By: katehayles
Fri, 29 May 2009 17:36:40 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-98

As I indicated earlier in my response to Geoffrey Winthrop-Young, Searle did imagine exactly the situation that James suggests here, of the man in the room who memorizes the rule book. Searle argues that this changes nothing, that the matching of symbols is still mechanical and non-interpretive. One school of response to Searle’s argument suggests not just that the entire room understands Chinese (the systems reply), but that the man’s cognitive system plus all the accompaniments in the room allow the construction of a “virtual” mind, on analogy presumably to techniques in Artificial Life where a “virtual” computer is created to run inside the actual computer through simulation. While the man’s consciousness may not understand Chinese, the “virtual” mind running alongside his consciousness does. The ambiguity of this proposal lies, of course, in the relation of the virtual mind to the conscious mind. Two configurations are possible: the virtual mind runs inside consciousness, or consciousness is encapsulated within the larger virtual mind. In some ways, this ambiguity mirrors the ambiguity James discusses of where language is positioned, inside or outside the mind. One way to think about the feedback loops that connect syntax to semantics is as the co-evolution of the virtual mind and the conscious mind together, just as language and the brain co-evolved together. Incidentally, the co-authored article to which James refers is forthcoming in a special issue of the journal History of the Human Sciences edited by Michel Ferrari (copies available on request).
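The Artificial Life analogy – a virtual machine realized by, but distinct from, the system that hosts it – can be made concrete with a toy interpreter. A minimal sketch, assuming an invented three-instruction machine (nothing here is drawn from the literature under discussion): the host process mechanically follows rules it need not “understand,” while what gets computed belongs to the virtual machine’s level of description.

    def run_virtual_machine(program, inputs):
        """The host steps through rules like the man with the rule book;
        the virtual machine is the pattern those steps realize."""
        stack, inputs = [], list(inputs)
        for op, arg in program:
            if op == "PUSH":     # place a constant on the stack
                stack.append(arg)
            elif op == "READ":   # take the next input symbol
                stack.append(inputs.pop(0))
            elif op == "ADD":    # combine the top two values
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack.pop()

    # The host "follows the rule book"; the addition is a fact about the
    # virtual machine, not about the host's own level of description.
    program = [("READ", None), ("READ", None), ("ADD", None),
               ("PUSH", 10), ("ADD", None)]
    print(run_virtual_machine(program, [3, 4]))  # -> 17

The two configurations named above map onto the sketch: the interpreter (consciousness) can contain the virtual machine, or, on the second reading, the interpreter is itself one component within the larger computation.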

By: katehayles
Fri, 29 May 2009 17:27:51 +0000
http://nationalhumanitiescenter.org/on-the-human/2009/05/distributingdisturbing-the-chinese-room/comment-page-1/#comment-97

Kate’s remarkable dissertation develops the concept of “corridoricity,” or the ways in which architectural features such as corridors, sewers, air shafts, and hallways function in modernist novels as channels of communication between the narrative diegesis and the novels’ reflexive commentary upon their own status as media. These kinds of feedback loops signal one way in which the distinction between syntax and semantics breaks down, as the diegetic and meta-diegetic levels evolve together toward more complex versions of narrative and subjectivity. I recommend it to anyone interested in the point Kate raises here, of how the “status of personhood in modernity” moves toward the person “who observes and constitutes herself as a version of collectivity.”
