Contemplating Singularity

Most researchers agree that there is no reason in principle why we will not eventually develop conscious machines that rival or surpass human intelligence. If we are crossing into a new era of the posthuman, how did we get here? And how should we understand the process?

Cultural theorists have addressed the topic of the posthuman singularity and how, if at all, humanity will cross that divide. Most scholars have focused on rhetorical and discursive practices: the metaphors and narratives, the intermediation of scientific texts, science fiction, electronic texts, film, and other elements of the discursive field enabling the posthuman imaginary. While recognizing that posthumans, cyborgs and other tropes are technological objects as well as discursive formations, they have directed their attention less toward analyzing the material systems and processes of the technologies and more toward the narratives and ideological discourses that empower them. We speak about machines and discourses “co-constituting” one another, but in practice we tend to treat discursive formations as preceding and, to a certain extent, breathing life into our machines. The most far-reaching and sustained analysis of these problems has been offered by N. Katherine Hayles in her two recent books, How We Became Posthuman and My Mother Was a Computer. Hayles considers it possible that machines and humans may someday interpenetrate. But she rejects as highly problematic, and in any case not yet proven, the claim that the universe is fundamentally digital, the notion that a Universal Computer generates reality, a claim important to the positions staked out by proponents of the posthuman singularity such as Morowitz, Kurzweil, Wolfram and Moravec. For the time being, Hayles argues, human consciousness and perception are essentially analog; indeed, she argues, even the world of digital computation is currently sandwiched between analog inputs and outputs for human interpreters. How we will become posthuman, in Hayles’ account, will be through interoperational feedback loops between our current mixed analog-digital reality and widening areas of digital processing. Metaphors, narratives and the other interpretive linguistic modes we use to make sense of the world do the work of conditioning us to behave as if we and the world were digital.

I propose to circumvent the issue of an apocalyptic end of the human and our replacement by a new form of Robo Sapiens by drawing upon the work of anthropologists, philosophers, language theorists and, more recently, cognitive scientists, shaping the results of their research into a new argument for the co-evolution of humans and technics, specifically the technics of language and the material media of inscription practices. The general thrust of this line of thinking is best captured in Andy Clark’s phrase, “We have always been cyborgs.” From the first “human singularity” to our present incarnation, human being has been shaped through a complicated co-evolutionary entanglement with language, technics and communicational media.

Is there any foundation for relating this approach to the biological evolution of human cognition to a theory of signification and the notion of media machines? Terrence Deacon, Merlin Donald and others have pursued this question deep into the structure of symbolic communication and its embodiment in the neural architecture of evolving human brains. Their work on the evolution of language is suggestive for considering the formative power of media technologies in shaping the human, and for some of the critical issues in current debates about posthumanity. For Deacon and for Donald, what truly distinguishes humans from other anthropoids is the ability to make symbolic reference. This is their version of the Singularity: Homo symbolicus, the human singularity. Although language evolution in humans could not have happened without the tightly coupled evolution of physiological, anatomical and neurological structures supporting speech, the crucial driver of these processes, according to Deacon, was outside the brain: namely, human cultural evolution. The first step across the symbolic threshold was most likely taken by an australopithecine with roughly the cognitive capabilities of a modern chimpanzee. Symbolic communication did not spontaneously emerge as a result of steady evolution in the size and complexity of hominid brains. Rather, it emerged as a solution to a cultural problem. To be sure, language could not have arisen without a primitive, prerequisite level of organization and development of the neurological substrates that support it. But in Deacon’s view those biological developments were more directly driven by the social and cultural pressures to regulate reproductive behavior in order to take advantage of the hunting-provisioning strategies available to early stone-tool-using hominids. Deacon argues that this required the establishment of alliances, promises and obligations linking reproductive pairs to the social (kin) groups of which they were a part.
Such relationships could not be handled by systems of animal calls, postures and display behaviors available to apes and other animals and could only be regulated by symbolic means. A contract of this sort has no location in space, no physical form of any kind. It exists only as an idea shared among those committed to honoring and enforcing it. Without symbols, no matter how crude in their early incarnation, that referred publicly and unambiguously to certain abstract social relationships and their future extensions, including reciprocal obligations and prohibitions, hominids could not have taken advantage of the critical resources available to them as habitual hunters. In short, symbolic culture was a response to a reproductive problem that only symbols could solve: the imperative of representing a social contract. What was at stake here was not the creation of social behavior by the social contract as described by Rousseau, but rather the translation of social behavior into symbolic form.

Once the threshold to symbolic communication had been crossed, natural selection shifted in dramatic ways. Deacon bases his model on James Mark Baldwin’s original proposals for treating behavioral adaptation and modification as a co-evolutionary force that can affect regular Darwinian selection. Baldwinian evolution treats learning and behavioral flexibility as a force amplifying and biasing natural selection by enabling individuals to modify the context of natural selection that affects their future offspring. Deacon uses Baldwinian evolution in a provocative way to address the question of the co-evolution of language and the brain. Though not itself alive or capable of reproduction, language, Deacon argues, should be regarded as an independent life form that colonizes and parasitizes human brains, using them to reproduce. Although this is at best an analogy (the parasitic model being too extreme), it is useful to note that while the information constituting a language is not an organized animate being, it is nonetheless capable of being an integrated adaptive entity evolving with respect to its human hosts. This point becomes more salient when we think of language as carried by communication systems and examine the effects of media, including electronic media, more broadly.

For Deacon, the most important feature to recognize in the adaptation of language to its host is that languages are social and cultural entities that have evolved with respect to the forces of selection imposed by human users. Deacon argues that the greater computational demands of symbol-use launched selection pressure for increased prefrontalization, more efficient articulatory and auditory capacities, and a suite of ancillary capacities and predispositions that eased the use of these new tools of communication and thought. Each assimilated change added to the selection pressures that led to the restructuring of hominid brains.

In Deacon’s theory evolutionary selection on the prefrontal cortex was crucial in bringing about the construction of the distributed mnemonic architecture that supports learning and analysis of higher-order associative relationships constitutive of symbolic reference. The marked increase in brain size over apes and the beginnings of a stone tool record are the fossil remnant effects of the beginnings of symbol use. Stone tools and symbols were the architects of the Australopithecus-Homo transition and not its consequences.

Symbolic reference is not only the source of the human singularity; it is also the source of subject formation in all its varied manifestations. Deacon bases his theory of reference on (arguably a modified version of) Charles Sanders Peirce’s semiotics. Peirce distinguished iconic, indexical and symbolic forms of reference: icons are mediated by similarity between sign and object; indices are mediated by some physical or temporal connection between sign and object; and symbols, composed of relations among indices, are mediated by formal or conventional links rather than by any more direct neurological connection between sign and object.

Supported by the evidence of contemporary neuroscience on the plasticity of the neocortex and its capacity to adapt to the intricate challenges of a changing cognitive environment, Deacon argues that rather than being rigidly hardwired into structures inside the brain, symbolic communication created a mode of extrabiological inheritance with a powerful and complex character and an autonomous life of its own. The individual mind is a hybrid product, partly biological and partly ecological in origin, shaped by a distributed external network whose properties are constantly changing. The leap to the symbolizing mind did not depend on a built-in, hard-wired tendency to symbolize reality. The direction of flow was from culture to the individual mind, from outside to inside. A number of theorists, including Andy Clark and Katherine Hayles, have been interested in expanding this analysis to include media other than speech and writing, especially technologically mediated and computer-based forms of communication. It is to that argument I want to turn now.

In several books and pathbreaking articles Andy Clark has developed a compelling thesis about what he calls the “extended mind,” which provides the perfect bridge between Deacon’s work on the evolution of symbolic reference and our considerations of media in the posthuman singularity. In the EXTENDED model of cognition, thinking and cognition depend directly and noninstrumentally upon the ongoing work of the body and/or the extraorganismic environment. According to Clark:

According to EXTENDED, the actual local operations that realize certain forms of human cognizing include inextricable tangles of feedback, feed-forward, and feed-around loops: loops that promiscuously crisscross the boundaries of brain, body, and world. The local mechanisms of mind, if this is correct, are not all in the head. Cognition leaks out into body and the world.

In discussing the parity principle at the basis of their important paper on the extended mind, Clark and David Chalmers argue that when the human organism is linked with an external entity in a two-way interaction, the coupled system, consisting of components both external and internal to the brain, should be seen as a cognitive system in its own right. All the components, including the external ones, play an active causal role and jointly govern behavior in the same way that cognition usually does. If removing the external component causes the system’s behavioral competence to drop, then that component should be viewed as just as much a causal factor in the cognitive process as if it were wholly in the head. In Clark and Chalmers’ vision of cognition the boundary between external and internal perception and action disappears, so that iPhones, calculators, computational aids and less exotic cultural props such as the tray of letters in a game of Scrabble become components of the extended mind. In the years since they first published their paper (1998), Chalmers has become convinced that the mind is most likely extended even more widely than to the domain of beliefs and specifically cognitive processes. What about extended desires, extended reasoning, extended perception, imagination and emotions?

I think there is no principled reason why the physical basis of consciousness could not be extended in a similar way. It is probably so extended in some possible worlds: one could imagine that some of the neuronal correlates of consciousness are replaced by a module on one’s belt, for example. There may even be worlds where what is perceived in the environment is itself a direct element of consciousness.

Brain-machine interfaces such as cochlear implants, artificial prosthetic hippocampus chips, retinal implants and DARPA’s “brain-in-the-loop” imaging systems for its Cognitive Threat Awareness Program are all examples of where the extended mind might be heading.

The Extended Mind treatment of language in terms of hybrid representational forms, coordination dynamics and complementarity between biological and artifactual contributions provides a supportive framework for Hayles’ theory of intermediation described above, by offering an account of how the transactions between bodies and our inscription practices might take place and of how to understand the “entanglement” of media with the formation of human subjects that Hayles describes. The key point in Clark’s model is that language is fundamentally an external resource; even processes of internal thought, silent rehearsal, and other forms of “off-line” linguaform representation for problem-solving are internal recapitulations of the relevant external vehicles. Of course, there are internal representations in this model, but Clark and Chalmers part company with defenders of neural mentalese (Churchland) or a hardwired language of thought (Fodor). Stressing hybrid representational forms and the coordination dynamics of a brain that is fundamentally a pattern-completing engine, they propose that the external artifactual resources of the symbolic environment are co-opted without being replicated by special biological structures or translated into another internal code. Exposure to external material symbols and epistemic artifacts does not result in the installation of new internal representational forms in the brain, nor, as Dennett proposed, in the installation of a new virtual serial machine via “myriad microsettings in the plasticity of the brain.”

What then of the posthuman? Are we transitioning to some new form of self adapted to our environment of ubiquitous computing technology, and if so, how is this self assembled and transformed by the machinic processes of our technoscientific milieu? Since the rise of Homo sapiens between 200,000 and 100,000 years ago, there has been little change in brain size or, as far as can be determined, in brain structure. A critical contributing factor to the rapid cultural evolution that took off with Sapiens, and has continued at an ever-increasing pace since, is the development of supplements to individual internal biological memory in the form of visuographic systems and external memory media, especially written records and other forms of symbolic storage. Rather than limiting us to our neural architecture, these external material supports have only enhanced the symbolizing power of the mind. In a sense, the recent development of the internet and distributed forms of electronic communication only further accelerates a process that has defined and shaped human being since that first singularity. From the perspective of the work in evolutionary cognitive science we have discussed, any change in the way information gets processed and represented inevitably constitutes a change in the cognitive economy of the subject, a difference in psychic architecture and, ultimately, in consciousness itself.

2 comments to Contemplating Singularity

  • This is an excellent exposition, thanks. The theory of the extended mind also has interesting roots in the work of Gregory Bateson and Humberto Maturana, where cognition is described in the context of cybernetics. Another interesting association that comes to mind is John Searle’s work on intentionality, reasoning and speech acts, where he describes a mechanism by which speech acts constitute abstract reasons for action which are in a sense semi-independent of the agents who create them, and are supported by a wider cultural or social consensus (in the making of promises, for example).

    Another interesting connection I have found has to do with understanding the contour of an individual mind as depending on the ratio between the volume of information transferred within the central nervous system and the volume of information exchanged between the nervous system and its environment via sensory/motor interaction. As soon as this ratio starts to converge from a very large number toward 1 (as a result of various technological augmentations), the contour of the mind as residing in the brain will dissolve.
    The idea is further developed here: http://spacecollective.org/Spaceweaver/3680/My-cranium-is-open-source

  • I’m curious about your rhetoric, which seems something of a bait-and-switch. Your first and second paragraphs read as though you’re going to consider arguments about a posthuman singularity in which intelligent machines eclipse us at some point in the future, a notion that’s been debated to death for quite a while. But then you set that discussion aside at the beginning of paragraph three and just leave it there flopping beside the road. Instead you undertake a discussion of the extended mind.

    OK. It’s not that I’m particularly interested in yet another discussion of the transhuman posthuman – without an analytically useful notion of intelligence such discussions are rather empty. But why bother even bringing up that particular notion of posthuman singularity at all if you’re not going to do anything but displace it by, in effect, redefining the human as always already cyborg and posthuman? Are you implying that the transhumanists are mistaken about the object of their desire/anxiety? Or that we’re going to transcend ourselves independently of whether or not those superintelligent machines ooze up out of the silicon?