Final Thoughts of a Disenchanted Naturalist

In Geoffrey Harpham’s first contribution to “On the Human” he wrote,

One of the most striking features of contemporary intellectual life is the fact that questions formerly reserved for the humanities are today being approached by scientists in various disciplines such as cognitive science, cognitive neuroscience, robotics, artificial life, behavioral genetics and evolutionary biology.

“Approached” is too weak a word. A better word might be “overtaken.” In the 21st century the understanding of ourselves that the humanities have claimed to provide will finally be either replaced by or grounded in scientific knowledge. As I shall argue, the more likely outcome is replacement, and the consequence must be a different role for the humanities from the one to which they have aspired.

Hard science—first physics, then chemistry and biology—got going in the 1600s. Philosophers like Descartes and Leibniz almost immediately noticed its threat to human self-knowledge. But no one really had to choose between scientific explanations of human affairs and those provided by history and the humanities until the last decades of the 20th century. Now, every Friday the “Science Times” reports on how neuroscience is trespassing into domains previously the sole preserve of the interpretive humanities. Neuroscience’s explanations and the traditional ones compete; they cannot both be right. Eventually we will have to choose between human narrative self-understanding and science’s explanations of human affairs. Neuroeconomics, neuroethics, neuro-art history and neuro lit crit are just the tips of an iceberg on a collision course with the ocean liner of human self-knowledge.

Let’s see why we will soon have to face a choice we’ve been able to postpone for 400 years.

It is hard to challenge the hard sciences’ basic picture of reality. That is because it began with everyday experience, recursively reconstructing and replacing the everyday beliefs that turned out to be wrong by the standards of everyday experience. The result, rendered unrecognizable to everyday belief after 400 years or so, is contemporary physics, chemistry and biology. Why date science only to the 1600s? After all, mathematics dates back to Euclid, and Archimedes made empirical discoveries in the 3rd century BC. But 1638 was when Galileo first showed that a little thought is all we need to undermine the mistaken belief that neither Archimedes nor Aristotle had seen through, and that stood in the way of science.

Galileo offered a thought-experiment that showed, contrary to common beliefs, that objects can move without any forces pushing them along at all. It sounds trivial, and yet this was the breakthrough that made physics and the rest of modern science possible. Galileo’s reasoning was undeniable: roll a ball down an incline and it speeds up; roll it up an incline and it slows down. A frictionless horizontal surface is neither uphill nor downhill, so a ball rolled onto it can neither speed up nor slow down: it will have to go on forever. Stands to reason, by common sense. But that simple bit of reasoning destroyed the Aristotelian world-picture and ushered in science. Starting there, 400 years of continually remodeling everyday experience has produced a description of reality incompatible with common sense—including quantum mechanics, general relativity, natural selection and neuroscience.

Descartes and Leibniz made important contributions to science’s 17th-century “take off.” But they saw exactly why science would be hard to reconcile with historical explanation, the human “sciences,” the humanities, theology and our own interior psychological monologues. These undertakings trade on a universal, culturally inherited “understanding” that interprets human affairs via narratives that “make sense” of what we do. Interpretation is supposed to explain events, usually in terms of motivations that participants themselves recognize, sometimes by uncovering meanings the participants don’t themselves appreciate.

Natural science deals only in momentum and force, elements and compounds, genes and fitness, neurotransmitters and synapses. These things are not enough to give us what introspection tells us we have: meaningful thoughts about ourselves and the world that bring about our actions. Philosophers since Descartes have agreed with introspection, and they have provided fiendishly clever arguments for the same conclusion. These arguments ruled science out of the business of explaining our actions because it cannot take thoughts seriously as causes of anything.

Descartes and Leibniz showed that thinking about one’s self, or for that matter anything else, is something no purely physical thing, no matter how big or how complicated, can do. What is most obvious to introspection is that thoughts are about something. When I think of Paris, there is a place 3000 miles away from my brain, and my thoughts are about it. The trouble is, as Leibniz specifically showed, no chunk of physical matter could be “about” anything. The size, shape, composition or any other physical fact about neural circuits is not enough to make them be about anything. Therefore, thought can’t be physical, and that goes for emotions and sensations too. Some influential philosophers still argue that way.

Neuroscientists and neurophilosophers have to figure out what is wrong with this and similar arguments. Or they have to conclude that interpretation, the stock in trade of the humanities, does not after all really explain much of anything at all. What science can’t accept is some “off-limits” sign at the boundary of the interpretative disciplines.

Ever since Galileo, science has been strongly committed to the unification of theories from different disciplines. It cannot accept that the right explanations of human activities must be logically incompatible with the rest of science, or even just independent of it. If science were prepared to settle for less than unification, the difficulty of reconciling quantum mechanics and general relativity wouldn’t be the biggest problem in physics. Biology would not accept the gene as real until it was shown to have a physical structure—DNA—that could do the work geneticists assigned to the gene. For exactly the same reason, science can’t accept interpretation as providing knowledge of human affairs if it can’t at least in principle be absorbed into, perhaps even reduced to, neuroscience.

That’s the job of neurophilosophy.

This problem, that thoughts about ourselves—or anything else for that matter—couldn’t be physical, was for a long time purely academic. Scientists had enough on their plates for 400 years just showing how physical processes bring about chemical processes, and through them biological ones. But now neuroscientists are learning how chemical and biological events bring about the brain processes that actually produce everything the body does, including speech and all other actions. Moreover, Nobel Prize-winning neurogenomics has already combined with fMRI and clever psychophysical experiments to reveal how misleading introspection and interpretation are about how the brain drives behavior.

These findings cannot be reconciled with explanation by interpretation. The problem they raise for the humanities can no longer be postponed. Must science write off interpretation the way it wrote off phlogiston theory—a nice try but wrong? Increasingly, the answer that neuroscience gives to this question is “afraid so.”

Few people are prepared to treat history, (auto)biography and the human sciences like folklore. The reason is obvious. The narratives of history, the humanities and literature provide us with the feeling that we understand what they seek to explain. At their best they also trigger emotions we prize as marks of great art.

But that feeling of understanding, that psychological relief from the itch of curiosity, is not the same thing as knowledge. It is not even a mark of it, as children’s bedtime stories reveal. If the humanities and history provide only feelings (ones explained by neuroscience), that will not be enough to defend their claims to knowledge.

The only solution to the problem faced by the humanities, history and (auto)biography is to show that interpretation can somehow be grounded in neuroscience. That is job #1 for neurophilosophy. And the odds are against it. If this project doesn’t work out, science will have to face plan B: treating the humanities the way we treat the arts, indispensable parts of human experience but not to be mistaken for contributions to knowledge.

Geoff Harpham concluded his original contribution to “On the Human” thus:

We stand today at a critical juncture not just in the history of disciplines but of human self-understanding, one that presents remarkable and unprecedented opportunities for thinkers of all descriptions. A rich, deep and extended conversation between humanists and scientists on the question of the human could have implications well beyond the academy. It could result in the rejuvenation of many disciplines, and even in a reconfiguration of disciplines themselves—in short, a new golden age.

Nothing could rejuvenate the humanities more than the recognition that they do not compete with the sciences in providing knowledge, and that the sciences can never compete with them in providing pleasure.

5 comments to Final Thoughts of a Disenchanted Naturalist

  • David Duffy

    Non-overlapping magisteria of knowledge and pleasure ;)

    I have a couple of quibbles:

“Biology would not accept the gene as real until it was shown to have a physical structure”: this is as untrue as similar statements about chemistry and atoms.

“Must science write off interpretation the way it wrote off phlogiston theory—a nice try but wrong?”: I won’t address history, but in the case of personality and temperament, one of the main tools is asking individuals how they would characterise themselves, e.g., “I often avoid meeting strangers because I lack confidence with people I do not know”. Responses to a number of such questions can be mathematically analysed to extract underlying consistent features that psychologists believe correspond to characteristics of individual brain physiology. And these stable personality traits can be correlated with measured behaviour, mental illness, structure, physiology and genotype (well, to some extent). Famously, the first two higher-order factors (“Harm Avoidance/Neuroticism”, “Novelty Seeking/Extraversion”) correspond to the Greek humours. So introspective interpretations (or perhaps observation and recollection of one’s own behaviour) that cohere with scientific understanding are alive and well, at least in this corner of psychology: personality, psychometrics, behaviour genetics.

  • Tom Clark

    Rosenberg thinks that science and “explanation by interpretation” are incompatible and mutually exclusive, so we must choose between them. Merely physical systems such as ourselves can’t really refer to external goings-on, so higher-level (e.g., historical) accounts of human action involving motivations, meanings, intentions, purposes, desires, etc. are mere simulacra of knowledge. But microphysical and neural explanations don’t compete with human-level explanations; rather, they elucidate the mechanisms subserving reference and cognition involving abstract concepts, including those Rosenberg himself deploys. Understanding these mechanisms constitutes a philo-scientific research agenda; that there may be no canonical physicalist account of reference doesn’t mean such an account is impossible. Indeed, his debunking of non-scientific explanations, here and in his book, makes use of a higher-level referential vocabulary that, according to his own thesis, doesn’t convey knowledge. That we understand his thesis strongly suggests it’s false. Unless, of course, we’re all under the *illusion* of understanding, in which case so is Rosenberg.

    We can and must have both a physicalist story about the production of speech and behavior and an intentionalist, purposive, human-level psychological story. That the latter is sometimes misleading doesn’t impugn its overall utility in explaining human action. And it’s the utility, the predictive power of explanations, both in the hard sciences and the humanities, that gives us knowledge and that certifies their elements (fermions and bosons, beliefs and desires) as real. These two ontologies are not in competition or mutually exclusive. The humanities and the human sciences, irreplaceable in their explanations, have nothing to fear from physicalism or naturalism.

  • Frank Williams

    Rosenberg has implied, and elsewhere stated, that free will is an illusion, and that human action and behavior are caused and pre-determined by brain-states, as explained (or with the confidence – faith? – that they will soon be explained) by neuroscience. I have two misgivings. First, he seems to neglect the claims by some very competent neuroscientists and philosophers about the limitations of neuroscience: Tallis, Gazzaniga, Pigliucci, et al. Second, he seems unaware of a significant objection to determinism.
    First. When neuroscientists, by examining my brain-states, using fMRIs or some more advanced techniques, can correctly describe what I was thinking or seeing or saying (and in what language I was saying it) when the brain-scan was taken (WITHOUT being told what I was doing when the scan was taken) . . . well, until they can do that, I’ll remain skeptical that it CAN be done. I’m an empiricist – just claiming that something can or one day will be done, or that research seems to be moving in that direction, is little more than hopeful hand-waving! Such research moves too quickly from correlations to causes, from necessary to sufficient conditions, and from small studies to sweeping generalizations. For comments about the latter, see publications by John Ioannidis.
    Second, and in my opinion more significant. Karl Popper wrote:
    “For according to determinism, any theories – such as, say, determinism – are held because of a certain physical structure of the holder (perhaps of his brain). Accordingly we are deceiving ourselves (and are physically so determined as to deceive ourselves) whenever we believe that there are such things as arguments or reasons which make us accept determinism. Or in other words, physical determinism is a theory which, if it is true, is not arguable, since it must explain all our reactions, including what appear to us as beliefs based on arguments, as due to purely physical conditions.” [Objective Knowledge, pp. 223–24]
    A typical determinist philosopher’s response to this is that determinism does NOT imply that we do not reason or argue; rather, it implies that all our reasoning is caused by antecedent conditions, and that does not at all mean that all our reasoning is incorrect. Well, true enough; but it also does not mean that any of our reasoning is correct, which raises the question, “How can we tell when it is or isn’t correct?” The answer, it seems to me, is that if determinism is true then we can’t tell.
    Consider an illustration. (I do not claim that determinists think the brain is just a very complex computer. However, a computer is a useful example of a completely deterministic system.) Suppose a computer is so constructed that for some calculations it always gives the wrong answers and for others it always provides the right answers. Is there any way that computer could double-check (OK, a little anthropomorphism here) its own calculations and correct the mistaken ones? No, because it always “thinks” that its calculations are correct; it will always calculate the way it was predetermined to calculate by its hardware and software. Likewise, if determinism is true then, as Popper noted, whatever we think is what we are predetermined to think by antecedent conditions. This doesn’t imply that determinism is false, but only that if it is true then we cannot have any adequate basis for thinking that ANY of our reasoning is trustworthy. If we provide reasons for view X, we do so merely because we were predetermined to do so; and those who offer reasons against view X do so merely because that’s what they were predetermined to do. Whether any of the reasons offered are good or trustworthy is irrelevant. This approach was spelled out in detail by J. N. Jordan [“Determinism’s Dilemma,” Review of Metaphysics, Sept. 1969].
    David Barash puts the upshot of all this nicely: “It seems clear that human beings are the most flexible and adaptable creatures on earth, capable of choosing their own destiny. At the same time, it is also clear that there is a definite genetic influence on many aspects of our behavior, especially when it comes to sex, violence, parenting, even tendencies for altruism and selfishness. The more we understand that influence, the more free we are to chart our own course.” http://faculty.washington.edu/dpbarash/faq.html The humanities provide needed help in charting our course.
    So, do we ever have good, trustworthy (but admittedly never infallible) reasons for believing some things? I think we do, and I suspect that you do too. Thus a closing apparent paradox:
    If determinism is true, then we can’t have good, trustworthy reasons for believing anything (including for believing that determinism is true); but we do have some (at least fairly) good reasons for believing that determinism (and lots of other stuff) is true. Therefore (by modus tollens) determinism is false.

  • Charles T. Wolverton

    I agree with Tom Clark that human behavior can be described in different vocabularies for different purposes – some scientific, others “human-level”. But I don’t see Prof Rosenberg as disagreeing with that. He specifically addresses “knowledge”, which requires that a description pass muster within an appropriate peer group (Rorty’s Sellarsian “truth is what your peers let you get away with saying”), and observes that as the scientific aspects of human behavior are better understood, the peer group for assessing the knowledge-worthiness of descriptions in the human-level vocabularies will increasingly include those who have relevant scientific knowledge, and therefore will require that the descriptions be scientifically sound if they are to be accepted as “knowledge” (in the above sense). Of course, even descriptions that fail that test may be praised on other grounds, e.g., aesthetic. If I’m right, then Prof Rosenberg’s claim seems no more than – if he’ll pardon the expression – common sense.

  • Charles T. Wolverton

    The essence of Frank Williams’ Popper quote seems to be that determinism implies that decision making is an illusion. To which I can only reply “doh!”. For a somewhat more sophisticated critique of the full passage from which the quote is taken, see:

    http://www.psych.umn.edu/faculty/meehlp/100Determinism-freedomMind-body.pdf

    And while I agree with Williams that we can’t tell if our “reasoning” – i.e., any particular argument for determinism – is correct, neither can we tell if any argument for any position is correct. But one must nevertheless argue on – or not.

    There are obvious, possibly disconcerting consequences to strict determinism, but that doesn’t negate it. Feigl and Meehl suggest that Popper’s fear of “the nightmare of determinism” may rest on a confusion with strict predictability. But absent predictability, there’s really nothing to fear – the illusion works just fine.