Do You Know What You’re Doing?

Do you know what you’re doing?

Maybe Not.

In a remarkable archival study, Pelham and colleagues (2002: 474) found that “women were about 18% more likely to move to states with names resembling their first names than they should have been based on chance” — 36% more likely for the perfect matches Virginia and Georgia.  They also found that men named Geoffrey or George were 42% more likely than expected to be geoscientists, relative to the frequency of control names such as Daniel and Bennie (Pelham et al. 2002: 480).  Apparently, a city, or a job, won’t smell the same with just any name.  I’m not suggesting, nor does the study suggest, that names are the only thing that matters, but for big decisions like these, it’s disconcerting that they matter at all.  I’d not expect George to justify his decision to spend his life among rocks by appealing to the first three letters of his name — and I doubt George would, either.   The best guess is that such influences most often proceed unconsciously: we don’t know what we’re doing, or (a bit more precisely) why we’re doing it, and in many cases, I submit, we wouldn’t feel much like a “rational animal” if we did.

For example, in a diabolical demonstration by Dutton and Aron (1974), an attractive woman approaches men in a park and asks them to fill out a questionnaire, afterwards giving respondents her phone number and offering to discuss the study further.   Which brave souls will seize the opportunity and ask her out? Well, some participants were approached on a scary footbridge swaying over a deep gorge, while others were approached after they had crossed the bridge and were relaxing on a park bench.  The result: 65% of men in the “bridge condition” asked our heroine for a date, compared to 30% in the “bench condition.” Evidently, the men in the bridge condition misattributed arousal due to fear — they were often perspiring, short of breath, with a rapidly beating heart — to arousal due to sexual interest.  Are you nervous because she’s attractive, or is she attractive because you’re nervous?

I realize you think you know what you’re doing.  I think I know what I’m doing too.  But I don’t know that this conviction can be trusted.  In Johansson and colleagues’ (2005; cf. Johansson et al. 2006) studies of “choice blindness,” people were shown pairs of photographs of female faces and asked to choose the one they found more attractive.  For 12 trials nothing was amiss, but in 3 trials the experimenter contrived to treat the photo people did not choose as though they had chosen it.  In no more than 26% of the manipulated trials was the manipulation detected, regardless of whether the paired faces were high or low in similarity. And when people were asked to explain their choices for 3 of the non-manipulated trials and 3 of the manipulated trials (e.g., “she looks very hot,” “I like earrings”), the explanations were effectively indistinguishable. In other words, there was next to nothing in the explanations, such as evidence of deceit or hesitation, to differentiate the reasons participants gave for the choices they did make from the reasons they gave for the choices they didn’t make.  Apparently, though they didn’t know what they did, they had no trouble coming up with reasons why they did it.

If I’m right, such experiments should have you doubting the extent to which you exert “rational control” over your behavior.  Of course, a mere two or three studies, however exciting, shouldn’t convince you, but it turns out there are reams of experiments tending in the same unsettling direction (Wegner 2002; Wilson 2002; Stanovich 2004; Haidt 2006; Haybron 2008; Greene forthcoming; Bargh and Uleman 1989; Bornstein and Pittman 1992; Wyer 1997; Chaiken and Trope 1999; Hassin et al. 2005).  I’m not exactly sure what should be made of all this, but I’m sure something needs to be made of it, and when this gets done, I’m pretty sure some familiar conceptions of the human will have to be made over.  But that’s a lot for one post, and I’ll have to save it for my book-to-be, A Natural History of the Self.

Meanwhile, I’ll ask it again:

Do you know what you’re doing?

And why are you doing it?

16 comments to Do You Know What You’re Doing?

  • Won’t our conceptions of the human have to be made over only if those conceptions painted a particularly rational picture of humans to begin with? If I am being honest, I’ll admit that I’m buying a Coke not just because I’m thirsty, but also because I saw that colorful sign on the bus, that cute girl in Meta-ethics seminar was drinking one, and the cooler they are in is so well lit. I might imagine Perfectly Rational Agents in the seminar room, but that doesn’t mean that I think humanity is constituted in our similarity to them. Even Plato included the appetites and spirit as parts of the human soul …

    So if we are in fact pushed around by all these forces we don’t often notice, does that change which ones we think *ought* to push us around? Does it change the nature of the thing that is pushed around (if we accept that such a thing exists!)?

  • It is unclear to me that any of the subjects of the studies John Doris mentions did not know what they were doing. Some did not know all of the factors that led them to do what they were doing. Some subjects in the study by Johansson et al. quickly forgot what they had done (which face they had chosen as more attractive) and were able to rationalize choices they wrongly thought they had made.

    So, the studies do not show that you do not know what you are doing, although they show that you may not know everything about why you are doing it.

    Do the studies show that you do not exercise considerable “rational control” over your behavior? Maybe they show that you do not exercise “total rational control.” You may be influenced by nonrational considerations of a sort you don’t recognize as influencing you. It may still be true that considerable rational control is involved in moving to a new state, given the decision to move there. And some rational control may be involved in asking someone out. Maybe the choice of which face is more attractive is relatively random, but some rational control may be involved in doing whatever has to be done in order to indicate which photo is more attractive.

  • I agree with both Doris and Harman. What is interesting here is how these facts do or do not fit with people’s intuitions about where their behavior comes from. I agree with Doris that these findings will come as a surprise to most people – lay and expert alike. We probably all would have thought that our behavior was less susceptible to these unconscious influences than it turns out to be. However, I also think these findings could lead to the wrong intuitions – an over-correction of sorts. I have seen people discuss these kinds of findings as if they prove that our intentions don’t matter, or even that we do not have free will. That, to me, seems like an overreaction.

    On a similar note, these kinds of effects tend not to wash out individual differences in behavior. Even when aroused, some people will ask out the beautiful women and some won’t. When told to act like a prison guard, some will be more abusive than others. Of course Doris wasn’t saying otherwise, but as a personality psychologist I feel compelled to point this out because I have heard many people interpret these findings as the death knell for individual differences.

    Even more interesting, however, is that individual differences, or personality traits, do not solve the problem of agency that Doris raises here. Just because people still act differently from one another even when they are in the same situation does not mean that their actions are under personal control. For better or for worse, even dispositional influences on behavior sometimes (frequently?) operate without conscious awareness.

    Nevertheless, I am still hopeful that agency/intention/personal control plays an important role in behavior. Even after taking into account what we know about situational and dispositional influences on behavior, there is a lot of unexplained variance left over, and that leaves plenty of room for very important kinds of personal control. How we would demonstrate empirically that this unexplained variance is associated with personal control, I haven’t a clue.

  • Lauren Olin

    I agree that these studies raise doubts about the degree to which we’re aware of why we do things in some situations. It’s not altogether surprising that we lack insight into the full range of factors underpinning life decisions just as we do in making decisions about smaller things, like what to have for lunch. But does it follow that there is something inherently irrational about making choices on the basis of hunches or instinct?

    Duped participants in the choice blindness study might be confabulating, but this doesn’t necessarily reflect a lack of rational control with respect to their desires at the moment of choice. The confederate on the bridge must have appeared more attractive to the subjects she stopped – perhaps because they were anxious, perhaps because she struck them as daring or vulnerable – but it doesn’t follow that asking her out reflects a lack of rational control. Some subjects decided not to. Maybe George didn’t know that the first letters of his name would steer him to a career in geology, but that doesn’t mean it didn’t feel right to him to become a geologist, and it doesn’t exclude the exercise of rational control in his pursuit of that goal. In fact, there is no reason to think he’s not happier among rocks.

    I’m not sure what to make of all this either. But it seems unclear that acting partly on the basis of unconsciously prompted intuitions indicates a lack of rational control more than it indicates, say, a healthy lack of neuroticism.

  • Perhaps this distinction is useful: between the factors which cause an agent to act in a certain way and the reasons for which an agent acts. The former is simply an exhaustive list of all the causal forces, psychological or otherwise, which have pushed an agent towards doing X. The latter, however, is an essentially subjective feature of action: the reasons which the agent, from his or her perspective, takes to be the ones for which he or she is acting.

    What these kinds of studies show is that the list of causal factors is larger than we might have supposed. What they do NOT show is that we do not act for reasons. This, for many philosophers, is the important component of action, the one which generates moral responsibility, connects with a person’s character, etc.

  • Shaun Nichols

    As usual, John Doris makes an eloquent challenge to received ways of thinking about ourselves. But I manage to stave off depression by consoling myself that the studies he discusses don’t really undermine the idea that we do know what we’re doing for important decisions. Admittedly, it is hard to define what will count as important, so let me try to argue that the results of two of the most striking studies pose less of a threat than it might seem.

    First, consider the Georgia goes to Georgia study. John quotes from the Pelham study: “women were about 18% more likely to move to states with names resembling their first names than they should have been based on chance” — 36% more likely for the perfect matches Virginia and Georgia. This is fairly shocking at first glance. But how much of behavior does it actually explain? To put it in perspective, let’s just assume that, since there are 50 states, the chance that any given state will be selected by a mover is .02. (This isn’t accurate of course, since there are different population levels in the 50 states, but it’s good enough to make the point.) Given that assumption, the baseline chance that a woman named Georgia will move to any particular state, Georgia included, is roughly .02, and if she’s 36% more likely than that to move to Georgia, the name effect bumps the likelihood up to .0272. That is, the difference introduced by the name effect is just a bit more than .007. While it’s surprising that one’s name has any such effect at all, it is reassuring (to me anyway!) that the name effect doesn’t really explain very much of the phenomenon of selecting a home state.
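    To make the back-of-envelope arithmetic explicit, here is a minimal sketch in Python; the uniform 1-in-50 baseline is the simplifying assumption made above, not the population-weighted rate a real analysis would use.

    ```python
    # Minimal sketch of the back-of-envelope calculation above.
    # Assumes a uniform 1-in-50 baseline, not real population-weighted migration rates.

    baseline = 1 / 50                         # chance of any given state under the null: 0.02
    lift = 0.36                               # "36% more likely" for perfect name matches

    with_name_effect = baseline * (1 + lift)  # 0.02 * 1.36 = 0.0272
    difference = with_name_effect - baseline  # absolute bump attributable to the name effect

    print(f"{with_name_effect:.4f}  {difference:.4f}")  # 0.0272  0.0072 (just over .007)
    ```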

    The second study that seems pretty troubling is the Johansson et al. study on choice blindness. This study is noteworthy because it is an honest-to-god experimental demonstration of confabulation. Most of the stock examples of confabulation are anecdotal, and the anecdotes don’t always live up to the billing (for a brief rant on this, see http://www.epl.web.arizona.edu/docs/Fiala_Nichols_Confabulation_Confidence_Introspection.pdf).
    In contrast to the anecdotal reports that fill the literature, Johansson and colleagues provide genuinely experimental evidence that people confabulate reasons for their choices. Moreover, in a subsequent paper, Johansson et al. use detailed linguistic analyses and find little evidence that people exhibit different linguistic behavior when they are tricked vs. when they are not tricked.

    But maybe things aren’t so bad. First, a minor quibble about their use of word frequency and latent semantic analysis. It is impressive that so few differences turn up on these measures, but a simpler and perhaps more sensitive method is reaction time: when people have been tricked, do they take longer to produce a contentful response about why they made the choice they did? Their analyses don’t tell us.

    A further problem with the choice-blindness work was suggested to me by Beth Campbell, a grad student at Arizona working on vision. It might be that the kind of judgment in the choice-blindness task – facial preferences – isn’t the sort of thing that involves decision making. That is, the factors that generate our facial preferences aren’t the kinds of things that are consciously available. This seems plausible and perhaps holds for other perceptual preferences as well. I prefer Sierra Nevada over most other beers, but for many pairwise comparisons I can’t articulate why. My taste buds deliver a preference without an accessible reason. The idea that perceptual systems deliver preferences without delivering reasons isn’t entirely counterintuitive. By contrast, it would be troubling indeed if I typically couldn’t tell you why I decided to get in my car, why I decided to put on my swimming suit, or why I decided to board the airplane at gate 37. Those kinds of choices are paradigmatically driven by reasons, but there is no experimental evidence of choice blindness in such cases. Yet.

  • I don’t even think I know how I’m *feeling* right now, so naturally I’m sympathetic to the worries John Doris raises here. (At least I think I am.) The trick is figuring out whether lapses of rational awareness and control are extensive enough that anyone should be worried about them. (Or at least, more worried than they already were, pace our raging appetites for food, sex, and the like.) As the other commentators have noted, you could take these sorts of studies to reveal at best minor, or not particularly threatening, limitations of rational control.

    Yet I’m inclined to read studies like these as “foot-in-the-door” examples pointing, along with a broad range of other research—including the situationist and heuristics and biases literatures, not to mention anthropological work—toward a view of human functioning that may not be so friendly to some traditional views of the rational animal. Perhaps it will turn out that healthy human functioning quite fittingly involves very substantial nonrational, even counter-rational, influences on cognition and behavior—and not just regarding pedestrian activities where (direct) rational control would obviously be inefficient, but in core areas of life, including setting our priorities. Depending on how the details work out, one could imagine such a picture of human life sitting rather poorly with standard issue Aristotelian, economic, and other views of the good life. (It perhaps bears mentioning that this hardly means denying an important role for rational processes in human life. The question is whether they play the roles required by our theories etc.)

    This is all highly speculative, of course. The point I want to make here is just this: it will probably be hard to assess the threat these studies pose to our self-image without developing, at least in part, a positive account of human functioning. Even a long list of provocative studies can usually be dismissed piecemeal. But such a dismissal is much harder to pull off if, looking at many lines of research across disciplines, we arrive at a plausible psychological framework on which those results should be *expected*. (And better yet, prove not to be a bug, but a feature—a way it is actually good for people to be.) I suspect that framework will, in good measure, vindicate Doris’s hunch. And it may not be all that depressing either.

  • admin

    The list of references, provided by John Doris:

    References
    • Bargh, J. A., and Uleman, J. S. (eds.). 1989. Unintended Thought. New York and London: The Guilford Press.
    • Bornstein, R. F., and Pittman, T. S. (eds.) 1992. Perception without Awareness: Cognitive, Clinical, and Social Perspectives.  New York and London: The Guilford Press.
    • Chaiken, S., and Trope, Y. (eds.) 1999. Dual-Process Theories in Social Psychology. New York and London: The Guilford Press.
    • Doris, J. M. In preparation. A Natural History of the Self. Oxford: Oxford University Press.
    • Doris, J. M. Forthcoming. “Skepticism about Persons.” Philosophical Issues 19: Metaethics.
    • Dutton, D. G., and Aron, A. P. 1974. “Some Evidence for Heightened Sexual Attraction Under Conditions of High Anxiety.” Journal of Personality and Social Psychology 30: 510-17.
    • Greene, J. Forthcoming. The Moral Brain . . . and What to Do about It. New York: Penguin.
    • Haidt, J. 2006. The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. New York: Basic Books.
    • Hassin, R. R., Uleman, J. S., and Bargh, J. A. (eds.). 2005. The New Unconscious.  New York: Oxford University Press.
    • Haybron, D. 2008. The Pursuit of Unhappiness: The Elusive Psychology of Well Being.  Oxford: Oxford University Press.
    • Johansson, P., Hall, L., Sikström, S., and Olsson, A. 2005. “Failure to Detect Mismatches Between Intention and Outcome in a Simple Decision Task.” Science 310: 116-119.
    • Johansson, P., Hall, L., Sikström, S., Tärning, B., and Lind, A. 2006. “How Something Can Be Said About Telling More Than We Can Know.” Consciousness and Cognition 15: 673-92.
    • Pelham, B. W., Mirenberg, M. C., and Jones, J. K. 2002. “Why Susie Sells Seashells by the Seashore: Implicit Egotism and Major Life Decisions.” Journal of Personality and Social Psychology 82: 469-487.
    • Stanovich, K. E. 2004. The Robot’s Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press.
    • Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge, MA: MIT Press.
    • Wilson, T. D. 2002. Strangers to Ourselves: Discovering the Adaptive Unconscious. New York: Belknap.
    • Wyer, R. S. (ed.) 1997. The Automaticity of Everyday Life (Advances in Social Cognition, Volume X).  Mahwah, NJ: Lawrence Erlbaum Associates. 

  • M.A. Curtis has written a whole book on why we do the things we do (though it doesn’t address moving to certain states!). It’s “Dominance & Delusion,” and it attempts to answer why we behave as we do, believe as we do, think as we do. We look for reasons — he adopts a dispassionate attitude in his quest for answers and finds that we try to find rationales for our behavior and excuse that which we see as problematic. Quite interesting — but with complex ideas presented simply and straightforwardly, so they’re easy to understand.

  • Forgot to mention that you can even read an excerpt of the book (“Dominance and Delusion”) to check it out.

  • Doris has an eye for the unsettling, even entertaining, features that challenge how we think of ourselves. Perhaps surprisingly, this helps sharpen the picture of what makes us human. Here, he presents some poignant examples from a disturbingly broad class of cases that collectively show that often the best explanation for some decision or action includes factors that an untutored subject would be unlikely to recognize or acknowledge.

    In my own case, I think I’m unlikely to include these other factors (does a potential new address match my name?) because I often take explaining my decisions and actions to involve providing reasons that make sense of what I do, where ‘making sense’ means showing why my move is reasonable, or rational, or just a good idea from where I stand when I do it. I say ‘often’ because I don’t always explain my actions this way. When it serves some purpose (if I want to distance myself from my actions), I might find myself explaining why I’ve done what I’ve done in terms of external influences and causes, rather than my reasons.

    What I find most interesting is why I think I know what I’m doing, even if I don’t have a good grip on the best explanation for why I’ve done it. Here’s a sketch of a simple story. Suppose I usually do have conscious access to reasons or information that is part of a decent explanation for why I’ve done what I’ve done — it makes reasonable sense of it. However, the evidence shows that in lots of cases there’s further information affecting my decisions and deeds that I don’t know causes me to do what I’ve done. Still, it’s part of, or all of, the best explanation for why I’ve done what I’ve done. And, sometimes, I’m not even potentially conscious of it as explaining or making intelligible what I’ve done. Though I can still come to know through science or close study that it impacts my decisions, the processes themselves are largely subconscious.

    So, I have conscious access to a serviceable candidate explanation for most of what I do and decide. But the evidence shows that often I lack access to the full or best explanation. The reason I think I know what I’m doing is that I mistake the absence of consciousness (of my action’s best or full explanation) for the absence of a further cause or better explanation of it (beyond that of which I am conscious). This is like looking around, seeing nothing, and concluding that there’s nothing around when, in fact, you’ve missed a critical detail. I mistake my lack of awareness of a further cause, reason, or explanation for the absence of a further cause, reason, or explanation. So I think I know what I’m doing. And often I’m incomplete, or just wrong.

    Why worry? Once psychology admits there’s more to the mind than we recognize on introspection, what’s most interesting about what makes us human could be what doesn’t come to mind.

  • Social psychology is often in the business of showing people the extent of their irrationality. Doris describes studies of people whose home states and professions bear names resembling their own and people who confabulate when having to justify choices that aren’t – unbeknownst to them – in fact theirs. Doris provides these examples as evidence for the “irrational animal” in each of us – in other words, “we don’t know why we’re doing what we’re doing”. I think two related questions are worth addressing in this discussion: 1) how would a “rational animal” answer the “why” question? and 2) are there different kinds (or levels) of answers to the why question depending on the “what” – what one is in fact doing?

    The first point is related to one raised by Dan Haybron: “it will probably be hard to assess the threat these studies pose to our self-image without developing, at least in part, a positive account of human functioning.” Before we dismiss answers as irrational we need an account of the rational answers. To take an example from evolutionary psychology: why won’t (most) people sleep with their siblings? Psychologist Jon Haidt uses this example to show the irrationality of people’s moral judgments. When he presents his subjects with a hypothetical scenario of brother-sister incest, subjects first object (“it’s wrong”), and then, when asked to explain why, appeal to the possibility of birth defects, psychological harm, and so on. Haidt points out that subjects persist in their objections even once these concerns have been addressed one by one (e.g., birth control, psychological resilience). Thus the subjects are said to be “morally dumbfounded.” But what would it look like not to be morally dumbfounded? How would a “rational animal” have answered? What are the right answers, for example, to other questions about basic moral behavior – why it’s good to behave fairly, or why it’s not good to hurt people gratuitously? Before judging whether the answers are good or bad, there must be some account of the good answers and the bad answers, whether they are in folk-psychological terms of beliefs and desires, evolutionary psychological terms, etc.

    The second point is related to one made by Shaun Nichols: “It might be that the kind of judgment in the choice-blindness task – facial preferences – isn’t the sort of thing that involves decision making.” Indeed, it seems that “the sort of thing” matters when it comes to explanation. Mate selection, incest aversion, caring for one’s children – these are behaviors that may be understood and explained on multiple levels. A mother cares for her young because she loves them (belief-desire terms) and because surviving and thriving offspring will allow her genes to propagate (evolutionary psychological terms) – and undoubtedly for numerous other reasons. Also, depending on how behavior is carved up (e.g., caring for one’s offspring versus feeding one’s baby this kind of formula over that), it may make more pragmatic sense to prefer one kind of explanation over another. But again, a detailed account of what counts as a rational explanation – and in what terms – helps in determining how and when people are behaving as irrational animals.

  • I suppose we all have our favorite studies showing the subtle power of framing effects. Mine is a willingness-to-pay study of irradiated foods by Hayes and colleagues.* They seated paid volunteers at lunchtime tables and gave them each a pork sandwich. In the middle of each table was one additional food item; they were told it was an irradiated pork sandwich. One group of participants was given a positive frame. Irradiation of pork, they were told, is a safe procedure used in many countries for many years that results in a 10,000-fold reduction in Trichinella organisms in the meat.

    A second group was given a negative frame: the meat has received the rads equivalent of 30 million chest x-rays. Predictably, almost everyone in the first group wanted the irradiated sandwich. They would actually bid against each other, usually up to 20 cents, for the right to exchange their ordinary sandwich for the safer one. Predictably, the second group would not volunteer two cents for the cancerous slab.

    Surprising–to me anyway–was the result Hayes got when he subjected a third group to both treatments. I’d have expected a bimodal distribution, with some folks biting while others turned up their noses. Wouldn’t you think at least some people would see through the “30 million chest x-rays” rhetoric and go for the reduced Trichinella story? To the contrary, no individuals in the group receiving both the attractive and repulsive descriptions wanted to bid. Subjects receiving both frames exhibited instead exactly the same behavior as those receiving only the negative frame.

    Shouldn’t at least some rational actors hearing a negative description of an ordinary commercial foodstuff be able to sift the wheat from the chaff? Especially when they’ve also been presented with a positive description? Shouldn’t a few of us be able to see through anti-science rhetoric, distinguish relative degrees of plausibility, and occasionally reach the ‘right’ conclusion? Alas, it appears that this almost never happens.

    That’s a sobering result. How deflationary is it for belief in things like rationality and free will? Doris knows these tricky waters better than most. I’ll be looking for guidance from his natural history of the self.

    * D. Hayes, J. Fox, J. Shogren. Consumer preferences for food irradiation: how favorable and unfavorable descriptions affect preferences for irradiated pork in experimental auctions, J. Risk Uncertainty 24 (2002): 75-95.

  • Josh Greene

    Doris’ provocative post raises many questions. In particular, I’m curious about the boundary conditions of his thesis: How much does one have to know about what one is doing in order to know what one is doing? If you buy your sister a birthday gift, does knowing what you are (really) doing require that you understand that you are helping spread copies of your genes? If your sarcasm is really a defense mechanism, do you have to know that in order to know what you just did with your words? Where do our reasons end and the reasons (or “reasons”) generated by forces external to us begin?

  • Eddy Nahmias

    Josh summarizes nicely some of the points in the previous posts. It seems too demanding to say that we don’t know what we are doing or that we are not acting on our reasons just because we are influenced in some way by situational (or emotional or whatever) factors whose influence we are unaware of. I think two related counterfactuals become relevant to help understand these issues:

    1. Had the unknown factor not existed, how would I have acted differently (e.g., in what ways would we want to say I performed a different action)?
    2. Had I been aware of the influence of the factor, would I have attempted to avoid its influence on me? (or a different question: Were I to learn about the influence the factor had on me, would I accept that influence as legitimate?)

    It is very difficult to see how we could empirically examine these questions, but there are ways to get at them.

    For some of the relevant studies, it seems that even if the answer to 1 is that I would have acted differently, the answer to 2 makes it clear it doesn’t matter much (“Oh, I wouldn’t have picked the right pair of stockings–so what? My goal was to pick the pair I liked best, and I did.”) For others, it is hard to answer question 1 (how many of the Virginias who moved to Virginia would not have moved there if they’d been named Georgia? And would Virginia say she didn’t want to be influenced by this whimsical factor?).

    But for other studies, the worries run deeper (personally, I tie these worries to the free will debate, since it seems we have diminished freedom to the extent that we are influenced to make decisions based on factors we would not accept as good reasons). It seems clear that some people who would otherwise be helpful to those in need are not helpful because of the unrecognized influences of unhelpful bystanders or of being in a hurry or of smelling something gross, *and* it seems that most of us would not accept these influences as legitimate (we would counteract their influence were we able). Of course, we may in fact be able to counteract some of these influences the more we learn about them, though the jury is still out on the degree to which such knowledge “frees us.”