Do You Know What You’re Doing?: follow-up

Thanks to everyone for their challenging remarks. This post contains such responses as I’ve been able to make to the posted comments; I didn’t take them up in the order posted, so I’ve italicized commenters’ names to make them easier to find.

Bommarito (like Olin) seems to find the experimental results unsurprising, given the commonplace that most (or all) people are not “Perfectly Rational Agents.”  Hard to argue this; if you’re not surprised, you’re not surprised.  But like Vazire, I am surprised, and would not have predicted (sans familiarity with social psychology) phenomena like the “Georgia effect.”  (Much as people in the early 1960s did not predict the shocking rates of obedience in Milgram’s studies [see Doris 2002: 49], however familiar the results may seem today.)  I’m surprised because I find remarkable not so much the brute fact that people diverge from ideals of rationality, but the ways in which they do so.  As Haybron observes, it’s not astonishing when people are moved in ways “contrary to reason” by sex, drugs, and rock’n’roll.  Such considerations seem to tap motives that are both powerful and of a sort one is willing to count as compelling reasons in some, even many, circumstances.  (Perfectly reasonable to desire sex, one might think, so long as it is with the right person, in the right place, of the right sort, and so on.)  But the experimentally adduced motives don’t look like that:  they aren’t intuitively plausible candidates for the honorific reason.  So it seems that we have learned something new from the experiments: not that we are imperfectly rational agents, but that our divergence from standards of rationality may be effected by processes that are both unexpected and difficult to detect (quite unlike sex, drugs, and rock’n’roll).

Smyth is quite right to note the importance of the reasons/causes distinction. In fact, the challenge does not elide this distinction, but exploits it.  For what the studies point to is a class of motives that would not be counted as reasons by the actor, were she made aware of them.  Behaviors so described are plausibly thought of as irrational (or at least arational), so the studies identify a difficulty about human rationality.

None of this is to deny that there are knotty questions about rationality.  As Young observes (with Nichols), for many behaviors it is quite unclear what a “rational explanation” would look like, or even that such an explanation is required.  Suppose I don’t have much that’s articulate to offer when declining to partake in your soufflé of yak feces.  It’s “just yucky,” I say, much as I might say, regarding Haidt’s infamous example of sibling incest, it’s “just wrong.”  To lack a rationalization is not necessarily to act irrationally; to establish the latter, one seems to need a theory of rationality.

Yet it may be possible to motivate our difficulty on the theoretical cheap:  I’ve helped myself to a thinly subjective notion of rationality, in attempting to identify a class of cases where the operative motive is unlikely to be considered normatively compelling from the actor’s perspective.  Theoretical uncertainty notwithstanding, these cases look to be at least prima facie examples of irrationality, and I’ve proceeded on the assumption that such instances are enough to get the problematic off the ground.  Of course, much depends on the particulars of each case — if you like, the etiology of the judgments and actions at issue.  Some etiologies may be vindicating, and some debunking; the challenge intimated by the experimental literature is the prospect of surprising numbers of surprising etiologies that the actors themselves would regard as debunking. In the end, we’ll need substantive argument on particular normative issues, and (perhaps) more general argument on the theory of reasons (for example, on the authority of the subjective perspective).  But I think we can make a beginning without resolution of all the theoretical issues – as is usually the case.

In effect, Olin doubts that I’ve identified debunking etiologies: she is inclined to deny that the phenomena in question are instances of irrationality.  For example, the lovers on a bridge study does not show that the men asked out someone they found unattractive, which would look irrational; as Olin has it, the study identifies a causal factor plausibly implicated in the men finding the woman attractive, which is in turn implicated in their asking her out.  So on Olin’s reading, it’s not “they asked her out because they were frightened on the bridge,” but “they asked her out because they found her attractive.”  To put it more perspicuously, on Olin’s view it’s both, but so long as the second holds, there’s nothing much funny going on.
There’s something to this:  if I find someone attractive because of their clothes, or because of the champagne and oysters, I find them attractive all the same.   To say the heart (or other bits of anatomy) has its reasons is to say, in a sense, that the heart needn’t have reasons at all.  But I’m not sure the bridge case can be assimilated to the champagne and oysters case.   For I think that in evaluating one’s attractions (or career choices, etc.), the psychological history matters, and not all histories are created equal.  For matters of the heart, think of whatever sinister Freudian story you want.  Or try this: the philosophically ubiquitous mad scientist mucks about in your brain, making someone you previously found quite disgusting seem quite irresistibly attractive.  If you found out, how would you feel?  You might not care, and you might even decide to “go with it.”  But you might just as easily feel duped, or feel like you’d been manipulated.  We might say that your unlikely shift in attraction was not properly based on who you are, but on who the mad scientist is.  Described this way, the mad scientist’s science project seems creepy, and I’d have the same thought if I learned, say, that my inclinations of the heart were based on a fear of heights (so that I always date people who work on the ground floor).  And to observe, as I suspect Olin would, that attractions often are capricious in such ways, just is to make a point akin to the one I want to make:  there’s likely more caprice, and less reason, in your decisions than you might have thought.

Nichols and I have gone ‘round about the Georgia effect many times, and here he seems to have upset my wagon with some straightforward calculations.  There is a Georgia effect, it turns out, but it looks to be tiny, increasing the likelihood of moving to Georgia by a nearly invisible .007 percent.  (Nichols’s calculations assume an equal likelihood of moving to each state.  In fact, number 9 Georgia has something like 20 times the population of number 50 Wyoming, but as Nichols observes, this does not substantially affect his point.)   In making this point, Nichols raises the issue of effect size, an extremely difficult and contentious issue in the methodology of social science.  As a philosopher, I’m in over my head here, but I’ll chance a few remarks.

The people who brought us the Georgia effect, Pelham and colleagues (2002), also found people were about 15% more likely than expected by chance to reside in cities with names resembling their surnames.  Once again, this might seem a ripe target for Nichols’ arithmetic, but as Pelham et al. (2002: 473) observe, the magnitude of this effect is about 3 times the effect size corresponding to the long term house advantage in roulette – and the house comes out ahead.

In a suggestive discussion, Meyer and colleagues (2001) report effect sizes in medicine in the form of correlation coefficients:

Aspirin consumption/reduced risk of heart attack: .02
Chemotherapy/surviving breast cancer: .03
Ever smoking/lung cancer within 25 years: .08

These numbers look puny too.  But it’s arguable – very arguable – that these effect sizes have clear practical implications: take the aspirin, endure the chemo, and don’t light up. At this point, Nichols may want to say that these effects do not allow confident conclusions about particular outcomes for particular individuals, such as whether Sweet Georgia Brown will move to Georgia.  (Full disclosure: I’ve elsewhere registered a similar complaint about considerably larger effect sizes in personality psychology.) Quite true.  But wait.  Aspirin consumption (and “Georgia”) must be making a difference in some individual cases; otherwise there would not be a significant correlation.  (Given the multitude of operative influences in any case, we cannot confidently say for whom the difference was made, but this sort of uncertainty is actually part of the problem, as we’ll now see.)
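One conventional way to see how a “puny” correlation translates into individual cases is Rosenthal and Rubin’s Binomial Effect Size Display, which reads a correlation r as a difference in outcome rates of 50% + r/2 versus 50% − r/2 between two groups.  The following sketch applies that convention to the Meyer et al. figures; the BESD framing is my illustration, not something the original discussion invokes:

```python
def besd(r):
    """Binomial Effect Size Display (Rosenthal & Rubin): a correlation r
    corresponds to outcome rates of 50% + r/2 and 50% - r/2 in two groups."""
    return 0.5 + r / 2, 0.5 - r / 2

for label, r in [("aspirin / heart attack", 0.02),
                 ("chemotherapy / breast cancer", 0.03),
                 ("smoking / lung cancer", 0.08)]:
    treated, untreated = besd(r)
    # the difference in outcomes implied by r, expressed per 10,000 people
    print(f"{label}: {treated:.1%} vs {untreated:.1%}"
          f" ({round(10000 * r)} cases per 10,000)")
```

On this reading, even an r of .02 marks a difference of 200 outcomes in every 10,000 people, which is the sense in which the aspirin must be “making a difference in some individual cases.”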

Like Haybron, I understand phenomena like the Georgia effect as a “foot in the door”; once we see that something like that can make a difference, we’re bound to admit there might be lots of such goofy influences, as indeed an enormous literature intimates.  And while each individual influence may be small (as in influences on our health), the additive effect may, for all we know, be quite potent.  If such additive effects may be plaguing our decisions, is rationality in doubt? Is there a good way to rule their presence out?

We can sharpen the difficulty by considering some remarks by Harman, who rightly observes that the studies are compatible with people exercising “considerable rational control.”  Obviously, the studies do not show that self-ignorance is total: presumably the lovers on the bridge believed (rightly) that they were asking a woman out, and not a rutabaga.   As Greene intimates, a critical part of the question seems to be how much self-awareness is required for rational agency.  To put it another way, is self-ignorance limited to minor causal factors, or does it extend to major factors (or constellations of factors) that substantially determine behavioral outcomes?

I think self-ignorance extends to major factors.  Worse, I think some of these major factors are ones that those influenced would repudiate. Finally, I think such phenomena are widespread.

This is speculative.  But there is trouble enough without definitively establishing the empirical claims.  I suppose Harman would grant, as I think he should, that there are at least some cases where the kind of influences we are worrying about determine behavioral outcomes. And whatever we think about particular cases, I suppose Harman would grant, as I think he should, that there are some cases of such influence that should be counted as instances of irrationality, whether the actor has partial self-awareness or not.  We can argue (as I did with Olin) about whether individual cases look like irrationality; if you don’t like the ones I’ve used, there are many others, such as Comstock’s amusing example of the cancerous pork slabs.  But all that’s needed here is the highly plausible supposition that there are some such cases.

For to know that a person is acting rationally, one needs to rule out the presence of such factors – if you like, call these irrational influences defeaters.   But how is such a ruling out to be accomplished?  The answer, I think, is obscure; if I’m right about studies like the choice blindness experiments, it would be a mistake to appeal to something like introspection, or self-reports of experience.

The envisaged argument takes a familiar skeptical form:  a skeptical hypothesis is one that cannot be “ruled out,” and would falsify some belief, or category of beliefs, if true: if I cannot rule out the possibility of an epistemically malicious demon, or that I am an envatted brain, or that I’m frolicking about in a Matrix, I haven’t knowledge of the external world.  The present skepticism maintains that for any putative instance of something like rational agency, we cannot rule out the possibility of an alternate explanation of behavior that does not reference the materials of rational agency.  If we cannot rule out these hypotheses, we do not know that there are instances of rational agency.  Like other skeptical arguments, the argument is not that the phenomenon in question does not exist, but rather that there are not strong enough grounds for believing it does. (This argument is a much-compressed version of that appearing in Doris, forthcoming.) So far as I see, this argument has bite even if we concede to Harman, as I think we should, that the empirical record is compatible with considerable self-awareness.

Like O’Callaghan, what I find striking is the facility with which people produce “serviceable candidate explanations” for their behavior, even when their information, including about their own psychological states, is seriously inaccurate or incomplete.  I suspect that the kind of cases that strike Nichols and Young – call them instances of “rational dumbfounding” – are in fact pretty unusual.  Speaking for myself, I don’t often hear people say, “I have no idea why I did that.”  This is perhaps unsurprising, since such utterances will often not play very well.  While it may be that we’re genuinely clueless about matters of mate choice, for example, I doubt this is the understanding we typically go forward with. (Perhaps the married members of the audience will help us with an informal experiment:  go home and announce to your spouse,  “You know honey, I have no idea why I married you.”  I suspect the results, if listeners took the statement literally, would be illuminating.)  I’d be inclined to put O’Callaghan’s point like this:  what is most unsettling is just how well people can navigate the space of reasons, when this space is only tenuously related to the space of motives.   How does the discourse of reason work, then, and what purpose(s) does it serve?

Like Vazire, I want to find a place for personal self-direction that may fairly be considered rational agency, meaning I want to resist the skeptical argument.  I don’t suppose that this resistance will in the end amount to an eradication of skeptical anxiety (agency may well be harder to come by than one might suppose in a state of pre-scientific bliss), but I do believe substantial amelioration is possible.  Also like Vazire, I believe there are difficult empirical questions here, ones that are not easy to systematically address; given the presence of phenomena like confabulation, what should be counted as evidence of genuine personal control?   And like Haybron, I believe the questions will become more tractable with the aid of the right sort of theory building.

I propose we undertake this theory building from the bottom up.  Start with an empirical question: how do human organisms regulate their behavior?  I expect that there will be more than one answer to this question: human organisms are likely possessed of multiple control systems.  Then we might ask, in a broadly pre-theoretical way, which – if any — control systems regulate behavior in ways that are apt for the attribution of agency.  On my view, this is a broadly Strawsonian exercise:  which forms of behavioral control appear to support behavior appropriately subject to “reactive attitudes” like anger and admiration?  At this point, we might indulge in some incipient theory building, and try to make some generalizations about the forms of control properly associated with reactive attitudes.  For example, we might find that many human achievements that merit admiration – keeping a marriage together, say, or writing a book – emerge socially and over time, and we might therefore want to focus some of our attention on socially and temporally extended processes, rather than on individual organisms and individual actions (as much philosophical action theory has done).  We’d then want to pursue longitudinal research on groups, and what we learn here might further refine both our pre-theoretical judgments and theoretical commitments. (I understand Vazire’s lab is now initiating such a project.)

My methodological proposal may seem both numbingly familiar and maddeningly vague.  Fair enough: it is, very substantially, an unformed riff on the Goodman-Rawls account of reflective equilibrium.  But notice that it contrasts sharply with a top-down approach to agency, such as that intimated by Bommarito’s comment, where one first develops, in a substantially a priori fashion, an ideal notion of agency (the “Perfectly Rational Agent”) and afterwards looks to the empirical literature to determine the extent to which actual human behavior approximates or diverges from this standard.  I won’t defend my methodology here.  I’ll only observe that in adopting it, one might reach an understanding of rational agency that is different from familiar philosophical understandings – and more human.

— John Doris

PS: I’m not familiar with the work of M.A. Curtis, which Lizzie commends.  Based on the linked excerpt, it represents a project quite different from, and much more ambitious than, mine.

References

  • Doris, J. M. 2002. Lack of Character: Personality and Moral Behavior.  New York: Cambridge University Press.
  • Doris, J. M. In preparation. A Natural History of the Self. Oxford: Oxford University Press.
  • Doris, J. M. Forthcoming. “Skepticism about Persons.” Philosophical Issues 19: Metaethics.
  • Meyer, G. J., Finn, S. E., Eyde, L., Kay, G. G., Moreland, K. L., Dies, R. R., et al. 2001. “Psychological Testing and Psychological Assessment: A Review of Evidence and Issues.” American Psychologist 56: 128–165.
  • Pelham, B. W., Mirenberg, M. C., and Jones, J. T. 2002. “Why Susie Sells Seashells by the Seashore: Implicit Egotism and Major Life Decisions.” Journal of Personality and Social Psychology 82: 469–487.
