Knowledge of our own thoughts is just as interpretive as knowledge of the thoughts of others

Philosophers have traditionally assumed that knowledge of our own thoughts is special. Descartes famously believed that knowledge of our current thoughts is infallible. He also believed that those thoughts themselves are self-presenting, so that whenever one entertains a thought, one is capable of infallible knowledge of it. Many figures in the history of philosophy have shared these beliefs (including Aristotle, Augustine, and Locke). This is no longer true today. Most now accept that it is possible to be mistaken about one’s current thoughts, and that many such thoughts occur unconsciously, in ways that aren’t available to one. Nevertheless, almost everyone in the field believes that our knowledge of some subset of each of the main kinds of thought (judgment, decision, desire, and so on) is both authoritative and available to us through some form of privileged access. It is believed that our knowledge of these thoughts is much more certain than the knowledge that we have of the thoughts of other people, and normally cannot be challenged from a third-person perspective. Moreover, the mode in which we acquire this knowledge is unavailable to others, even in principle. Philosophical discussions of self-knowledge typically start from a statement of these assumptions, and proceed to develop theories that purport to explain them.

Professor Peter Carruthers

Carruthers (2011) argues that these assumptions, and the philosophical theories based on them, are directly challenged by an extensive range of evidence from across cognitive science. In their place is proposed the Interpretive Sensory-Access (ISA) theory of self-knowledge. This holds that there is just a single mental faculty (the “mindreading” faculty) that is responsible for all our knowledge of propositional attitudes, whether those thoughts are our own or other people’s; it claims that the faculty in question has only sensory access to its domain, utilizing “globally broadcast” attended sensory outputs (including inner speech and visual and other forms of imagery); and it claims that our access to all attitudes (whether our own or other people’s) is equally interpretive in character. One outcome, it is argued, is that there are hardly any kinds of conscious propositional attitude; another is that there is no such thing as conscious agency.

Notice that the philosophical assumptions described above are inherently contrastive in nature. They presume that one’s knowledge of one’s own thoughts is very different in kind from one’s knowledge of the thoughts of other people. (The ISA theory, in contrast, claims that these are very similar, differing only in that there are some forms of sensory information available for interpretation in the first person that are not available in the third, such as our own inner speech and visual imagery.) Those assumptions therefore cannot be defended from empirical attack in the way that philosophers of perception defend their direct-perception accounts from the findings of vision science. The latter group say that they are only intending to make claims about the personal level, and make no commitments regarding the subpersonal processes that support our direct perceptual access to the world. A similar move is not available in the domain of self-knowledge, since what the data show is that knowledge of our own thoughts is not different in kind from knowledge of the thoughts of other people.

The relevant evidence is of many different forms, ranging from experimental studies in social psychology demonstrating the ease with which people’s reports of their current attitudes can be “pushed around” by minor contextual modifications, through introspection-sampling studies, studies of people’s metacognitive abilities to monitor and control their own learning and reasoning, and studies of systematic failures of self-knowledge and/or other-knowledge in autism and schizophrenia, to brain-imaging data for self versus other tasks. Some of these forms of evidence count directly against some philosophical theories; some count against all or almost all. The data are reviewed in detail, and their significance discussed, in Carruthers (2011).

Perhaps the most directly relevant set of data consists of numerous psychological studies demonstrating people’s willingness to confabulate about their own current or very recent thoughts, attributing thoughts to themselves that we have every reason to believe they never entertained, and making errors in self-attribution that directly parallel the errors we make in attributing thoughts to other people. These studies show that, at least in these cases, people are using the same mindreading faculty that they employ when attributing thoughts to other people, relying on sensory forms of evidence that stand in need of highly fallible interpretation.

Recent defenders of the philosophical status quo who know about some of these data admit that they are forced to become dual-method theorists as a result (Nichols and Stich, 2003; Goldman, 2006). That is, they are forced to admit that people sometimes employ self-directed mindreading when attributing thoughts to themselves (hence the instances of confabulation), while on other occasions they have knowledge of their own thoughts that is authoritative and privileged. The main problem for dual-method theories, however, is to explain the patterning in the data. For this, they need to provide some principled account of the circumstances in which people access their thoughts directly and the circumstances in which they rely on self-directed mindreading. No such account has yet been provided that can accommodate all the data. Indeed, many instances of confabulation concern perfectly ordinary everyday thoughts occurring in circumstances where people should have been paying attention to their thoughts. In these cases one would expect people to have had authoritative access to their thoughts, if such a thing is ever possible.

Let me illustrate these points by discussing one body of data deriving from the “dissonance” tradition in social psychology, where hundreds of supporting references could be provided. In a typical experiment, subjects will be induced to write an essay arguing for a conclusion contrary to what they believe. In one condition, subjects may be led to think that they have little choice about doing so (for example, the experimenter might emphasize that they have previously agreed to participate in the experiment). In the other condition, subjects are led to think that they have freely chosen to write the essay (perhaps by signing a consent form on top of the essay-sheet that reads, “I freely agree to participate in this experiment”).

The normal finding in such experiments is that subjects in the free-choice condition (and only in the free-choice condition) change their reported attitudes on the subject-matter of the essay. And this happens although there are typically no differences in the quality of the arguments produced in the two conditions. If subjects in the free-choice condition were previously strongly opposed to a rise in university tuition costs, for example (whether as measured in an unrelated survey some weeks before the experiment, or by assumption, since almost all people in the subject pool have similar attitudes), then following the experiment they might express only weak opposition or perhaps even positive support for the proposed increase. Such effects are generally robust and highly significant, even on matters that the subjects rate as important to them, and the changes in reported attitude are often quite large.

We know that freely undertaken counter-attitudinal advocacy gives rise to negatively valenced states of arousal, which dissipate as soon as subjects express an attitude that is more consistent with their advocacy (Elliot and Devine, 1994). Indeed, even pro-attitudinal advocacy will give rise to changes in expressed attitude in circumstances where subjects are induced to believe that their honest advocacy will turn out to have bad consequences (Scher and Cooper, 1989). And in circumstances where subjects are offered a variety of methods for making themselves feel better about what they have done (an attitude questionnaire, a question about their degree of responsibility, and a question about the importance of the topic), they will use whatever method is offered to them first (Simon et al., 1995; Gosling et al., 2006). For example, if asked first about the importance of the question of tuition raises, they will say that it is of little importance (even though in questionnaires administered a few weeks previously they rated it as of high importance), thereafter going on to express an unchanged degree of opposition to the change and rating themselves as highly responsible for what they did.

The best explanation of these patterns of results is that subjects’ mindreading systems automatically appraise them as having freely chosen to do something bad, resulting in negative affect. Then, when confronted with the attitude questionnaire, they rehearse various possible responses, responding affectively to each in the manner of Damasio (1994). They select the one that “feels right” in the circumstances, which is one that provides an appraisal of their actions as being significantly less bad. And as a result of making that selection, their bad feelings go away. For example, saying (and hearing themselves say) that they do not oppose a raise in tuition (contrary to what they believe) enables their earlier actions to be appraised as not bad, and as a result they cease to feel bad. In contrast, it seems quite unlikely that subjects should really be changing their minds prior to selecting an answer on the questionnaire, with their novel belief then being available to be authoritatively reported. For we know for sure that they do not change their beliefs unless offered the chance to express them, and there is no plausible mechanism via which a question about one’s beliefs should lead to the formation of a new belief in these circumstances (which can then be veridically reported).

Such phenomena are fully consistent with the ISA theory of self-knowledge. Indeed, they are predictable from it when combined with independently warranted psychological theories (such as the use of mental rehearsal and prospective affect in action selection; Damasio, 1994; Gilbert and Wilson, 2007). But they are deeply problematic for most standard philosophical accounts. For one would think that a direct question about one’s beliefs (e.g. about the badness of a tuition raise, or about the importance of the issue) would have the effect of activating the relevant belief from memory. And there seems no reason why a judgment of this sort should remain unconscious or be otherwise inaccessible to the subject. But if subjects had authoritative access to this activated belief, then it would be mysterious how they could at the same time express an inconsistent belief and make themselves feel better by doing so. For if they say one thing while being aware that they think something else, then they should be aware of themselves as lying. And that ought to make them feel worse, not better.

These counter-attitudinal essay-writing data can be combined with many other studies of confabulation to support the ISA account: subjects’ mindreading systems monitor and interpret their own behavior, both overt (such as an episode of essay writing) and covert (such as sentences rehearsed in inner speech), much as the overt behavior of others is monitored and interpreted. And this is the only mode of access that people have to their own thoughts. For if they also had privileged and authoritative access to some of their own thoughts, then the data would not display the patterning that they do.

Why, then, do people across time and place have such a powerful intuition of infallible (or at least authoritative) access to their own thoughts? And why are they inclined to believe that their thoughts are self-presenting (or at least accessible in a privileged way)? One answer would be that people have these intuitions because they are true. Compare the universality of believing that water is wet: people believe this because water is wet, and because everyone has access to plenty of data to indicate that it is. Likewise, then, there might be voluminous and easily available evidence that supports the existence of direct access to our own attitudes. But the only such evidence (to the extent that it exists at all) is the general reliability of people’s reports of their own attitudes, which often turn out to be consistent with our observations of their behavior. But this can’t begin to support the claims that error and ignorance with respect to one’s own mental states are impossible. Nor does it close off the possibility of skepticism about self-knowledge. (Compare the fact that visual perception, too, is generally reliable; yet skepticism in this domain has been common, whereas no philosophers have ever been skeptics about knowledge of their own thoughts.) And neither, even, does it support the idea that our access to our own mental states is somehow privileged and especially authoritative. All it supports is general reliability.

A better explanation of the universality of our intuitions about self-knowledge is that they derive from a pair of inference-rules that are built into the structure of the mindreading faculty itself (whether innately or by learning):

  1. One thinks that one is in mental state M → One is in mental state M.
  2. One thinks that one is not in mental state M → One is not in mental state M.
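
These two rules can be restated schematically as a pair of conditionals (the notation here is mine, not Carruthers’s): read B_self[…] as “one thinks that …” and M(self) as “one is in mental state M”:

```latex
% Transparency rules (1) and (2), rendered as conditionals.
% B_self[...] abbreviates "one thinks that ...";
% M(self) abbreviates "one is in mental state M".
\begin{align}
B_{\mathrm{self}}\big[\,M(\mathrm{self})\,\big]
  &\;\rightarrow\; M(\mathrm{self}) \tag{1}\\
B_{\mathrm{self}}\big[\,\neg M(\mathrm{self})\,\big]
  &\;\rightarrow\; \neg M(\mathrm{self}) \tag{2}
\end{align}
```

Both conditionals run directly from self-ascription to fact, with no evidential middle term: that is why applying them with oneself as subject leaves no room for error or ignorance about one’s own attitudes.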

Carruthers (2011) argues on reverse-engineering grounds that just these rules are likely to be built into the mindreading system, providing heuristic short-cuts in the process of behavior interpretation (especially behavior in which subjects ascribe mental attitudes to themselves). As a result, if a question is raised about the provenance of a belief about one’s own attitudes, or about the possibilities of mistake or ignorance, then one will initially be baffled. For an application of the inference rules (1) and (2) with oneself as subject leaves no room for such possibilities. It will require systematic reflection on the significance of such phenomena as self-deception, or the findings of cognitive science, for one to realize that mistakes about one’s own attitudes are possible, and that some of one’s attitudes might be inaccessible to one.

In addition, these rules also function to “short-circuit” processes that might otherwise lead one to be aware of ambiguities in one’s own inner speech. This means that we are never confronted by the manifestly interpretive character of our access to the thoughts that underlie our own speech. Or so I will now suggest.

When interpreting the speech of another person, the mindreading system is likely to work in conjunction with the language faculty to arrive at a swift “first pass” representation of the attitude expressed, relying on syntax, prosody, and salient features of the conversational context. But it is part of the mindreading system’s working model of the mind and its relationship to behavior that people can be overtly deceptive, and that their actions can in various ways disguise their real motives and intentions. One would expect, then, that whenever the degree of support for the initial interpretation is lower than normal, or there is a competing interpretation in play that has at least some degree of support, or the potential costs of misunderstanding are much higher than normal, a signal would be sent to executive systems to “slow down” and issue inquiries more widely before a conclusion is reached. In these cases we become aware of ourselves as interpreting the speech of others.

When interpreting one’s own speech, however, the mindreading system is likely to operate rather differently. For possession of inference rules (1) and (2) means that it implicitly models itself as having direct access to the mind within which it is lodged. Moreover, even among people who know about cognitive science, and/or who believe that self-deception sometimes occurs, such ideas will rarely be active and salient in most normal contexts. Hence it is likely that once an initial “first pass” interpretation of one’s own speech has been reached, no further inquiries are undertaken, and no signals are sent to executive systems triggering a “stop and reflect” mode of processing. As a result, the attitude that initially seems to be expressed is the attitude that one attributes to oneself, not only by default but almost invariably. So although the process of extracting attitudes from speech is just as interpretive in one’s own case as it is in connection with other people, it is rarely if ever consciously interpretive.

Carruthers (2011) argues that when the full range of evidence is considered, the Interpretive Sensory-Access (ISA) theory emerges as significantly better supported than any of its more traditional philosophical rivals. If so, then philosophers and others need to begin exploring what follows from this, and how related topics are affected. As noted earlier, one outcome is said to be that there are hardly any types of conscious attitude; another is that there is no such thing as conscious agency. What this means for our conception of ourselves as subjects, and for our beliefs about our moral responsibility, is a matter requiring urgent attention.



  • Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford University Press.
  • Damasio, A. (1994). Descartes’ Error. Papermac.
  • Elliot, A. and Devine, P. (1994). On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort. Journal of Personality and Social Psychology, 67, 382-394.
  • Gilbert, D. and Wilson, T. (2007). Prospection: Experiencing the future. Science, 317, 1351-1354.
  • Goldman, A. (2006). Simulating Minds. Oxford University Press.
  • Gosling, P., Denizeau, M., and Oberlé, D. (2006). Denial of responsibility: A new mode of dissonance reduction. Journal of Personality and Social Psychology, 90, 722-733.
  • Nichols, S. and Stich, S. (2003). Mindreading. Oxford University Press.
  • Scher, S. and Cooper, J. (1989). Motivational basis of dissonance: The singular role of behavioral consequences. Journal of Personality and Social Psychology, 56, 899-906.
  • Simon, L., Greenberg, J., and Brehm, J. (1995). Trivialization: The forgotten mode of dissonance reduction. Journal of Personality and Social Psychology, 68, 247-260.

14 comments to Knowledge of our own thoughts is just as interpretive as knowledge of the thoughts of others

  • Peter Carruthers’s objections to the traditional notion of self-knowledge are intriguing, and those of us who think that there is something right about that notion surely need to take Carruthers’s challenge seriously. What I mean here by the ‘traditional’ notion of self-knowledge is the notion that our epistemic access to our own thoughts is special in two of the senses mentioned by Carruthers: It is ‘authoritative’ in that it is more certain than the knowledge that we have of the thoughts of other people, and it is ‘privileged’ in that it is not a type of access that others can have to our own thoughts. According to Carruthers, there is an extensive range of evidence from across cognitive science which suggests that this conception of self-knowledge is wrong. The best explanation of the data, Carruthers claims, is that our epistemic access to our own thoughts is, like our access to other people’s thoughts, sensory and interpretive in nature. The only difference is that, for each subject, there is a kind of sensory information that is only available for interpretation to the subject herself, namely, her own visual imagery and inner speech.

    Before we have a look at some of the relevant empirical evidence reported by Carruthers, notice that there is a prima facie difficulty with the idea that our epistemic access to our own thoughts is interpretive. Suppose, for example, that I have sensory access to an episode of inner speech in which I say “university fees should not be raised.” What kind of evidence would warrant the conclusion that I believe that university fees should not be raised? As far as I can see, the reason why my self-attribution of that belief counts as knowledge must be that I also know, or am at least justified in believing, that, for any proposition P, I usually believe that P when I say “P” in inner speech. But it is hard to see how I could be justified in believing that unless, in the past, I had often been able to confirm that I did believe a proposition when I, so to speak, uttered it in inner speech. And how could I confirm such an alignment of my beliefs and my inner speech acts unless I had some non-inferential epistemic access to my beliefs? Thus, on the face of it, it seems that the idea that our epistemic access to our thoughts is interpretive presupposes a notion of self-knowledge that resembles the traditional notion. Now, a natural thought in reaction to this worry is that this picture of interpretive self-knowledge (namely, interpretive self-knowledge as inferential self-knowledge) does not really do justice to what Carruthers means by ‘interpretive.’ I believe that this is a thought worth exploring, and I will attempt to do so in what follows.

    According to Carruthers, the most relevant data suggesting that our knowledge of our own thoughts is interpretive comes from psychological studies that show that we often confabulate about our own thoughts, which is in tension with the traditional notion of self-knowledge as authoritative. Furthermore, these studies are also meant to show that the mistakes which we make while self-attributing thoughts are of the same kind as those which we make when we attribute thoughts to other people. And this suggests that we attribute thoughts to ourselves in the same way in which we attribute thoughts to other people, which conflicts with the traditional notion of self-knowledge as privileged. The relevant kind of study is illustrated by an experiment from the dissonance tradition in social psychology. In this experiment, a group of subjects who write an essay arguing for a conclusion that is contrary to what they believe, and who are led to think that they have freely chosen to do so, later change their reports about their beliefs regarding the subject matter of the essay. By contrast, a group of subjects who write such an essay under the impression that they have little choice about doing so do not change their reports about their beliefs regarding its subject matter.

    It is clear that studies of this sort suggest that our access to our own thoughts is far from certain. I think that more needs to be said in order to establish that it is just as unreliable as our access to other people’s thoughts, though. The traditional theorist of self-knowledge can admit that we make mistakes about our own thoughts. (After all, nobody who espouses the traditional notion of self-knowledge denies, for example, the possibility of self-deception.) The issue regarding the authority of self-knowledge is whether we are better at telling which thoughts we have than we are at telling what other people think. Presumably, the idea is that the volume of similar data discussed in Carruthers (2011) is so substantial that there is indeed reason to think that we make just as many mistakes while trying to tell what we are thinking as we make when we try to tell what other people are thinking. Notice, though, that the two features of the traditional notion of self-knowledge do not stand or fall together. Specifically, self-knowledge may turn out to be privileged even if it is not authoritative. Thus, it seems reasonable to ask whether the dissonance experiment mentioned above shows that we attribute thoughts to ourselves in the same way in which we attribute thoughts to others.

    I do not think that it does. We attribute thoughts to other people based on behavioral evidence and inferences to the best explanation. If I hear you say “university fees should not be raised”, then I conclude that you must believe that university fees should not be raised, because I believe that you speak English, that you are not trying to fool me, and that you are trying to convey your views about some subject matter to me. Given all this, the hypothesis that you believe that university fees should not be raised seems to be the best explanation of why you are saying what you are saying to me. The point that our knowledge of other people’s thoughts is interpretive in this sense is, I take it, not in dispute. But this is not what, according to Carruthers, the above-mentioned subjects who change their reports regarding their beliefs have done. Those subjects are not supposed to remember, first of all, having freely advocated a certain view, and then to make sense of their behavior by concluding that they must believe that the view in question is correct. Instead, Carruthers’s explanation of the data relies on two interesting claims: (i) freely undertaken counter-attitudinal advocacy makes one feel bad, and (ii) attributing to oneself a belief that conforms with one’s actions makes one feel better. The idea is that, given (i), subjects who advocate a view that is contrary to what they believe, and who do so under the impression that they have freely chosen to do it, experience a negatively valenced state of arousal. And later, when they need to decide whether the view in question is what they believe or not, they report that it is indeed what they believe, because they are trying to make themselves feel better and, given (ii), their reports will have that effect.

    Let us put aside the issue of whether (i) and (ii) are correct, and whether they do explain the data from the dissonance experiment. (It occurs to me, however, that if (i) and (ii) can be used to explain the data from the dissonance experiment in the way sketched above, then an interesting question arises: Shouldn’t it be the case, then, that defense lawyers will tend to report believing that their clients are innocent after having defended them? This is an empirical matter, and perhaps defense lawyers do indeed report believing that, though it would be quite surprising if it turned out that they do.) Even if (i) and (ii) explain the data from the dissonance experiment, it seems clear that we do not attribute thoughts to other people based on whether our states of arousal are positively or negatively valenced. We seem to be able to attribute thoughts to other people even when they are total strangers and we do not care at all what they are thinking, so that it makes no emotional difference to us whether they believe one proposition or another. Self-knowledge may be interpretive in the sense that self-attributions of beliefs are emotionally, or affectively, biased, but this does not show, as far as I can see, that it is not privileged. All it shows is that, when we attribute thoughts to ourselves, there is a certain kind of mistake to which we are susceptible, or perhaps even prone (namely, mistakes that are due to an affective bias). This is consistent with the view that the procedure that we use to attribute thoughts to ourselves is different from the procedure that we use to attribute thoughts to other people. As a matter of fact, if the two procedures were different, then there would be no reason to expect that one of them should be just as vulnerable to certain types of error as the other one.

    A separate concern for the interpretive view is that we are able to attribute thoughts to ourselves in the absence of any action available for interpretation. I know, for example, that I believe that there are no clowns on Jupiter, and that fewer than 100 million people live in Andorra. But it is hard to see what kind of action could serve as my basis for interpretation in arriving at my self-attributions of those beliefs. Carruthers could reply that, as soon as I entertain the doubt of whether I believe that there are clowns on Jupiter or I do not believe it, I will say “There are no clowns on Jupiter” in inner speech, which will serve as my basis for interpretation. But I worry that such a response would simply make the interpretive model a version of the classical inner-sense model of self-knowledge. A related worry is that, in some cases, we can attribute to ourselves the lack of a belief in some proposition even though we do not disbelieve it, and we do not disbelieve its negation. We simply suspend judgment on some issue, and we know it. I know, for example, that I do not believe that the number of stars in the sky is even. (And I know that I do not believe that it is odd.) In these cases, not only have I performed no action which I can interpret in order to arrive at my self-attribution of the lack of that belief, but I have also not had any sensory access to an episode of inner speech that could serve as my basis for interpretation. Presumably, if we suspend judgment on whether some proposition is correct, then we affirm neither that proposition nor its negation in inner speech. And it is to be expected that we will behave neither as if the proposition were true, nor as if it were false. So how can we attribute to ourselves, through interpretation of sensory information, a lack of belief in these cases?

    None of these worries threatens Carruthers’s point that our self-knowledge is not authoritative. If there is indeed empirical evidence that we are just as prone to error when we attribute thoughts to ourselves as we are when we attribute thoughts to other people, then that is surely a feature of the traditional notion of self-knowledge which is in need of revision. At the very least, that is one of the reasons why we should be interested in the findings discussed in Carruthers (2011). However, it is not obvious to me that the relevant data also show that self-knowledge is not privileged. I think that what emerges from examining the kind of empirical evidence that Carruthers has in mind is that the relevant sense of ‘interpretive’ may be different in philosophy and cognitive science. For the reasons mentioned above, it seems to me that the sense in which the evidence offered by Carruthers suggests that self-knowledge is interpretive is not the sense in which, according to the traditional notion of self-knowledge, self-knowledge is not interpretive. And that point should allow the traditional theorist of self-knowledge to preserve a sense in which self-knowledge is privileged, namely, that it is not inferential.

  • Peter Carruthers

    Fernandez in his comment operates with a very weak understanding of the sense in which self-knowledge is supposed to be authoritative. According to Fernandez, we have authoritative self-knowledge provided that we are more reliable about our own propositional attitudes than we are about the attitudes of others. But would anyone really want to say that we have authoritative self-knowledge if we are correct about ourselves 25 per cent of the time and correct about others 20 per cent of the time? And would we even want to say that we have authoritative self-knowledge (without being authoritative about others) if we are right about ourselves 80 per cent of the time and right about others 75 per cent of the time? On the contrary, the way AUTHORITY is standardly understood in the philosophical literature on self-knowledge includes a clause such as that first-person attributions cannot normally be gainsaid from a third-person perspective, or some such. It is only in this stronger sense that I deny authoritative self-knowledge. (The weaker, purely comparative question is plainly empirical, and entirely open, in my view.)

    Fernandez also claims that self-knowledge of attitudes that is based on one’s own verbal expressions in inner or outer speech is only possible if something like the traditional view is presupposed. This is because it would need to rely on an inference from my having found in the past that my speech is generally a reliable indicator of my attitudes. But these are not the grounds on which we come to know the attitudes of another person through what they say. Rather, the mindreading faculty working in concert with the language faculty arrives at an attribution of an attitude, and in the absence of cues of deceit we form a belief that the other possesses that attitude. This belief counts as knowledge provided that the interpretive process is generally a reliable one. Likewise, then, in our own case.

Before coming to what Fernandez says about the dissonance experiments, let me first address his final main point: that I can know, in the absence of any behavior, that I believe that there are no clowns on Jupiter. Granted, I can SAY in response to a query that I believe there are no clowns on Jupiter without having to engage in self-interpretation. But this is because speech-production in general does not proceed from beliefs ABOUT my beliefs, but just from my beliefs (together with many other factors). In this respect so-called “outward looking” accounts of self-knowledge have it right. When asked whether I believe that P, I address a first-order question — “P?” — to my memory systems. If the information that P is stored in memory or (as in the example about clowns on Jupiter) can immediately be computed from what is stored in memory, then that information is made available to language production systems, resulting in the utterance “P” or (given the form of the initial question) “I believe that P”. One does not FIRST form the belief that one believes that P and then say it. Rather, one says it, and the result is that one knows that one believes that P.

    If this is right, then I think Fernandez is wrong in his account of what happens in a typical dissonance experiment. It is not that my current state of negative arousal is EVIDENCE that I believe something other than I do. Nor is it that it provides a REASON for thinking that I believe other than I do. Rather, when I rehearse what I should say in response to a query, the prospect of SAYING the thing that is other than I believe is appraised positively (because it will make me feel better), and so I say it. I only come to have a false belief about my belief subsequent to my own assertion, because I will hear myself as saying that I believe that, and have no reason to reject the interpretation. This is just as happens when I hear the utterance of another person.

    Finally, on what it means to say that self-knowledge is, or is not, interpretive: I take it that one thing that everyone espousing traditional accounts of self-knowledge agrees upon is that self-knowledge is QUITE DIFFERENT FROM other-knowledge: it uses very different kinds of evidence, and very different inferential principles. That is exactly what the ISA account denies.

  • Introspection, Inattentional Blindness and an Insufficient Inferential Base

Comments on Carruthers’ “Knowledge of Our Own Thoughts is Just as Interpretative as Knowledge of the Thoughts of Others,” for the National Humanities Center, 3 Nov 2011. References to Carruthers’ discussion will be given as (KOT:p#).

Georges Rey, Philosophy, Univ of Maryland, College Park, MD 20742.

    1. Introduction

    Carruthers defends an “Interpretative Sensory-Access” (“ISA”) account of self-knowledge, according to which concurrent self-ascription of attitudes is based only upon sensory, behavioral and contextual (“SBC”) data, and not on any other data to which people might have special introspective access.[1] In (Rey, forthcoming) I have criticized his view and have defended what I’ll call “TAGS,” according to which people introspect many of their attitudes by virtue of the causal efficacy of tags affixed to some of the representations that are the output of some of their mental processes, much as standard computers mark the output of different programs run on them (e.g. “.doc” for MS-Word documents, “.jpg” for photos). Many self-ascriptions, say, of concurrent doubt, grief, resentment, can be especially reliable insofar as they are the consequence of complex constellations of such tagged representations.[2] In this brief space I’ll mostly summarize my reasons for rejecting ISA. For fuller discussion and defense of TAGS, readers should consult my longer paper at

I should stress that I share Carruthers’ scepticism about traditional a priori-ish claims about the infallibility and “immediacy” of first-person concurrent ascription. Virtually all judgments are fallible and “mediated” by lots of other unconscious processes, and, in any case, the issues here are empirical. TAGS is simply a defense of the possibility of some privileged access to some propositional attitude states. My primary reason for preferring it to ISA is that SBC data alone seem to me insufficient to account for the reliability that self-ascription seems often to exhibit.[3]

    Some preliminary issues:

    2. Interpretative Processes vs. Data

    ISA involves two claims that need to be distinguished: (i) that introspection is interpretative, and (ii) that the only data available for self-ascription is SBC data. However, attributions of any property to anything can of course be (unconsciously) inferential, but they also might involve quick sensory pattern matching, with no serious “inference” or effort at “explanation” at all: attributions of fear to anyone might sometimes involve a quick match between a sensory display and a stored prototype, whether it be SBC data (and/or tags) in one’s own case, or only BC data in the case of others. Moreover, TAGS can allow that sometimes self-attributions might be inferential, as when someone concludes she has selfish motives because everyone does; and it can allow that many cases might be mixes of inference and pattern-matching. The issue between ISA and TAGS is not issue (i), about “dual methods” (KOT:2), but only (ii), whether there is special “dual” data.

    3. Inattentional Blindness and Competence vs. Performance

    People’s attention can easily be affected by their background beliefs, expectations and interests, as in recent cases of “inattentional blindness,” where people attending to one stimulus can fail to notice another (Mack and Rock, 1998), or where people can fail to see what they don’t expect, e.g., a black heart in a rapidly exhibited deck of cards (Bruner and Postman, 1949). TAGS could easily explain the confabulations that Carruthers (KOT:2) cites — e.g., split-brain cases (2011:39-42), cognitive dissonance (KOT:3-4), and source confusion (2011:162-5) — as just such cases of people failing to attend to their inner tags because of their interests, desires, expectations or distraction by other data.[4]

    Carruthers claims that “dual method” theories (and presumably “dual data” ones like mine) “need to provide some principled account of the circumstances in which people access their thoughts directly and the circumstances in which they rely on self-directed mindreading… Many instances of confabulation concern perfectly ordinary everyday thoughts occurring in circumstances where people should have been paying attention to their thoughts… [and] should have had authoritative access to their thoughts if such a thing is ever possible.” –(KOT:2-3)

But appeals to inattention need no general account of when and where people pay attention, or when they rely on one set of data rather than another. It would certainly need to be shown that “a direct question about one’s beliefs…would have the effect of activating the belief from memory,” so that there would be “no reason why a judgment of this sort should remain unconscious” (KOT:4). Carruthers’ own proposal about dissonance, that subjects want to seem “less bad” (KOT:4), supplies precisely the reason someone might ignore or trump her tags and suppress the original belief — this is familiar enough self-deception, compatible with TAGS and any reasonable introspectionism.

    Of course, it’s one thing for inattention possibly to explain the errors, another for it actually to do so, and I don’t mean to suggest that the latter has yet been shown. Some cases, like the physiologically bizarre cases of split-brain patients, may well be ones in which the person really just is relying on SBC data. But if there are internal tags, people may need to be pressed hard to fix their attention upon them. After all, what do they ordinarily care whether they’re interpreting SBC data or genuinely introspecting? Nuances about introspective sources may ordinarily be disregarded, just as the nuances of visual experience go unnoticed until our attention is drawn to them by impressionist paintings. Indeed, an exquisitely Proustian sensitivity to the exasperating ups and downs of one’s inner life may be a pretty sure way to fail in one’s outer one (Proust, after all, had trouble getting out of bed!).

    In any case, inattentional blindness points to a general methodological problem in Carruthers’ discussion: performance errors alone aren’t sufficient for refuting a competence claim. Errors in self-ascription are no better evidence of lack of introspective competence than the inattentional blindness cases are of an actual defect in vision. What would show a lack of competence is failure across the board, especially when interfering factors, like inattention, are controlled for.

    Are there introspective failures across the board? People certainly appear ordinarily to know whether they are concurrently, e.g., fearing, expecting, doubting, hoping, judging, supposing, etc., far better than they know these facts about anyone else. How reliable are they in fact? I know of no studies; but I also know of no serious doubts. This is at least partly because we have a good deal of indirect evidence of this reliability: people often seem to be able to remember what attitudes they have had at least shortly after they have had them, and usually act consonantly with them (KOT:5). There are, of course, the familiar cases of repression, self-deception, the grip of bad theories, and confounding episodic memory with rational reconstruction. But this is hardly evidence of failure across the whole attitudinal board. In any case, all that concerns me here is how to explain whatever “general reliability” that Carruthers (KOT:5) agrees people do in fact display.

    4. Objectivist Explanations of Reliability

    Carruthers (2009a:7) attempts to explain our general reliability by appeal largely to the fact that we spend an awful lot of time with ourselves, and so have abundant evidence stretching back a lifetime, in addition to present SBC data (KOT:1).[5] But it’s not clear how even all that would be remotely sufficient.

One way to bring out the difficulty is to consider cases in which BC data are minimal, say, someone day-dreaming, lying in bed, or sitting quietly in a nondescript room, eyes closed, for twenty minutes — what we might call “Meditative Cases.” ISA is committed to our self-knowledge in such cases being based on sensations and knowledge of one’s history alone, and so should predict that people are less reliable in such cases than in wide open-eyed, contextualized ones, since they would be basing their inferences on significantly less data, certainly less than they use in 3rd-person cases. And, of course, perhaps people are less reliable in these cases. But, again, taking subsequent behaviors and memories of such episodes at face value — say, paying a bill you suddenly felt guilty for neglecting — there is so far no reason to suppose they are.

    Carruthers (2011) briefly discusses Meditative cases:

“There are numerous other sensory-involving cues that are available in such circumstances. One will be aware of one’s visual imagery, inner speech, affective feelings, motor imagery, and more besides. These might well be sufficient to ground attributions of thoughts to oneself, especially when one recalls…the two processing rules [(1) and (2)].” –(2011:158, emphasis mine; cf. pp. 92-3).

    (1) and (2) are the simple processing rules at KOT:5, applications of which “leave no room” for the possibilities of ignorance or error. But these principles are irrelevant here, for the issue is not how to infer one is/isn’t in some mental state because one thinks one is/isn’t, but, per the above worries about data, how one comes up with any ascription at all! There I am lying sleepless in bed, with an ache in my leg, an itch in my ear, an advertising jingle and images of Paris — and then a sudden memory of a trifling encounter with a neighbor — circulating in my mind. What am I to make of it all? Carruthers (2011:70) claims that 1st- and 3rd-person interpretative activity rely on “essentially the same interpretative principles.” But, however plausible this might be for full SBC cases, it seems patently false for these sorts of meditative ones, where standard means-ends reasoning by which we explain behavior is largely inapplicable. Inner imagery or speech is seldom produced for any purpose whatever; pace Freud, its course seems virtually random (and, in insomnia, often out of character, as in the sudden fixing on the trifling encounter). I’d be seriously at a loss to explain most of my own. Nevertheless, I could be quite confident that I was savoring the prospect of being in Paris, was suddenly resentful about the neighbor — and wished I could get the jingle out of my head. Perhaps I’d be wrong; but not as wrong as I would have been had I relied on merely an inference from the sensory data.

Note that it’s not enough that sensory imagery “might well” be sufficient to support an inference in this case or that meditative case, as in the above suggestion and Carruthers’ other few, sketchy examples (e.g., 2011:69, 98, 216). What needs to be shown is that they generally and reliably would be. By way of comparison, consider the case of open-eyed vision. Vision theorists face the non-trivial problem of explaining how a three-dimensional representation of a scene can be based reliably upon merely retinal stimulations, a problem partly solved by Shimon Ullman’s theorem showing how structure can be inferred from motion, and ultimately 3D forms from retinal intensity gradients (see Palmer 1999). Is there the slightest reason to think there would be any remotely analogous principles in the case of self-ascription on the basis (in meditative cases) of sensory data and background beliefs alone?

    5. Conclusion

    If people are even approximately as reliable in their self-ascriptions as they seem to be, it’s hard to see how this could be explained by their relying only on SBC data. I submit they would need some inner tags, even if, as in confabulation cases, they might sometimes disregard them. Of course, just which ones, and how and when they are used and preserved, would need much further research.[6]


1. In his (2011: chap. 5) he adds data regarding affect. He also claims Working Memory is confined to “sensory data” into which conceptual contents are somehow “bound” (2011:57-8, 72-8, 166-78). Since both of these (to my mind, highly problematic) emendations aren’t crucial here, I leave them for another time.

    2. The content of the attitude is provided by the content of the representation(s) tagged. Note that TAGS is not committed to claiming that tagged states correspond to crude folk categories like belief, desire or resentment. The claim is only that ascriptions of the latter rely on responding to constellations of the former.

    3. Another worry I have about ISA is its getting the phenomenology wrong, confining it (as too many philosophers do) to mere phenomenality (see Rey, forthcoming, pp15-6). Carruthers also argues for ISA on the basis of speculations about economy and evolution, which I won’t address given what I take to be our ignorance of sufficient psychology and history.

4. Carruthers (2011:165) claims inattention can’t be responsible for mistaking imagined visual images for actual visual presentations, citing Petersen and Graham (1974). But this experiment shows only that imagery facilitates visual identification, not that the two are confused. Perhaps there are cases of genuine, incorrigible confusion, but they need to be examined closely, with the subjects pressed to attend carefully.

5. He also appeals to the way that some attitudes can be “self-fulfilling” (2011:94-5), as when one feels committed to a decision one thinks one has made. But it’s hard to see how this would apply to any but a minority of special cases. Many self-ascriptions, e.g., of wonderings or imaginings, seem committally quite neutral, and others, e.g., of fears, anxieties, forbidden desires, are of states one often wishes to get over!

    6. I’m grateful to Mark Engelbert, Andrew Knoll and Brendan Ritchie for comments on a draft.


Bruner, J. and Postman, L. (1949), “On the Perception of Incongruity: A Paradigm,” Journal of Personality, 18: 206-23.

Carruthers, P. (2009a), “How We Know Our Own Minds: The Relationship Between Mind-reading and Metacognition,” Behavioral and Brain Sciences, 32(2): 1-62.

— (2009b), “Cartesian Epistemology: Is the Theory of the Self-Transparent Mind Innate?”, Journal of Consciousness Studies (forthcoming).

— (2011), The Opacity of Mind, Oxford: Oxford University Press.

Mack, A. and Rock, I. (1998), Inattentional Blindness, Cambridge, MA: MIT Press.

Petersen, M. and Graham, S. (1974), “Visual Detection and Visual Imagery,” Journal of Experimental Psychology, 103: 109-14.

Rey, G. (forthcoming), “We Aren’t All Self-Blind: A Defense of a Modest Introspectionism,” available at

  • Dominic Murphy

    Struggling with empirically unmotivated philosophical treatments of self-knowledge is a wearying experience, and in contrast coming across Peter Carruthers’ theory is like finding a warm room with a bar. There is a strong temptation to relax and assume everything will be all right now: at last someone is doing it properly. For I am entirely on Carruthers’ side in this fight. Since we are philosophers, though, I shall naturally just talk about what I found to disagree with. In particular, I worry that the argument he gives here does not quite have the force he thinks.

    As far as the negative case goes, I agree with everything. I don’t think we have authoritative access to our mental states, and I don’t think our knowledge of our cognitive innards is a different sort of knowledge to our knowledge of what happens outside our minds. Self-knowledge is delivered by the same routes as knowledge of anything else — regular old perception, memory, inductive and abductive reasoning, and all the rest of it. There is no special mental faculty or system through which we become knowledgeable about ourselves. Indeed, the difficulty of figuring out what such a system could be, given what we know of human cognition and its neurological basis, is an important motivation for adopting a view like Carruthers’, at least in my case.

    But why have philosophers treated self-knowledge as authoritative — indeed, almost infallible, as he says?

    Carruthers’ view, if I get it, is that the philosophers have been misled by our psychology. Built into us we have these inference rules that conclude that one is or isn’t in a mental state if one thinks one is or isn’t. Carruthers puts these rules into the ISA model because he thinks they are part of our mental furniture and explain why “people across time and place have such a powerful intuition of infallible (or at least authoritative) access to their own thoughts”. On this view, then, the philosophers who have peddled the myth of privileged inner access have endorsed a piece of folk psychology and tried to explain it. But is this really true? Carruthers may have a body of ethnographic evidence to appeal to on this point that I don’t know about, but my sense is that a commitment to first person authority is not at all a cultural universal but is — as Gilbert Ryle argued sixty years ago — more likely to be a philosophers’ fiction than a genuine component of our folk psychology or folk epistemology. After all, it is not so unusual to ask or be asked whether you really believe what you claim, or whether that’s really what you want. Carruthers thinks that most of us would be baffled by such questions. Certainly in some cases it would be dumbfounding to be challenged — if someone told me I don’t really believe that dogs usually have four legs I wouldn’t know what to say to them. But that is because the evidence for my belief is so overwhelming, not because the belief is mine. We are certainly open to being argued with about what Carruthers calls the provenance of our beliefs about our states of mind. The idea that we could be wrong about why we believe what we do is a familiar one — perhaps we only hold our beliefs because of our class interests, or because we don’t want to accept that we have misjudged those closest to us. 
We might not agree that we are wrong about the provenance of our beliefs, but I don’t think the idea that we could be wrong is as strange to us as Carruthers suggests. So I am unsure about the force of Carruthers’ claim that a belief in especially authoritative self-knowledge is a human universal. If I am right about that, then the reverse-engineering justification for his heuristic rules falls away, because they have no work to do.

    So far I have only discussed the sources of our beliefs, not their content: in this I have followed Carruthers who at one point talks about our thoughts about the provenance of our attitudes. I agree that the experimental psychology suggests widespread ignorance and confabulation on that point. But I take it the more philosophically interesting issue for the debate over authoritative self-knowledge concerns the actual belief or judgement, and not its provenance. Certainly, that is where the traditional view is most likely to make a stand: if I sincerely assert that I believe that Maine is cold in winter, surely I can’t be wrong about that? I suspect that many of the philosophers that Carruthers has in his sights will indeed dig their heels in at just this point, and insist that what is at stake is not my knowledge of why I believe what I do, about which I could indeed be wrong, but my knowledge of what I believe. After all, Descartes was prepared to accept that he could be deceived about all the evidence for his belief that he had hands, but refused to concede that he could be wrong about his having that belief.

    Carruthers mounts a direct challenge to the idea that I could be wrong about my current belief, but I don’t think the evidence he musters supports his case. Take the dissonance experiment. I don’t see why a determined Cartesian about self knowledge cannot just respond as follows: the students in the experiment start out believing that tuition should not be increased. They end up believing that maybe (or even definitely) tuition should be increased. They are certainly wrong about the cognitive processes that led to that change in mind. But they are not wrong about the beliefs that they express at the end of the experiment — they really do now have a different belief about tuition fees, and they can report authoritatively on that. Carruthers disputes this on the grounds that it is unlikely that subjects are really changing their mind, since “there is no plausible mechanism via which a question about one’s beliefs should lead to the formation of a new belief in these circumstances”. Well, it’s true that there is no rational mechanism that should cause belief change in this case, but that is not the same as there being no plausible mechanism. It is quite clear that people change their minds for all sorts of non-rational, even irrational reasons, so why shouldn’t dissonance reduction be among them? The mind contains all sorts of mechanisms that cause beliefs without justifying them: I find it very plausible that these experiments have uncovered yet another.

    So I don’t think that the experiments Carruthers reports on quite do the trick. There is doubtless a lot more evidence amassed in the book, which we will all need to read straight away, but although I am on the same side as Carruthers I am warier than he is of appealing to the psychological literature to suggest that we do not know our attitudes. I think the science shows that we are often ignorant — to a dramatic extent — of the causes and sources of our beliefs and other attitudes. But it does not clearly show that we are wrong about the attitudes we report. The science is certainly suggestive on that point, but I don’t think it is a definitive refutation of the Cartesian position.

So why am I on Carruthers’ side? Because everything we know about cognitive science makes it completely mysterious what the system could be that supposedly reports on my beliefs in the approved Cartesian fashion. We need to find out what the systems are that really underwrite self-knowledge, and we are starting to learn enough about how our psychology is put together to come up with some candidates. Carruthers says that the mind-reading system does all the work — that it is the only access we have to our mental states. But he also says, in response to Jordi Fernandez, that memory can be involved in reporting on our beliefs. I certainly agree, but that seems to me like quite another system. I suspect that if our knowledge of ourselves is like our knowledge of things in general it will involve all manner of systems, because knowledge in general is like that, and that self-knowledge involves processes that can occur without any involvement from mindreading, as for example when I simply remember what I think in response to a question about whether I believe Jupiter is inhabited. Carruthers, on the other hand, thinks everything from every system involved in self-knowledge goes through the mindreader, which is our only means of access to our mind. Which of these views is correct is at least in principle open to empirical scrutiny. It’s a great day when philosophical theories mature enough to be handed over to the scientists, and Carruthers’ ISA account has greatly helped that process along.

  • Peter Carruthers

    Rey claims that there are a special set of tags attached to our attitude events that allow us to know of them in our own case. He says that this account agrees with my Interpretive Sensory-Access (ISA) theory about the interpretive component, disagreeing only about the need for sensory access. But this is a mistake. The ISA theory does not merely claim (as does the TAGS model) that our access to our own attitudes is inferential, relying on subpersonal computations or inferences of one sort or another. (This is something that inner sense theorists, too, will accept.) Rather, the claim is that our access is interpretive in the same sense that our access to the attitudes of other people is interpretive — relying on the same mindreading system and the same set of tacit interpretive principles (“seeing leads to knowing”, “choice reflects preference”, and so on). This, of course, TAGS is committed to denying.

    Rey argues that a good reason to believe in TAGS (and to reject the ISA account) derives from the immense range of attitude events that we unhesitatingly attribute to ourselves, even in cases where behavioral and contextual data are unavailable (Rey calls these “meditative cases”). But this, too, is a mistake. Think of the range of attitudes one can unhesitatingly hear a friend as expressing while talking with her on the telephone (where behavioral and contextual cues other than those of the previous discourse are likewise unavailable). One can HEAR someone as fearing, expecting, doubting, judging, hoping, supposing, and so on, as a result of the activity of one’s mindreading system working in collaboration with the language faculty. Likewise, then, in meditative cases: one can hear oneself as expressing any number of attitudes as a result of interpreting one’s own inner speech; and in these cases one will additionally have the benefit of one’s own visual imagery, affective feelings, and so on, to assist in interpretation.

    It is an interesting question why we should never (or only very rarely) become aware of the ambiguities in our own inner speech, in the way that we are often aware of ambiguities in the speech of others (causing us to pause and reflect). In Carruthers (2011) I detail a number of independent but interacting strands of explanation. But the simplest to explain is as follows. One of the standard principles employed in speech comprehension is the relative accessibility of various concepts and structures. For example, if we have just been talking about finances, I shall automatically interpret your utterance, “Well, now I need to go to the bank” as about a financial institution, and not the bank of a river. This is because the previous context renders one concept more accessible than the other. But note that the thoughts that initiate an episode of inner speech will involve activations of some specific set of concepts. When the resulting representation of a heard sentence is processed by the language comprehension system, those concepts will perforce be readily accessible, making a correct interpretation highly likely.

    Rey also claims that his TAGS account can easily explain all the instances of confabulation about current attitudes that I cite as evidence favoring the ISA theory. This is true in the very weak sense that his account is consistent with the evidence. But a theory can be rendered consistent with any body of evidence by selecting suitable auxiliary assumptions. What needs to be shown is that the assumptions are at least principled and at best ones that we have independent reason to accept. Moreover, the resulting account needs to be able to predict, not just that confabulation should sometimes occur, but the specific patterning that we find in the confabulation data. This is what the ISA theory does. It claims that people should make mistakes about their own attitudes whenever they are provided with cues of the sort that would lead them to make mistakes about others (which is just what we find); and the auxiliary assumptions needed to generate predictions of the data are in every case well-established findings or mechanisms. The TAGS account, in contrast, can only accommodate the data post hoc, by appealing to lapses of attention, or “repression”, or whatever.

    Contra Rey, what the ISA account requires for support (or rather, for this strand of support; there are many others) is not “failure across the board”. Rather, it is a pattern of failures predicted by the ISA account, and only by the ISA account. If self-knowledge results from turning our mindreading capacities on ourselves, then we should expect errors whenever cues are provided of the sort that would mislead us when attributing mental states to others. This is just what we find. No other theory makes such a prediction — certainly not TAGS. Nor is what is at stake well characterized as one of competence versus performance. For both the ISA and TAGS models agree that we possess competence for ascribing mental states to ourselves, often issuing in self-knowledge. What is at stake is the nature of that competence, not its existence.

  • Peter Carruthers

    Murphy is doubtful of my claim that some form of Cartesian epistemology of the self is a cultural universal. He points out that the folk will commonly say things like, “Is that what you really believe / want?” He takes this to suggest that they are open to the possibility of error. I am doubtful. For in all such cases I think the question could appropriately be glossed, “Or are you just saying that?” The challenge concerns the literalness and sincerity of an assertion, rather than the existence of self-knowledge.

    In writing Carruthers (2011) I cast my net as widely as I could before making a (tentative) claim about cultural universality. I considered major figures in the history of Western philosophy (ancient, modern, and contemporary), and I consulted experts in the philosophies of Ancient China and of the Indian subcontinent. The general finding was that all are consistently Cartesian (to some significant degree) about the nature of self-knowledge. It seems that whenever people have reflected explicitly about their knowledge of their own mental states, they have been led to embrace some kind of Cartesian view. And it is a striking fact that although many cultures have had robust skeptical traditions regarding the possibility of knowledge of the physical world, none seems to have contained skeptics about our knowledge of our own mental states.

    Murphy, like Fernandez, does not find the dissonance data completely compelling. He is right. No set of data is ever sufficient by itself to establish the truth of a theory. I claim only that the ISA account provides the best explanation of those data, even when the debate is construed narrowly (that is, without considering all the other evidence that I muster in support of the theory). It is important to recall some of the details of the phenomenon. People only alter what they say in circumstances where they have been made to feel bad. But they do not do so in advance of being asked what they think. (We know this, recall, because they will also utter other dissonance-reducing confabulations when given the chance, only making a change in response to the first-presented option.) So Murphy’s hypothesis must be that a question about one’s belief in the presence of negative affect (and not just any negative affect, but affect that has been caused in the right way in the right circumstances) causes the eradication of one belief and the formation of a new one, which is then accurately (perhaps authoritatively) reported. There might, indeed, be some mechanism that achieves this. But I have no idea what it might be or how it might work; and Murphy admits that he doesn’t either. As a result, someone committed to Cartesian epistemology cannot yet EXPLAIN the data. They have merely postulated a something — we know not what — that might explain it. In contrast, the ISA account genuinely can provide such an explanation, appealing to well-established accounts of human affectively-based decision making of the sort developed by Damasio and others. By my lights, a good explanation beats a non-explanation every time.

    Finally, Murphy argues that other systems are involved in reporting our beliefs. Of course this is true (and something that I emphasize). But a report of a belief is an action. It is not an item of knowledge. Knowledge is a cognitive state, not a performance. So Murphy faces a dilemma. Either self-knowledge of the belief expressed occurs first, prior to its articulation in speech. If so, we are owed some account of how that happens. Or self-knowledge occurs subsequent to hearing and interpreting our own verbal or other report. The latter will, on any account, involve the work of the mindreading faculty. So in the absence of mechanisms that can get access to, and form beliefs about, our beliefs directly, all self-knowledge must, perforce, be “channeled” through the mindreading faculty, as I claim.

  • I am going to distinguish between two importantly different claims Carruthers makes in his challenging and original contribution to our understanding of the nature of self-knowledge. The first claim is that there are significant similarities between knowledge of our own mental lives and knowledge of the mental lives of others, including a shared fallible, non-privileged, and “interpretive” nature. The second claim is that the very same “mind-reading” faculty underwrites attributions of psychological states both to oneself and to others.

    These two claims are separable: e.g., there might be two distinct psychological faculties — one that underwrites self-knowledge and another that underwrites other-knowledge — that share the interpretive and fallible nature that Carruthers highlights but that differ in other respects. Or there might be non-overlapping collections of cognitive faculties that underwrite self-knowledge and other-knowledge, all of whose processing can plausibly be construed as similar forms of interpretation and inference. My focus in this comment is on considerations that speak against Carruthers’ second claim, though I will return to the first claim briefly at the end.

    Carruthers’ claim that a single psychological faculty is responsible for all of our knowledge of mental states depends crucially on how psychological faculties are appropriately individuated. Two criteria that have been central to faculty individuation in psychology are (a) the class of informational inputs a faculty processes and (b) the rules that govern a faculty’s processing. Factors (a) and (b) are not the only criteria for faculty individuation, but they are important.

    Consider first the informational inputs to the cognitive processes underlying attributions of mental states to oneself and others. Carruthers allows that unique inputs, in some sense, are associated with self-knowledge, including visual imagery and inner speech. Elsewhere Carruthers (2011) highlights that self-attributions also rely, to a unique extent, on proprioceptive inputs (i.e., information concerning one’s bodily position and movements) and interoceptive inputs (i.e., information concerning one’s internal states, such as organ pain or hunger). Thus, the sets of inputs to the cognitive processes underlying self-knowledge and other-knowledge are in some significant respects disjoint.

    Further, the similarities between inputs that do obtain are fairly weak. Carruthers classifies the inputs together only under the broad rubric of “sensory”. Moreover, he claims such inputs are “globally broadcast,” which means they are available to (and presumably processed by) many different kinds of cognitive faculties. In short, considerations relating to criterion (a) don’t discernibly recommend positing a single mind-reading faculty that underwrites knowledge of one’s own and others’ mental states.

    Consider next the processing rules that govern self-attributions and other-attributions of mental states. Carruthers claims that we have evidence that these processes are highly similar and interpretive in character because we “mak[e] errors in self-attribution that directly parallel the errors that we make in attributing thoughts to other people.” This generalization might seem warranted given dissonance studies and studies on confabulation.

    But even if we grant the interpretation of these studies that Carruthers advocates, this does not address a certain class of errors that many philosophers, going back to Wittgenstein, have claimed plagues other-attributions but not self-attributions, namely, “errors of misidentification”. As an example of a misidentification error, consider attributing to your friend the belief that she is hungry based on seeing someone touching her stomach while complaining that she needs food. In this case, an identification error could arise if you misidentified someone else as your friend because of that person’s similar voice and appearance.

    Misidentification errors of this kind are not, on the face of it, precisely paralleled in self-attributions. Misidentifying other people and misattributing beliefs to them is not uncommon, but few of us have ever had occasion to say, “I believed I was hungry and indeed *someone* believed that she was hungry, but in fact it was not I who believed I was hungry”. Perhaps such errors of misidentification in self-attribution are possible in exotic and rare circumstances, but self-attributions such as “I believe I am hungry” and the like do not typically appear prone to such errors. This apparent difference is not predicted by Carruthers’s ISA account and instead provides prima facie reason to differentiate the kinds of cognitive processes underlying attributions of mental states to oneself from the kinds of cognitive processes underlying attributions of mental states to others.

    Another potential processing difference concerns inner and outer speech interpretation, which Carruthers claims are components, respectively, of self-knowledge and other-knowledge. A prima facie difference between these processes, which Carruthers attempts to explain away, is that we can be consciously aware of ambiguities and interpretive processes for outer but not inner speech. Nevertheless, Carruthers claims that a first-pass, unconscious interpretive process occurs in both cases. Further, he holds that inner speech interpretation has greater access to salient contextual features for resolving ambiguities and that a unitary mindreading faculty models everyone as having transparent access to their own mind. Thus, Carruthers claims, inner and outer speech are interpreted by the same kinds of cognitive process.

    Even adopting these hypotheses on the nature of inner speech interpretation and transparency rules in the mindreading faculty, however, does not appear to answer the strongest form of this criticism, viz. that when interpreting inner speech, unlike when interpreting outer speech, we *cannot* call the interpretive processing to conscious attention. The “short-circuiting” Carruthers posits in this instance might instead be construed as a significant difference in the cognitive architecture underlying the interpretive processes for inner and outer speech. Further, the short-circuiting role that Carruthers attributes to the transparency model inherent in the mindreading faculty appears ad hoc in the case of inner speech interpretation, since it does not appear to short-circuit higher-order conscious reasoning for self-attribution of mental states when such attributive processing takes into account possible cases of self-deception, confabulation, and the like.

    Ultimately, the full range of similarities and differences between cognitive processes, and additional factors such as the evolutionary history of the relevant cognitive structures, should be considered when classifying psychological faculties. I have not attempted to make an overall assessment here, but have instead highlighted potential differences that support positing one or more distinct cognitive faculties associated with attributing mental states to ourselves and others.

    Even if my criticisms above of a single mindreading faculty warrant positing multiple faculties, however, they do not directly undercut Carruthers’s first claim that self-knowledge is indirect, interpretive, and highly fallible. Carruthers may still have grounds for criticizing many projects in traditional epistemology and for drawing out implications for our conception of ourselves as moral agents. However, if Carruthers is mistaken in positing a single mindreading faculty, we might expect there to be proprietary kinds of inferences and interpretation involved in self-knowledge as opposed to our knowledge of the minds of others.

  • Peter Carruthers

    Fuller raises an interesting question concerning the individuation of mental faculties. He suggests that even if the ISA account is correct that self-attributions of mental states are just as interpretive and sensory-based as attributions of mental states to others, there may be distinct mental faculties underlying competence in the two domains.

    Fuller says that one reason for believing in two distinct faculties is that the inputs are significantly different in the two cases. He exaggerates the difference, however. Although visual imagery and inner speech are available in the first person but never in the third, these should not be counted as inputs distinct from the vision and hearing that guide our attributions of mental states to others. The difference lies merely in their causes (endogenous versus exogenous), not in their representational content or the mechanisms that realize them. (In fact we know that imagery and perception share the same mechanisms.) But Fuller is correct to point out, nevertheless, that self-attributions can also utilize proprioceptive and interoceptive experiences that are never available to us in the third-person.

    I think it is implausible to claim, however, that we have reason to postulate distinct faculties whenever inputs are of different kinds. More important, to my mind, is the DOMAIN of the faculty, which concerns its output, not its input. For whenever faculties deal with abstract, amodal matters, they are likely to receive inputs of many kinds. Consider numerosity, for example, which most animals as well as human infants can compute. The numerosity system takes inputs of many sorts, enabling one to judge the approximate numerosity of a visually presented set of items, a sequence of sounds, or a proprioceptively presented sequence of bodily movements like finger-taps. This provides no grounds for saying that there are really three or more distinct numerosity faculties. On the contrary, what matters is that the system computes representations from the same domain (number), and that it is realized in a common brain network. We have just the same reason to believe that there is a single system that computes mental states for self and other. The domain is the same. And although the evidence is not as extensive or as clean as one might like, it appears that the very same network of brain regions is involved whether we attribute mental states to others or to ourselves (Ochsner et al., 2004; Chua et al., 2006, 2009; Lombardo et al., 2010).

    Fuller also suggests, however, that the processing employed for self and other is quite different, and cites the fact that we seem to be immune to error through misidentification – although we can be mistaken about what mental states we are undergoing, it seems we cannot be mistaken about WHO is undergoing those states. Fuller says that since the ISA account does not predict this result, we therefore have reason to think that a distinct mental faculty might be involved in the first-person case. But in fact ISA does predict that we cannot err through misidentification in connection with sensory and imagistic states. This is because they are presented to the mindreading system in ways that the sensory and imagistic states of other people never could be. But ISA claims that it is false that we are immune to error through misidentification for attitude states, since these can be self-attributed on the basis of one’s own circumstances and behavior, just as the attitudes of others can be. In such cases there will always be a substantive (and potentially mistaken) assumption made, namely that the circumstances and behavior in question are one’s own. (However, the ISA account predicts that errors of this kind should be rare, for obvious reasons.)

    Consider the phenomenon of “thought insertion” in schizophrenia. A subject might believe that Oprah is telling him to kill his mother, for example, mistaking his own inner speech for the speech of another person. He is right that SOMEONE is urging him to kill his mother, but he is mistaken about WHO is urging him to do that. These seem like clear cases of error through misidentification. And note that schizophrenia is strongly associated with difficulties in third-person mindreading as well (Brune, 2005; Sprong et al., 2007). This is, at least, consistent with the ISA account.

    The fact that most philosophers believe that immunity from error through misidentification is obviously true of attitudes as well as experiences is actually just another manifestation of the Cartesian intuition. Since people think that our own attitudes are presented to us in ways that the attitudes of other people never could be, of course they will intuitively believe that there is no question of misidentifying the bearer of those attitudes.


    Brune, M. (2005). “Theory of mind” in schizophrenia: A review of the literature. Schizophrenia Bulletin, 31, 21-42.

    Chua, E., Schacter, D., Rand-Giovannetti, E., and Sperling, R. (2006). Understanding metamemory: Neural correlates of the cognitive process and subjective level of confidence in recognition memory. NeuroImage, 29, 1150-1160.

    Chua, E., Schacter, D., and Sperling, R. (2009). Neural correlates of metamemory: A comparison of feeling-of-knowing and retrospective confidence judgments. Journal of Cognitive Neuroscience, 21, 1751-1765.

    Lombardo, M., Chakrabarti, B., Bullmore, E., Wheelwright, S., Sadek, S., Suckling, J., MRC AIMS Consortium, and Baron-Cohen, S. (2010). Shared neural circuits for mentalizing about the self and others. Journal of Cognitive Neuroscience, 22, 1623-1635.

    Ochsner, K., Knierim, K., Ludlow, D., Hanelin, J., Ramachandran, T., Glover, G., and Mackey, S. (2004). Reflecting upon feelings: An fMRI study of neural systems supporting the attribution of emotion to self and other. Journal of Cognitive Neuroscience, 16, 1746-1772.

    Sprong, M., Schothorst, P., Vos, E., Hox, J., and Van Engeland, H. (2007). Theory of mind in schizophrenia: Meta-analysis. British Journal of Psychiatry, 191, 5-13.

  • In his reply to my comment (above), Carruthers misses its main points:

    (i) As I stressed, the difference between ISA and TAGS is not over whether introspection involves interpretation – or even uses the same mind-reading system and interpretative principles. It may often do so. In fixing their beliefs about anything, I presume people can and often do use whatever evidence and principles they can get their minds on. The question is only whether in their own case they *also* have access to non-sensory data, viz., internal tags, unavailable in the case of others.

    (ii) Nor is the issue one of relative “hesitation.” I said nothing about this, and presume that some introspections on the basis of tags might be as perplexed and hesitant as about anything else. Indeed, as I also stressed, attribution of *anything to anything* can be inferential and/or interpretive – or it can be fast pattern matching, whereby – as I myself said – one unhesitatingly “sees” fear in another’s face, or, for TAGS, one is immediately aware of it in the constellation of one’s internal tags. The issue I pressed is not one of (un)hesitation, but of *reliability*: I see no reason to think that concurrent ascriptions of attitudes to oneself aren’t on the whole significantly more reliable than ascriptions to others, as they appear to be. If they are, then ISA incurs a burden of explaining how they could be, especially in meditative cases in which behavioral and contextual evidence is exiguous, and in which, most importantly (a point to which Carruthers made no reply), the usual means-ends and other interpretative principles applicable to others are patently not applicable. To repeat, inner imagery is not *purposive* in the way outer behavior typically is. Even if there is interpretation here, it will have to be based in part on different “principles.”

    (iii) The problem in these meditative cases is not merely semantic ambiguities in inner speech, which may or may not be resolved by the concepts and associations that flood one’s mind in meditative cases (speaking for myself, I wouldn’t count on it). The problem for ISA is how one is to determine which *attitude* one bears to even disambiguated inner speech. I lie there insomniac, the image of an altercation with a neighbor suddenly filling my mind: how would that and other images alone provide a sufficient basis for me reliably to infer that I *resent* her remarks, or, alternatively, *feel guilty* or *ashamed* of my own; or perhaps merely *irritated* at the city for not having demarcated our properties? The imagery seems wholly compatible with all four and many other self-attributions, although it was and remains nevertheless perfectly clear to me which one was true.

    As I emphasized at the end of my comment, what ISA would need to provide is some reason to think that somehow the same interpretive principles that we apply to others could, when applied to inner imagery, issue in self-ascriptions that are as reliable as they appear to be. Yes, I suppose “one can hear oneself as expressing any number of attitudes as a result of interpreting one’s own inner speech,” but is there a shred of reason to believe one’s inner speech is as *reliably* articulate in this way as the speech of others – or that this is really the way people *reliably* learn of their own concurrent fear (they hear themselves say “I’m frightened,” and so infer that they’re afraid? Not that I’ve heard!).

    (iv) I appealed to the inattention phenomenon in vision deliberately to provide a non-ad hoc account of the confabulation data. Indeed, surely it would be surprising if inattention didn’t occur in self-ascription, whether it is based on tags or on SBC data alone, and for precisely the sorts of reasons that ISA adduces: the person is distracted or misled by wishes, expectations or other SBC data. TAGS therefore explains precisely the same “pattern of failures” as ISA; the confabulation cases won’t serve as critical data between the two accounts. As with establishing any special competence, whether it be vision or introspection, what one needs is an explanation not of failures, but of special *success*, of the sort that appears to be exhibited most clearly in the meditative cases.


  • Peter Carruthers (2011) presents experimental evidence in favor of the strong claim that every form of self-knowledge is propositional, based on self-attribution and mindreading, and of an indirect, interpretive kind. My objection is not that this claim is wrong, but only that it is too strong. Contrary to what his essay above claims, a dual account of thinking about one’s thinking is able to offer “a principled way of accounting for the circumstances in which people access their thoughts directly, or rely, rather, on self-directed mindreading”.

    My arguments will draw on the processes engaged in metacognition, i.e., assessment of one’s own cognitive success in a given task. Experimental studies suggest that two systems are involved in human metacognition. One of them is based on feelings of fluency (or ease of processing), which apply to the structural aspects of a perceptual, conceptual, memory, or reasoning task, independently of the particular mental contents involved in it. This system offers direct access to one’s mental capacities – it does not require representing that one has attitudes with certain contents. It merely requires representing the task, and the uncertainty about whether it can be performed correctly. The function of these feelings thus seems to be to guide decision-making in a way that is task-dependent, affect-based, and motivational. Mindreading-based metacognition, on the other hand, assesses cognitive dispositions on the basis of a naive theory of the first-order task and of the competences it engages. In contrast with the former, the latter requires representing both the relevant propositional attitudes and their contents.

    The existence of a fluency-based system suggests, pace the ISA theory, that not every form of access to one’s attitudes is interpretive. There is a form of “procedural” management of one’s attitudes that is based on an affective form of epistemic assessment and experience, which associates a positive or negative evaluation with a given attitude – without needing to represent it as an attitude. Furthermore, the patterning of the data is now better understood. Conditions such as divided attention, low motivation, low personal relevance, and elated mood favor cognitive assessment based on fluency. In contrast, when task motivation and personal relevance are high, when mood is bad, and when indications are given to the subject that fluent experiences may be attributed to environmental influence, then analytic, theory-based metacognition steps in.

    Moreover, there is also a clear behavioral dissociation between procedural metacognition and theory-based prediction. Subjects may offer radically different assessments of cognitive dispositions in self and others when they have been engaged in a task than when they assess success in a detached way. In certain cases, an engaged judgment is more reliable. In Koriat & Ackerman (2010), judgments of learning based on the subjects’ own experience of a given task (self-paced learning of pairs of items) correctly use ease of processing as a cue: the longer you study a pair, the less likely it is that you will remember it. This cue is not used in trials where a yoked participant merely observes another perform the task. In these cases, a naïve but wrong theory is used, according to which the longer you choose to study items, the better you will remember them. In other studies, however, fluency-based judgments lead to incorrect predictions, whereas attribution-based predictions are reliable. For example, subjects rate their memory for childhood better when their task is to recall six rather than twelve childhood events. A yoked participant, however, would correlate memory rating with the number of events retrieved (Schwarz, 2004).

    Peter Carruthers might object that a subject, when engaged in a metacognitive task, has access to evidence that she fails to have when she is merely observing another agent. Thus it is expected in ISA terms that the validity of the self-evaluations should differ in the two cases. In response to this objection, note that the participants in the engaged condition are unaware of using an effort heuristic. None of them reports, after the experiment, having based their own judgment of learning on an inverse relation between study time and learning. A natural explanation for the dissociation discussed above is that procedural metacognition and mental attribution engage two different types of mechanisms. Engaging in a cognitive task with metacognitive demands allows the agent to extract “activity-dependent” predictive cues, i.e. implicit associative heuristics that are formed as a result of the active, self-monitoring engagement in the task. Predicting success in a disengaged way, in contrast, calls forth conscious theoretical beliefs about what predicts success in the task. This type of contrast between implicit heuristics and explicit theorizing, and between engaged and detached assessment, seems to support the view that one’s knowledge of one’s own thought is different in kind from one’s knowledge of the thoughts of other people.


    Koriat, A. and Ackerman, R. (2010). Metacognition and mindreading: Judgments of learning for self and other during self-paced study. Consciousness and Cognition, 19, 251-264.

    Schwarz, N. (2004). Metacognitive experiences in consumer judgment and decision making. Journal of Consumer Psychology, 14, 332-348.

  • Peter Carruthers

    I will reply to Rey’s second set of comments using his own numbering system.

    (i) Rey is correct that the difference between his TAGS view and the ISA theory is not whether interpretation using the resources of the mindreading system is EVER the basis for spontaneous and seemingly-introspective self-attribution. Given the data, he has little option but to allow that it often is. Rather, what is at stake is whether self-attribution of attitudes ALWAYS employs mindreading-based interpretation. This is what ISA claims, and TAGS denies. As a result, Rey is forced to embrace a dual-method theory of self-knowledge. Cases that are alike in seeming, phenomenologically, to be introspective will sometimes result from self-directed mindreading and sometimes from accessing an appropriate set of attitude-identifying tags. What this means is that Rey assumes an extra explanatory burden. Since both accounts allow that seemingly-introspective self-knowledge sometimes results from self-directed mindreading, why should we postulate the existence of a second mechanism at all? For in general (and all else being equal) simpler theories, which postulate just a single mechanism (as does ISA), are preferable to those that postulate the existence of something else as well.

    Indeed, in proposing his TAGS account, Rey must assume a number of additional burdens. We need to be told more about the functions of these tags, and their roles within an overall cognitive architecture. Carruthers (2011) explores a number of ways in which the theory might be developed, showing that there are both theoretical and empirical difficulties specific to each. But all have problems accommodating the extensive empirical literature on source-monitoring (Mitchell and Johnson, 2000), as well as the close correspondence between sensory-based working memory abilities and fluid general intelligence, or g (Colom et al., 2004).

    (ii) Rey challenges me to account for the distinctive reliability of self-attributions of attitudes in meditative cases, when external behavioral and contextual cues are unavailable. But in the first place, we have no real evidence of any great degree of reliability in such cases. The few instances in which we get up to phone an old friend or whatever (having just self-attributed a decision to do so, say) don’t begin to establish any sort of general reliability. And there are real problems even thinking about how one might get an empirical handle on the question, given that most thoughts in meditative cases, when one’s mind is running in so-called “default mode”, are completely forgotten and have no lasting impact on one’s mental or behavioral life.

    Moreover, it is far from clear what an appropriate comparison between first and third person should look like, for meditative cases. Certainly one should not compare reliability in first-person meditative cases with reliability in third-person meditative cases! For of course in the latter one has virtually no evidence to go on at all (nor will one generally attribute any attitudes). It is also problematic to compare first-person meditative cases with third-person cases equated for the amount of evidence that is available in each. For it is far from clear how to compare one form of evidence with another, here, or to judge how each form should be evidentially weighted. (I shall discuss the specific examples that Rey provides in (iii) below.) I guess what Rey is most likely to have in mind is a comparison of the degree of reliability in first-person meditative cases with the reliability of ALL intuitively-immediate third-person attributions. Not only do we have no real evidence pertaining to the former (as I pointed out above), but we have no real evidence pertaining to the latter, either – beyond the platitude that we seem to be pretty good at it. It is doubtful that any further intuitions that one might have at this point can bear any argumentative weight.

    Rey also claims that it is hard for ISA to explain how we so readily self-attribute attitudes in meditative cases, given that the imagery we experience is not purposive, and hence not subject to the same sorts of means–ends explanations that the mindreading system uses when attributing attitudes to other people. But Rey’s assumption of purposelessness betrays his tacit Cartesianism here. Granted, we are generally not aware of any purpose or goal when, as we say, we “allow our thoughts to drift.” But it only follows that there are no purposes or goals present if one assumes that such states are always conscious. In fact there is every reason to think that our “drifting thoughts” will be generated from interactions among a variety of purposes and goals (many of them quite fleeting), serving to direct our attention in such a way as to activate one sort of visual representation rather than another, or to issue in one item of inner speech rather than another.

    Moreover, a little reflection is sufficient to show that much interpretation of the behavior of other people takes place without us being overtly aware of their goals. Think, for example, of casual gossip over coffee with someone one knows well. One will generally have little conscious awareness of the goals that lie behind each utterance, yet we nevertheless manage to hear specific types of attitude as being expressed.

    (iii) Rey points out (quite rightly) that in my reply to his earlier comment I had discussed how particular contents can be reliably self-attributed in meditative cases, but had failed to discuss how particular attitudes toward those contents are likewise self-attributed. He mentions a number of different emotional attitudes in particular (resentment, guilt, shame, irritation, fear). In fact there is a range of potential sources of information available to one here. The most familiar idea is that emotions might come paired with distinctive interoceptive experiences, or so-called “gut feelings”. I am inclined, myself, not to place much weight on this source of evidence in my account. This is partly because it is unclear from the current literature whether there are patterns of somatic change distinctive of the various emotions, and partly because people seem not to be very good at identifying such patterns even if there are (at least in explicit tasks). But we also know that distinct emotions come paired with distinctive motor outputs, especially for facial expressions and bodily postures. And we know, likewise, that even fleeting forms of emotion issue in micro-changes in the facial musculature. So there is likely to be proprioceptive sensory information available to the mindreading system to aid in emotion identification, even in meditative cases. Moreover, if emotionally salient stimuli (whether externally perceived or self-generated in the form of an image) are processed in the way that other such stimuli are, then one might expect that emotion-specific appraisal concepts like WRONG or THREATENING would get bound into the sensory representations in question. (Compare the way in which concepts like CAR or PERSON are bound into the contents of perception, leading us to see something AS a car or AS a person.) If so, then reading off the emotion-type from one’s sensory experience will be almost as trivially easy as recognizing that one is imagining a car rather than a person.

    (iv) Rey claims that his appeal to inattention can explain the very same patterning of confabulations versus veridical self-reports that is accounted for by ISA. Oversimplifying somewhat, one can say that the pattern is this. When subjects are provided with sensory cues of the kind that would be likely to mislead them when attributing an attitude to a third party, they go awry, whereas in otherwise parallel cases when they are not provided with such cues, they do not. Evidence of confabulation has been found for a range of propositional attitudes using a wide variety of experimental paradigms. It seems, then, that if Rey is not to advance post hoc explanations of the data on a piecemeal basis, he will need to claim that what all these paradigms have in common is that they somehow serve to distract subjects’ attention from their own tags (which would otherwise enable direct and reliable self-report). This is beginning to sound quite a lot like traditional defenses of such things as fairies that live at the bottom of the garden: the fairies are there, but they disappear as soon as you look for them. Likewise here: the tags are there (and can ground introspective self-knowledge), but they fail to be attended to as soon as experimenters look for evidence of them.

  • Peter Carruthers

    Proust misstates my thesis in a small but crucial respect. I do not claim that all self-knowledge is propositional and based on interpretive mindreading. Rather, I claim that all self-knowledge of propositional attitudes is based on interpretive mindreading. (Or rather, I claim that almost all is – there are a couple of exceptions that have not been discussed in this forum.) So I am happy to allow that some forms of cognitive self-management rely on our sensitivity to epistemic cues like feelings of fluency or the amount of time spent studying, which more-or-less reliably signal the presence of a corresponding cognitive process. What I deny, though, is that these cues have metacognitive contents, in the strict sense employed by psychologists as involving “thoughts about thoughts”. Only when these cues are interpreted by the mindreading faculty, issuing in a judgment that one is confident, say, is genuine metacognition involved.

    I also think that the procedural forms of self-management that Proust has in mind mostly don’t require metarepresentation, nor any contribution from the mindreading faculty. Rather, for example, a feeling of disfluency, or a feeling of low confidence, provides a direct motive to switch to a different form of processing. Although one can talk, here, of “monitoring and control”, the process is not really metacognitive in nature. (Or at least, not in the sense that concerns me – I am perfectly happy if Proust wants to describe these as forms of “procedural self-knowledge” that don’t involve mindreading or metarepresentation.)

    I am well aware of the dissociation between procedurally-based metacognition and theory-based prediction that Proust cites. And I reply as she predicts that I will: these are cases in which opportunities for learning have been offered to the mindreading faculty in the first-person that were not available in the third. She retorts, in turn, that subjects in the first-person condition, while making roughly accurate metacognitive judgments about their learning, fail to notice the inverse correlation between study time and accuracy that they actually employ as a cue. But nor would they, if the relevant generalization were learned by the mindreading faculty, whose internal contents are not normally globally accessible. In the third-person case, in contrast, the mindreading faculty has been given no opportunity for relevant learning, so people have no option but to fall back on their explicit folk theories (such as, “the longer you study, the more you will learn”). So the dissociation in question still provides no reason to think that distinct mechanisms are involved in the first-person from those involved in the third.

  • Peter Carruthers

    Since I gather that this forum is now closed to further contributions, let me thank all those who have taken time out of their busy lives to comment on my work. I have enjoyed the exchange, and hope others have found it of some interest.

    In closing, however, I would also like to stress that in my view the debate about the character of self-knowledge is (or should be) almost entirely empirical. Moreover, there are many, many bodies of data from across cognitive science, and many related theories (most of which have gone unmentioned here), that are relevant to the evaluation of the various competing accounts. The overall argument for the ISA theory takes the form of an inference to the best explanation. What matters, in the end, is not how well I can respond to some or other consideration or counterexample, but rather how well the competing theories fare in accommodating the totality of the evidence.
