Comments on: Knowledge of our own thoughts is just as interpretive as knowledge of the thoughts of others
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/
A project of the National Humanities Center

By: Jason King (Wed, 09 Nov 2011 17:47:58 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8527

This conversation, while ending here, continues on Facebook. Join us there by logging on to your Facebook account and proceeding to our group: On the Human.

By: Peter Carruthers (Wed, 09 Nov 2011 14:32:33 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8524

Since I gather that this forum is now closed to further contributions, let me thank all those who have taken time out of their busy lives to comment on my work. I have enjoyed the exchange, and hope others have found it of some interest.

In closing, however, I would also like to stress that in my view the debate about the character of self-knowledge is (or should be) almost entirely empirical. Moreover, there are many, many, bodies of data from across cognitive science, and many related theories (most of which have gone unmentioned here), that are relevant to the evaluation of the various competing accounts. The overall form of argument for the ISA theory takes the form of an inference to the best explanation. What matters, in the end, is not how well I can respond to some or other consideration or counterexample, but rather how well the competing theories fare in accommodating the totality of the evidence.

By: Peter Carruthers (Wed, 09 Nov 2011 14:27:43 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8523

Proust misstates my thesis in a small, but crucial, respect. I do not claim that all self-knowledge is propositional and based on interpretive mindreading. Rather, I claim that all self-knowledge of propositional attitudes is based on interpretive mindreading. (Or rather, I claim that almost all is; there are a couple of exceptions that have not been discussed in this forum.) So I am happy to allow that some forms of cognitive self-management rely on our sensitivity to epistemic cues like feelings of fluency or the amount of time spent studying, which more-or-less reliably signal the presence of a corresponding cognitive process. What I deny, though, is that these cues have metacognitive contents in the strict sense employed by psychologists, as involving "thoughts about thoughts". Genuine metacognition is involved only when these cues are interpreted by the mindreading faculty, issuing in a judgment that one is confident, say.

I also think that the procedural forms of self-management that Proust has in mind mostly don’t require metarepresentation, nor any contribution from the mindreading faculty. Rather, for example, a feeling of disfluency, or a feeling of low confidence, provides a direct motive to switch to a different form of processing. Although one can talk, here, of “monitoring and control”, the process is not really metacognitive in nature. (Or at least, not in the sense that concerns me – I am perfectly happy if Proust wants to describe these as forms of “procedural self-knowledge” that don’t involve mindreading or metarepresentation.)

I am well aware of the dissociation between procedurally-based metacognition and theory-based prediction that Proust cites. And I reply as she predicts that I will: these are cases in which opportunities for learning have been offered to the mindreading faculty in the first-person that were not available in the third. She retorts, in turn, that subjects in the first-person condition, while making roughly accurate metacognitive judgments about their learning, fail to notice the inverse correlation between study time and accuracy that they actually employ as a cue. But nor would they, if the relevant generalization were learned by the mindreading faculty, whose internal contents are not normally globally accessible. In the third-person case, in contrast, the mindreading faculty has been given no opportunity for relevant learning, so people have no option but to fall back on their explicit folk theories (such as, “the longer you study, the more you will learn”). So the dissociation in question still provides no reason to think that distinct mechanisms are involved in the first-person from those involved in the third.

By: Peter Carruthers (Wed, 09 Nov 2011 14:22:55 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8521

I will reply to Rey's second set of comments using his own numbering system.

(i) Rey is correct that the difference between his TAGS view and the ISA theory is not whether interpretation using the resources of the mindreading system is EVER the basis for spontaneous and seemingly-introspective self-attribution. Given the data, he has little option but to allow that it often is. Rather, what is at stake is whether self-attribution of attitudes ALWAYS employs mindreading-based interpretation. This is what ISA claims, and TAGS denies. As a result, Rey is forced to embrace a dual-method theory of self-knowledge. Cases that are alike in seeming, phenomenologically, to be introspective will sometimes result from self-directed mindreading and sometimes from accessing an appropriate set of attitude-identifying tags. What this means is that Rey assumes an extra explanatory burden. Since both accounts allow that seemingly-introspective self-knowledge sometimes results from self-directed mindreading, why should we postulate the existence of a second mechanism at all? For in general (and all else being equal) simpler theories, which postulate just a single mechanism (as does ISA), are preferable to those that postulate the existence of something else as well.

Indeed, in proposing his TAGS account, Rey must assume a number of additional burdens. We need to be told more about the functions of these tags, and their roles within an overall cognitive architecture. Carruthers (2011) explores a number of ways in which the theory might be developed, showing that there are both theoretical and empirical difficulties specific to each. But all have problems accommodating the extensive empirical literature on source-monitoring (Mitchell and Johnson, 2000), as well as the close correspondence between sensory-based working memory abilities and fluid general intelligence, or g (Colom et al., 2004).

(ii) Rey challenges me to account for the distinctive reliability of self-attributions of attitudes in meditative cases, when external behavioral and contextual cues are unavailable. But in the first place, we have no evidence that we are especially reliable in such cases. The few instances in which we get up to phone an old friend or whatever (having just self-attributed a decision to do so, say) don't begin to establish any sort of general reliability. And there are real problems even thinking about how one might get an empirical handle on the question, given that most thoughts in meditative cases, when one's mind is running in so-called "default mode", are completely forgotten and have no lasting impact on one's mental or behavioral life.

Moreover, it is far from clear what an appropriate comparison between first and third person should look like for meditative cases. Certainly one should not compare reliability in first-person meditative cases with reliability in third-person meditative cases! For of course in the latter one has virtually no evidence to go on at all (nor will one generally attribute any attitudes). It is also problematic to compare first-person meditative cases with third-person cases equated for the amount of evidence that is available in each. For it is far from clear how to compare one form of evidence with another here, or to judge how each form should be evidentially weighted. (I shall discuss the specific examples that Rey provides in (iii) below.) I guess what Rey is most likely to have in mind is a comparison between the degree of reliability in first-person meditative cases and the reliability of ALL intuitively-immediate third-person attributions. Not only do we have no real evidence pertaining to the former (as I pointed out above), but we have no real evidence pertaining to the latter either, beyond the platitude that we seem to be pretty good at it. It is doubtful that any further intuitions that one might have at this point can bear any argumentative weight.

Rey also claims that it is hard for ISA to explain how we so readily self-attribute attitudes in meditative cases, given that the imagery we experience is not purposive, and hence not subject to the same sorts of means–ends explanations that the mindreading system uses when attributing attitudes to other people. But Rey’s assumption of purposelessness betrays his tacit Cartesianism here. Granted, we are generally not aware of any purpose or goal when, as we say, we “allow our thoughts to drift.” But it only follows that there are no purposes or goals present if one assumes that such states are always conscious. In fact there is every reason to think that our “drifting thoughts” will be generated from interactions among a variety of purposes and goals (many of them quite fleeting), serving to direct our attention in such a way as to activate one sort of visual representation rather than another, or to issue in one item of inner speech rather than another.

Moreover, a little reflection is sufficient to show that much interpretation of the behavior of other people takes place without us being overtly aware of their goals. Think, for example, of casual gossip over coffee with someone one knows well. One will generally have little conscious awareness of the goals that lie behind each utterance, yet we nevertheless manage to hear specific types of attitude as being expressed.

(iii) Rey points out (quite rightly) that in my reply to his earlier comment I had discussed how particular contents can be reliably self-attributed in meditative cases, but had failed to discuss how particular attitudes to those contents are likewise self-attributed. He mentions, in particular, a number of different emotional attitudes (resentment, guilt, shame, irritation, fear). In fact there is a range of potential sources of information available to one here. The most familiar idea is that emotions might come paired with distinctive interoceptive experiences, or so-called "gut feelings". I am inclined, myself, not to place much weight on this source of evidence in my account. This is partly because it is unclear from the current literature whether there are patterns of somatic change distinctive of the various emotions, and partly because people seem not to be very good at identifying them even if there are (at least in explicit tasks). But we also know that distinct emotions come paired with distinctive motor outputs, especially for facial expressions and bodily postures. And we know, likewise, that even fleeting forms of emotion issue in micro-changes in the facial musculature. So there is likely to be proprioceptive sensory information available to the mindreading system to aid in emotion identification, even in meditative cases. Moreover, if emotionally salient stimuli (whether externally perceived or self-generated in the form of an image) are processed in the way that other such stimuli are, then one might expect that emotion-specific appraisal concepts like WRONG or THREATENING would get bound into the sensory representations in question. (Compare the way in which concepts like CAR or PERSON are bound into the contents of perception, leading us to see something AS a car or AS a person.) If so, then reading off the emotion-type from one's sensory experience will be almost as trivially easy as recognizing that one is imagining a car rather than a person.

(iv) Rey claims that his appeal to inattention can explain the very same patterning of confabulations versus veridical self-reports that is accounted for by ISA. Oversimplifying somewhat, one can say that the pattern is this. When subjects are provided with sensory cues of the kind that would be likely to mislead them when attributing an attitude to a third party, they go awry; in otherwise parallel cases when they are not provided with such cues, they do not. Evidence of confabulation has been found for a range of propositional attitudes using a wide variety of experimental paradigms. It seems, then, that if Rey is not to advance post hoc explanations of the data on a piecemeal basis, he will need to claim that what all these paradigms have in common is that they somehow serve to distract subjects' attention from their own tags (which would otherwise enable direct and reliable self-report). This is beginning to sound quite a lot like traditional defenses of such things as fairies that live at the bottom of the garden: the fairies are there, but they disappear as soon as you look for them. Likewise here: the tags are there (and can ground introspective self-knowledge), but they fail to be attended to as soon as experimenters look for evidence of them.

References

Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., and Kyllonen, P. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32, 277-296.

Mitchell, K. and Johnson, M. (2000). Source monitoring: Attributing mental experiences. In E. Tulving and F. Craik (eds.), The Oxford Handbook of Memory, Oxford University Press.

By: Joëlle Proust (Tue, 08 Nov 2011 11:14:46 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8508

Peter Carruthers (2011) presents experimental evidence in favor of the strong claim that every form of self-knowledge is propositional, based on self-attribution and mindreading, and of an indirect, interpretive kind. My objection is not that this claim is wrong, but only that it is too strong. Contrary to what his essay above claims, a dual account of thinking about one's thinking is able to offer "a principled way of accounting of the circumstances in which people access their thoughts directly, or rely, rather on self-directed mindreading".

My arguments will draw on the processes engaged in metacognition, i.e., the assessment of one's own cognitive success in a given task. Experimental studies suggest that two systems are involved in human metacognition. One of them is based on feelings of fluency (or ease of processing), which apply to the structural aspects of a perceptual, conceptual, memorial, or reasoning task, independently of the particular mental contents involved in it. This system offers direct access to one's mental capacities: it does not require representing that one has attitudes with certain contents. It merely requires representing the task and one's uncertainty about whether it can be performed correctly. The function of these feelings thus seems to be to guide decision making in a way that is task-dependent, affect-based, and motivational. Mindreading-based metacognition, on the other hand, assesses cognitive dispositions on the basis of a naive theory of the first-order task and of the competences it engages. In contrast with the former, the latter requires representing both the relevant propositional attitudes and their contents.

The existence of a fluency-based system suggests, pace the ISA theory, that not every form of access to one's attitudes is interpretive. There is a form of "procedural" management of one's attitudes that is based on an affective form of epistemic assessment and experience, which associates a positive or negative evaluation with a given attitude, without needing to represent it as an attitude. Furthermore, the patterning of the data is now better understood. Conditions such as divided attention, low motivation, low personal relevance, and elated mood favor cognitive assessment based on fluency. In contrast, when task motivation and personal relevance are high, when mood is bad, and when indications are given to the subject that fluent experiences may be attributed to environmental influence, analytic, theory-based metacognition steps in.

Moreover, there is also a clear behavioral dissociation between procedural metacognition and theory-based prediction. Subjects may offer radically different assessments of cognitive dispositions in self and others depending on whether they have been engaged in a task or assess success in a detached way. In certain cases, an engaged judgment is more reliable. In Koriat & Ackerman (2010), judgments of learning based on the subjects' own experience of a given task (self-paced learning of pairs of items) correctly use ease of processing as a cue: the longer you study a pair, the less likely it is that you will remember it. This cue is not used in trials where a yoked participant merely observes another perform the task. In these cases a naïve, but wrong, theory is used, according to which the longer you choose to study items, the better you will remember them. In other studies, however, fluency-based judgments lead to incorrect predictions, whereas attribution-based predictions are reliable. For example, subjects rate their memory for childhood better when their task is to recall six rather than twelve childhood events. A yoked participant, however, would correlate memory rating with the number of events retrieved (Schwarz, 2004).

Peter Carruthers might object that a subject, when engaged in a metacognitive task, has access to evidence that she fails to have when she is merely observing another agent. Thus it is to be expected, in ISA terms, that the validity of the self-evaluations should differ in the two cases. In response to this objection, note that the participants in the engaged condition are unaware of using an effort heuristic. None of them reports, after the experiment, having based their judgments of learning on an inverse relation between study time and learning. A natural explanation for the dissociation discussed above is that procedural metacognition and mental attribution engage two different types of mechanisms. Engaging in a cognitive task with metacognitive demands allows the agent to extract "activity-dependent" predictive cues, i.e., implicit associative heuristics that are formed as a result of the active, self-monitoring engagement in the task. Predicting success in a disengaged way, in contrast, calls forth conscious theoretical beliefs about what predicts success in the task. This type of contrast between implicit heuristics and explicit theorizing, and between engaged and detached assessment, seems to support the view that one's knowledge of one's own thoughts is different in kind from one's knowledge of the thoughts of other people.

References

Koriat, A. and Ackerman, R. (2010). Metacognition and mindreading: Judgments of learning for self and other during self-paced study. Consciousness and Cognition, 19, 251-264.

Schwarz, N. (2004). Metacognitive experiences in consumer judgment and decision making. Journal of Consumer Psychology, 14, 332-348.

By: Georges Rey (Mon, 07 Nov 2011 22:39:19 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8507

In his reply to my comment (above), Carruthers misses its main points:

(i) As I stressed, the difference between ISA and TAGS is not over whether introspection involves interpretation, or even whether it uses the same mind-reading system and interpretative principles. It may often do so. In fixing their beliefs about anything, I presume people can and often do use whatever evidence and principles they can get their minds on. The question is only whether in their own case they *also* have access to non-sensory data, viz., internal tags, unavailable in the case of others.

(ii) Nor is the issue one of relative "hesitation." I said nothing about this, and presume that some introspections on the basis of tags might be as perplexed and hesitant as introspections about anything else. Indeed, as I also stressed, attribution of *anything to anything* can be inferential and/or interpretive, or it can be fast pattern matching, whereby (as I myself said) one unhesitatingly "sees" fear in another's face, or, for TAGS, one is immediately aware of it in the constellation of one's internal tags. The issue I pressed is not one of (un)hesitation, but of *reliability*: I see no reason to think that concurrent ascriptions of attitudes to oneself aren't on the whole significantly more reliable than ascriptions to others, as they appear to be. If they are, then ISA incurs a burden of explaining how they could be, especially in meditative cases in which behavioral and contextual evidence is exiguous, and in which, most importantly (a point to which Carruthers made no reply), the usual means-ends and other interpretative principles applicable to others are patently not applicable. To repeat, inner imagery is not *purposive* in the way outer behavior typically is. Even if there is interpretation here, it will have to be based in part on different "principles."

(iii) The problem in these meditative cases is not merely one of semantic ambiguities in inner speech, which may or may not be resolved by the concepts and associations that flood one's mind in meditative cases (speaking for myself, I wouldn't count on it). The problem for ISA is how one is to determine which *attitude* one bears to even disambiguated inner speech. I lie there insomniac, the image of an altercation with a neighbor suddenly filling my mind: how would that and other images alone provide a sufficient basis for me reliably to infer that I *resent* her remarks, or, alternatively, *feel guilty* or *ashamed* of my own; or perhaps merely *irritated* at the city for not having demarcated our properties? The imagery seems wholly compatible with all four and many other self-attributions, although it was and remains nevertheless perfectly clear to me which one was true.

As I emphasized at the end of my comment, what ISA would need to provide is some reason to think that somehow the same interpretive principles that we apply to others could, when applied to inner imagery, issue in self-ascriptions that are as reliable as they appear to be. Yes, I suppose "one can hear oneself as expressing any number of attitudes as a result of interpreting one's own inner speech," but is there a shred of reason to believe one's inner speech is as *reliably* articulate in this way as the speech of others, or that this is really the way people *reliably* learn of their own concurrent fear (do they hear themselves say "I'm frightened," and so infer that they're afraid? Not that I've heard!).

(iv) I appealed to the inattention phenomenon in vision deliberately to provide a non-ad hoc account of the confabulation data. Indeed, surely it would be surprising if inattention didn't occur in self-ascription, whether it is based on tags or on SBC data alone, and for precisely the sorts of reasons that ISA adduces: the person is distracted or misled by wishes, expectations, or other SBC data. TAGS therefore explains precisely the same "pattern of failures" as ISA; the confabulation cases won't serve as critical data for deciding between the two accounts. As with establishing any special competence, whether it be vision or introspection, what one needs is an explanation not of failures, but of special *success*, of the sort that appears to be exhibited most clearly in the meditative cases.


By: Peter Carruthers (Sat, 05 Nov 2011 21:09:17 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8491

Fuller raises an interesting question concerning the individuation of mental faculties. He suggests that even if the ISA account is correct that self-attributions of mental states are just as interpretive and sensory-based as attributions of mental states to others, there may be distinct mental faculties underlying competence in the two domains.

Fuller says that one reason for believing in two distinct faculties is that the inputs are significantly different in the two cases. He exaggerates the difference, however. Although visual imagery and inner speech are available in the first person but never in the third, these should not be counted as inputs distinct from the vision and hearing that guide our attributions of mental states to others. The difference lies merely in their causes (endogenous versus exogenous) not in their representational content or the mechanisms that realize them. (In fact we know that imagery and perception share the same mechanisms.) But Fuller is correct to point out, nevertheless, that self-attributions can also utilize proprioceptive and interoceptive experiences that are never available to us in the third-person.

I think it is implausible to claim, however, that we have reason to postulate distinct faculties whenever inputs are of different kinds. More important, to my mind, is the DOMAIN of the faculty, which concerns its output, not its input. For whenever faculties deal with abstract, amodal matters, they are likely to receive inputs of many kinds. Consider numerosity, for example, which most animals as well as human infants can compute. The numerosity system takes inputs of many sorts, enabling one to judge the approximate numerosity of a visually presented set of items, a sequence of sounds, or a proprioceptively presented sequence of bodily movements like finger-taps. This provides no grounds for saying that there are really three or more distinct numerosity faculties. On the contrary, what matters is that the system computes representations from the same domain (number), and that it is realized in a common brain network. We have just the same reason to believe that there is a single system that computes mental states for self and other. The domain is the same. And although the evidence is not as extensive or as clean as one might like, it appears that the very same network of brain regions is involved whether we attribute mental states to others or to ourselves (Ochsner et al., 2004; Chua et al., 2006, 2009; Lombardo et al., 2010).

Fuller also suggests, however, that the processing employed for self and other is quite different, and cites the fact that we seem to be immune to error through misidentification – although we can be mistaken about what mental states we are undergoing, it seems we cannot be mistaken about WHO is undergoing those states. Fuller says that since the ISA account does not predict this result, we therefore have reason to think that a distinct mental faculty might be involved in the first-person case. But in fact ISA does predict that we cannot err through misidentification in connection with sensory and imagistic states. This is because they are presented to the mindreading system in ways that the sensory and imagistic states of other people never could be. But ISA claims that it is false that we are immune to error through misidentification for attitude states, since these can be self-attributed on the basis of one’s own circumstances and behavior, just as the attitudes of others can be. In such cases there will always be a substantive (and potentially mistaken) assumption made, namely that the circumstances and behavior in question are one’s own. (However, the ISA account predicts that errors of this kind should be rare, for obvious reasons.)

Consider the phenomenon of “thought insertion” in schizophrenia. A subject might believe that Oprah is telling him to kill his mother, for example, mistaking his own inner speech for the speech of another person. He is right that SOMEONE is urging him to kill his mother, but he is mistaken about WHO is urging him to do that. These seem like clear cases of error through misidentification. And note that schizophrenia is strongly associated with difficulties in third-person mindreading as well (Brune, 2005; Sprong et al., 2007). This is, at least, consistent with the ISA account.

The fact that most philosophers believe that immunity from error through misidentification is obviously true of attitudes as well as experiences is actually just another manifestation of the Cartesian intuition. Since people think that our own attitudes are presented to us in ways that the attitudes of other people never could be, of course they will intuitively believe that there is no question of misidentifying the bearer of those attitudes.

References

Brune, M. (2005). “Theory of mind” in schizophrenia: A review of the literature. Schizophrenia Bulletin, 31, 21-42.

Chua, E., Schacter, D., Rand-Giovannetti, E., and Sperling, R. (2006). Understanding metamemory: Neural correlates of the cognitive process and subjective level of confidence in recognition memory. NeuroImage, 29, 1150-1160.

Chua, E., Schacter, D., and Sperling, R. (2009). Neural correlates of metamemory: A comparison of feeling-of-knowing and retrospective confidence judgments. Journal of Cognitive Neuroscience, 21, 1751-1765.

Lombardo, M., Chakrabarti, B., Bullmore, E., Wheelwright, S., Sadek, S., Suckling, J., MRC AIMS Consortium, and Baron-Cohen, S. (2010). Shared neural circuits for mentalizing about the self and others. Journal of Cognitive Neuroscience, 22, 1623-1635.

Ochsner, K., Knierim, K., Ludlow, D., Hanelin, J., Ramachandran, T., Glover, G., and Mackey, S. (2004). Reflecting upon feelings: An fMRI study of neural systems supporting the attribution of emotion to self and other. Journal of Cognitive Neuroscience, 16, 1746-1772.

Sprong, M., Schothorst, P., Vos, E., Hox, J., and Van Engeland, H. (2007). Theory of mind in schizophrenia: Meta-analysis. British Journal of Psychiatry, 191, 5-13.

By: Tim Fuller (Fri, 04 Nov 2011 20:39:29 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8483

I am going to distinguish between two importantly different claims Carruthers makes in his challenging and original contribution to our understanding of the nature of self-knowledge. The first claim is that there are significant similarities between knowledge of our own mental lives and knowledge of the mental lives of others, including a shared fallible, non-privileged, and "interpretive" nature. The second claim is that the very same "mind-reading" faculty underwrites attributions of psychological states to both oneself and to others.

These two claims are separable: e.g., there might be two distinct psychological faculties — one that underwrites self-knowledge and another that underwrites other-knowledge — that share the interpretive and fallible nature that Carruthers highlights but that differ in other respects. Or there might be non-overlapping collections of cognitive faculties that underwrite self-knowledge and other-knowledge, all of whose processing can plausibly be construed as similar forms of interpretation and inference. My focus in this comment is on considerations that speak against Carruthers’ second claim, though I will return to the first claim briefly at the end.

Carruthers' claim that a single psychological faculty is responsible for all of our knowledge of mental states depends crucially on how psychological faculties are appropriately individuated. Two criteria that have been central to faculty individuation in psychology are (a) the class of informational inputs a faculty processes and (b) the rules that govern a faculty's processing. Factors (a) and (b) are not the only criteria for faculty individuation, but they are important.

Consider first the informational inputs to the cognitive processes underlying attributions of mental states to oneself and others. Carruthers allows that unique inputs, in some sense, are associated with self-knowledge, including visual imagery and inner speech. Elsewhere Carruthers (2011) highlights that self-attributions also rely, to a unique extent, on proprioceptive inputs (i.e., information concerning one’s bodily position and movements) and interoceptive inputs (i.e., information concerning one’s internal states, such as organ pain or hunger). Thus, the sets of inputs to the cognitive processes underlying self-knowledge and other-knowledge are in some significant respects disjoint.

Further, the similarities between inputs that do obtain are fairly weak. Carruthers classifies the inputs together only under the broad rubric of “sensory”. Further, he claims such inputs are “globally broadcast,” which means they are available to (and presumably processed by) many different kinds of cognitive faculties. In short, considerations relating to criterion (a) don’t discernibly recommend positing a single mind-reading faculty that underwrites knowledge of one’s own and others’ mental states.

Consider next the processing rules that govern self-attributions and other-attributions of mental states. Carruthers claims that we have evidence that these processes are highly similar and interpretive in character because we “mak[e] errors in self-attribution that directly parallel the errors that we make in attributing thoughts to other people.” This generalization might seem warranted given dissonance studies and studies on confabulation.

But even granting the interpretation of these studies that Carruthers advocates does not address a certain class of errors that many philosophers, going back to Wittgenstein, have claimed plagues other-attributions but not self-attributions, namely "errors of misidentification". As an example of a misidentification error, consider attributing to your friend the belief that she is hungry based on seeing someone touching her stomach while complaining that she needs food. In this case, an identification error could arise if you misidentified someone else as your friend because of that person's similar voice and appearance.

Misidentification errors of this kind are not, on the face of it, precisely paralleled in self-attributions. Misidentifying other people and misattributing beliefs to them is not uncommon, but few of us have ever had occasion to say, “I believed I was hungry and indeed *someone* believed that she was hungry, but in fact it was not I who believed I was hungry”. Perhaps such errors of misidentification in self-attribution are possible in exotic and rare circumstances, but self-attributions such as “I believe I am hungry” and the like do not typically appear prone to such errors. This apparent difference is not predicted by Carruthers’s ISA account and instead provides prima facie reason to differentiate the kinds of cognitive processes underlying attributions of mental states to oneself from the kinds of cognitive processes underlying attributions of mental states to others.

Another potential processing difference concerns inner and outer speech interpretation, which Carruthers claims are components, respectively, of self-knowledge and other-knowledge. A prima facie difference between these processes, which Carruthers attempts to explain away, is that we can be consciously aware of ambiguities and interpretive processes for outer but not inner speech. Nevertheless, Carruthers claims that a first pass, unconscious interpretative process occurs in both cases. Further, he holds that inner speech interpretation has greater access to salient contextual features for resolving ambiguities and that a unitary mindreading faculty models everyone as having transparent access to their own mind. Thus, Carruthers claims, inner and outer speech are interpreted by the same kinds of cognitive process.

Even adopting these hypotheses on the nature of inner speech interpretation and transparency rules in the mindreading faculty, however, does not appear to answer the strongest form of this criticism, viz. that when interpreting inner speech, unlike when interpreting outer speech, we *cannot* call the interpretive processing to conscious attention. The "short-circuiting" Carruthers posits in this instance might instead be construed as a significant difference in the cognitive architecture underlying the interpretive processes for inner and outer speech. Further, the short-circuiting role that Carruthers attributes to the transparency model inherent in the mindreading faculty appears ad hoc in the case of inner speech interpretation, since it does not appear to short-circuit higher-order conscious reasoning for self-attribution of mental states when such attributive processing takes into account possible cases of self-deception, confabulation, and the like.

Ultimately, the full range of similarities and differences between cognitive processes, and additional factors such as the evolutionary history of the relevant cognitive structures, should be considered when classifying psychological faculties. I have not attempted to make an overall assessment here, but have instead highlighted potential differences that support positing one or more distinct cognitive faculties associated with attributing mental states to ourselves and others.

Even if my criticisms above of a single mindreading faculty warrant positing multiple faculties, however, they do not directly undercut Carruthers’s first claim that self-knowledge is indirect, interpretive, and highly fallible. Carruthers may still have grounds for criticizing many projects in traditional epistemology and for drawing out implications for our conception of ourselves as moral agents. However, if Carruthers is mistaken in positing a single mindreading faculty, we might expect there to be proprietary kinds of inferences and interpretation involved in self-knowledge as opposed to our knowledge of the minds of others.

By: Peter Carruthers (Fri, 04 Nov 2011 19:09:34 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8482

Murphy is doubtful of my claim that some form of Cartesian epistemology of the self is a cultural universal. He points out that the folk will commonly say things like, "Is that what you really believe / want?" He takes this to suggest that they are open to the possibility of error. I am doubtful. For in all such cases I think the question could appropriately be glossed, "Or are you just saying that?" The challenge concerns the literalness and sincerity of an assertion, rather than the existence of self-knowledge.

In writing Carruthers (2011) I cast my net as widely as I could before making a (tentative) claim about cultural universality. I considered major figures in the history of Western philosophy (ancient, modern, and contemporary), and I consulted experts in the philosophies of Ancient China and of the Indian subcontinent. The general finding was that all are consistently Cartesian (to some significant degree) about the nature of self-knowledge. It seems that whenever people have reflected explicitly about their knowledge of their own mental states, they have been led to embrace some kind of Cartesian view. And it is a striking fact that although many cultures have had robust skeptical traditions regarding the possibility of knowledge of the physical world, none seems to have contained skeptics about our knowledge of our own mental states.

Murphy, like Fernandez, does not find the dissonance data completely compelling. He is right. No set of data is ever sufficient by itself to establish the truth of a theory. I claim only that the ISA account provides the best explanation of those data, even when the debate is construed narrowly (that is, without considering all the other evidence that I muster in support of the theory). It is important to recall some of the details of the phenomenon. People only alter what they say in circumstances where they have been made to feel bad. But they do not do so in advance of being asked what they think. (We know this, recall, because they will also utter other dissonance-reducing confabulations when given the chance, only making a change in response to the first-presented option.) So Murphy’s hypothesis must be that a question about one’s belief in the presence of negative affect (and not just any negative affect, but affect that has been caused in the right way in the right circumstances) causes the eradication of one belief and the formation of a new one, which is then accurately (perhaps authoritatively) reported. There might, indeed, be some mechanism that achieves this. But I have no idea what it might be or how it might work; and Murphy admits that he doesn’t either. As a result, someone committed to Cartesian epistemology cannot yet EXPLAIN the data. They have merely postulated a something — we know not what — that might explain it. In contrast, the ISA account genuinely can provide such an explanation, appealing to well-established accounts of human affectively-based decision making of the sort developed by Damasio and others. By my lights, a good explanation beats a non-explanation every time.

Finally, Murphy argues that other systems are involved in reporting our beliefs. Of course this is true (and something that I emphasize). But a report of a belief is an action. It is not an item of knowledge. Knowledge is a cognitive state, not a performance. So Murphy faces a dilemma. Either self-knowledge of the belief expressed occurs first, prior to its articulation in speech, in which case we are owed some account of how that happens; or self-knowledge occurs subsequent to hearing and interpreting our own verbal or other report. The latter will, on any account, involve the work of the mindreading faculty. So in the absence of mechanisms that can get access to, and form beliefs about, our beliefs directly, all self-knowledge must, perforce, be "channeled" through the mindreading faculty, as I claim.

By: Peter Carruthers (Fri, 04 Nov 2011 18:13:18 +0000)
https://nationalhumanitiescenter.org/on-the-human/2011/10/knowledge-of-our-own-thoughts/comment-page-1/#comment-8481

Rey claims that there is a special set of tags attached to our attitude events that allows us to know of them in our own case. He says that this account agrees with my Interpretive Sensory-Access (ISA) theory about the interpretive component, disagreeing only about the need for sensory access. But this is a mistake. The ISA theory does not merely claim (as does the TAGS model) that our access to our own attitudes is inferential, relying on subpersonal computations or inferences of one sort or another. (This is something that inner sense theorists, too, will accept.) Rather, the claim is that our access is interpretive in the same sense that our access to the attitudes of other people is interpretive — relying on the same mindreading system and the same set of tacit interpretive principles ("seeing leads to knowing", "choice reflects preference", and so on). This, of course, TAGS is committed to denying.

Rey argues that a good reason to believe in TAGS (and to reject the ISA account) derives from the immense range of attitude events that we unhesitatingly attribute to ourselves, even in cases where behavioral and contextual data are unavailable (Rey calls these “meditative cases”). But this, too, is a mistake. Think of the range of attitudes one can unhesitatingly hear a friend as expressing while talking with her on the telephone (where behavioral and contextual cues other than those of the previous discourse are likewise unavailable). One can HEAR someone as fearing, expecting, doubting, judging, hoping, supposing, and so on, as a result of the activity of one’s mindreading system working in collaboration with the language faculty. Likewise, then, in meditative cases: one can hear oneself as expressing any number of attitudes as a result of interpreting one’s own inner speech; and in these cases one will additionally have the benefit of one’s own visual imagery, affective feelings, and so on, to assist in interpretation.

It is an interesting question why we should never (or only very rarely) become aware of the ambiguities in our own inner speech, in the way that we are often aware of ambiguities in the speech of others (causing us to pause and reflect). In Carruthers (2011) I detail a number of independent but interacting strands of explanation. But the simplest to explain is as follows. One of the standard principles employed in speech comprehension is the relative accessibility of various concepts and structures. For example, if we have just been talking about finances, I shall automatically interpret your utterance, “Well, now I need to go to the bank” as about a financial institution, and not the bank of a river. This is because the previous context renders one concept more accessible than the other. But note that the thoughts that initiate an episode of inner speech will involve activations of some specific set of concepts. When the resulting representation of a heard sentence is processed by the language comprehension system, those concepts will perforce be readily accessible, making a correct interpretation highly likely.

Rey also claims that his TAGS account can easily explain all the instances of confabulation about current attitudes that I cite as evidence favoring the ISA theory. This is true in the very weak sense that his account is consistent with the evidence. But a theory can be rendered consistent with any body of evidence by selecting suitable auxiliary assumptions. What needs to be shown is that the assumptions are at least principled and at best ones that we have independent reason to accept. Moreover, the resulting account needs to be able to predict, not just that confabulation should sometimes occur, but the specific patterning that we find in the confabulation data. This is what the ISA theory does. It claims that people should make mistakes about their own attitudes whenever they are provided with cues of the sort that would lead them to make mistakes about others (which is just what we find); and the auxiliary assumptions needed to generate predictions of the data are in every case well-established findings or mechanisms. The TAGS account, in contrast, can only accommodate the data post hoc, by appealing to lapses of attention, or “repression”, or whatever.

Contra Rey, what the ISA account requires for support (or rather, for this strand of support; there are many others) is not “failure across the board”. Rather, it is a pattern of failures predicted by the ISA account, and only by the ISA account. If self-knowledge results from turning our mindreading capacities on ourselves, then we should expect errors whenever cues are provided of the sort that would mislead us when attributing mental states to others. This is just what we find. No other theory makes such a prediction — certainly not TAGS. Nor is what is at stake well characterized as one of competence versus performance. For both the ISA and TAGS models agree that we possess competence for ascribing mental states to ourselves, often issuing in self-knowledge. What is at stake is the nature of that competence, not its existence.
