Comments on: Control: Conscious and Otherwise
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/
a project of the National Humanities Center
Mon, 13 Feb 2012 19:42:46 +0000

By: Phillip Barron
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1418
Sun, 06 Jun 2010 16:12:28 +0000

This conversation, while ending here, continues on Facebook. Join us there by logging on to your Facebook account and proceeding to our group: http://bit.ly/OnTheHumanFacebook.

By: Christopher Suhler and Patricia Churchland
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1417
Sun, 06 Jun 2010 07:43:11 +0000

Linus Huang makes two important points in his reply. The first is that our best available scientific understanding of the mind suggests that the conscious-nonconscious distinction is not anywhere near as neat as our folk psychological theories would have us believe. (Charles Wolverton, in his reply, expresses a similar doubt about the meaningfulness of the distinction between conscious and nonconscious influences on behavior.) This is indeed a point that was lingering in the background of our article, and it is one which we are seeking to develop more explicitly in our current work. In particular, as described in our earlier reply, we aim to argue that the agent should be viewed as encompassing both conscious and nonconscious aspects of cognition (and the gradations between them).

Huang’s second point concerns what implications accepting a picture of the agent along these lines would have. The first possibility he sets out is to eliminate the requirement of conscious endorsement for agency altogether and say that nonconscious endorsement alone can be enough. The second possibility, which Huang, Levy, and others favor, is to say that if Baars’s conceptualization of consciousness as a global workspace is correct, there is still reason to regard conscious endorsement as special. The reason for this special status, in a nutshell, is that conscious reflection allows for interaction between a wider range of inputs (both conscious and nonconscious), rendering the resulting action more reflective of the agent.

This brings us to Levy’s most recent reply, in which he reiterates his concern that research guided by the cognitive scientific conception of control (“controlled processes”) does not really bear on the philosophical conception of control (“freedom-level control”). Levy holds that “consciousness is a global workspace, whereby subpersonal mechanisms communicate with one another such that only actions that have consciousness in their causal history (very recent causal history in all cases in which the action is not habitual) are fully expressive of the agent”. He characterizes this claim as an empirical one, and the first part of it (regarding whether consciousness is a global workspace) no doubt is. But the second part (regarding what constitutes something’s being “fully expressive of the agent”) is much more an assertion of allegiance to a particular philosophical account of agency than an empirical claim. Indeed, it is quite unclear how one would even go about empirically evaluating the claim that “only actions that have consciousness in their causal history… are fully expressive of the agent”.

One empirical claim that Levy may be making is that from the perspective of “freedom-level control”, having conscious processes involved is always superior. But unless one takes this as merely a matter of definition, we see little reason to accept it. This leads us back to our original point of disagreement with Levy, who seems to regard work on controlled processes as having little bearing on what we consider necessary for “freedom-level control”. The burden of our argument in the paper under discussion is precisely that research from psychology, neuroscience, evolutionary biology, and other fields provides reason to think that nonconscious processes are capable of supporting the species of control required for moral agency and responsibility – Levy’s “freedom-level control”.

Moreover, even if the global workspace story is basically correct, that does not mean that nonconscious processes are completely fragmented and local, lacking in coherent support from connections across the cortex (for more on brain connectivity, see Bullmore & Sporns, 2009). Nothing in the anatomy implies that nonconscious processes are not quite highly integrated and coherent, and the behavioral data on the sophistication of these processes suggest that blanket claims about conscious processes being more highly integrated and coherent are unwarranted. To be sure, some conscious processes may be widely integrated, and some nonconscious processes (e.g., in the retina) may not involve widespread activity, but it is quite probable that some nonconscious processes, such as those involved in social cognition, intellectual tasks (e.g., giving a lecture, rapidly exchanging arguments and counterarguments with a philosophical colleague), and skills (e.g., playing basketball or hockey), are very highly integrated indeed.

A further point worth briefly noting is that the utility of bringing nonconscious inputs into the conscious global workspace is not unqualified – there are many situations in which nonconscious processes are superior to or enjoy primacy over conscious ones. The case of skilled actions, discussed in our paper, is one clear example, but the superiority of nonconscious processes can extend to contexts where identifiable skills/habits are not operative (e.g., Dijksterhuis et al., 2006).

As noted earlier, Levy may be presenting the requirement of conscious involvement for “freedom-level control” as a matter of definition. But if having “freedom-level control” of a given action requires, across the board, substantial conscious involvement in close proximity to the initiation of that action, then it is hard, given the sophistication and utility of nonconscious processes operating without such conscious involvement, to see why anyone would want this sort of control.

Switching gears, we would like to thank Thomas Nadelhoffer for another insightful reply. Once more, we believe that his framing of the issues is spot-on: if one revises traditional notions of moral agency and control (whether along the lines we are suggesting or in some other way), then traditional notions of moral responsibility should shift with them. We are therefore quite willing to accept a shift in the notion of moral responsibility to accompany our proposed shift in the notions of control and agency. Merely endorsing such a shift is, of course, only a start, for as Nadelhoffer points out, the real work will be to develop a more detailed account of responsibility that makes appropriate distinctions between culpable and excusable actions, avoids a blanket “strict liability” policy wherein people are responsible for anything and everything they do, and addresses various other challenges. This task will be among those that occupy us in our future research. (We should mention that we share Nadelhoffer’s skepticism about moral desert and free will [at least in the agent-causal sense], so such notions, at least in their traditional forms, will be unlikely to appear in any account we develop.)

We wish to conclude by once more thanking Gary Comstock, Phillip Barron, and the others behind the scenes at OTH, as well as the terrific respondents to our article. We are under no illusions that the issues under discussion have been definitively resolved, but we hope that this forum has helped to advance the debate and that everyone involved has found it as enlightening as we have.

References:

Bullmore, E. & Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186-198.

Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005-1007.

By: Charles Wolverton
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1411
Thu, 03 Jun 2010 21:40:37 +0000

As we learn more and more about the various influences on human behavior (conscious or otherwise, although I am skeptical that this is a meaningful distinction), the scenario Prof Strawson hypothesizes seems to be less and less fantastic. This suggests to me that in time the criminal justice system might move from an MR-punishment paradigm to a biological/social malfunction-therapy paradigm, in which case the many interesting issues raised in the essay and comments would apply as considerations in defining a program of therapy rather than in the increasingly difficult (and arguably decreasingly meaningful) determination of MR. In such a system, “therapy” might still include incarceration or even capital punishment since the intent would be primarily to benefit society and only secondarily to benefit the offender – and in some instances removal from society might be the only effective “therapy” available at the time. On the other hand, it might also include more aggressive early intervention, perhaps more palatable for those considered at risk once the stigma of “immoral” has been eliminated from prospective anti-social behavior.

Should this come to pass, the (seemingly) slightly aggressive final sentence in Prof Strawson’s comment would translate into “But we can’t claim that [therapy has] anything more than a purely instrumental/pragmatic justification”, a rather benign observation.

By: Thomas Nadelhoffer
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1408
Wed, 02 Jun 2010 14:33:34 +0000

John,
Chandra Sripada has been doing some interesting work on what he calls the “Deep Self Model” of the attribution of agency and responsibility. More specifically, he has evidence suggesting that we care a great deal about the concordance (or lack thereof) between the public face people put forward and the boundaries of their deep self. Whether this view sheds light on how we think and talk about reasons remains to be seen. It seemed worth mentioning either way. Minimally, Sripada has provided us with a method of analysis–namely, structural equation modeling–that holds out the promise of enabling us to test the kind of view Daryl attributes to Velleman. The empirical element of this issue revolves around tracking the salient intuitions and judgments about reasons and their proper relationship with varying levels of the self. To see what he’s been working on, check out the following two posts:
http://experimentalphilosophy.typepad.com/experimental_philosophy/2010/02/telling-more-than-we-can-know-about-intentional-action.html
http://agencyandresponsibility.typepad.com/flickers-of-freedom/2010/05/more-on-manipulation.html

Patricia and Christopher,

Thanks for your illuminating reply. I just wanted to briefly recast my worry in the following way: You are ultimately arguing for a revisionist picture when it comes to how we should conceptualize control in light of the gathering data from the sciences of the mind. In short, you want to neurobiologize our traditional notions of control and agency so that they can accommodate the gathering data on the important etiological role played by non-conscious mental states and processes. That’s fine as far as it goes. However, the worry I raised in my earlier comments was that if one wants to revise our traditional notions of agency and control to accommodate the gathering data, then it is unclear why we would still cling to our traditional notions of moral responsibility.

Now, you are certainly correct that at this juncture we have a decision to make. One choice involves restricting our traditional notion of responsibility in light of our pared down notions of agency and control. This is the route taken in various ways by Doris, Harman, myself, and others. The other choice involves expanding our traditional notion of responsibility to cover some non-conscious mental states and processes. This seems to be the route you prefer. However, the real challenge for your view is to provide a revisionist account of moral agency and desert that enables you to distinguish the culpable from the excusable without itself depending on the traditional distinction between conscious intents, desires, beliefs, actions, reasons, etc. and their unconscious or non-conscious counterparts. Moreover, you will need to expand the notion of moral desert in such a way that it penetrates down to the realm of the non-conscious. As it stands, I don’t know what this kind of desert would look like.

Of course, I am admittedly skeptical of the notion of desert more generally, so perhaps the present case is simply a reflection of my more general worries! That being said, I nevertheless think that your expanded notion of desert is likely to be even more puzzling than the traditional notion of desert since your revised account will necessarily involve justifiably making people suffer for acting on reasons that were perhaps essentially beyond the reach of conscious control. Now, you could simply jettison the notion of desert altogether and focus instead on some other notion of moral responsibility. I, for one, think that would be much easier than trying to develop a notion of desert that’s up to the task. But, like I already said, I am a skeptic about both free will and moral desert, so it’s no surprise I would prefer you to follow me down the path toward skepticism even while you’re trying to provide agency and responsibility with a firmer footing when it comes to the gathering data on the nature of human cognition and action.

By: Neil Levy
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1401
Tue, 01 Jun 2010 02:09:07 +0000

Thanks for the reply, Chris. I don’t see where I suggested that unconscious mechanisms = subpersonal mechanisms as a definitional claim. I made an empirical claim: consciousness is a global workspace, whereby subpersonal mechanisms communicate with one another such that only actions that have consciousness in their causal history (very recent causal history in all cases in which the action is not habitual) are fully expressive of the agent. I just don’t see *any* evidence, either in your response or in the original article, that even begins to suggest that this empirical claim is false. Rather, your evidence bears on whether unconscious processes can be flexible and responsive to environmental cues. In short, I claimed that consciousness was necessary for freedom-level control, though not for controlled processes; you replied that consciousness was not necessary for freedom-level control and cited in defense of the claim the evidence that it is not necessary for controlled processes.

By: Ta Lun (Linus) Huang
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1400
Tue, 01 Jun 2010 00:50:49 +0000

CONSCIOUS ENDORSEMENT: WHAT’S SO SPECIAL ABOUT IT ANYWAY?

Is the way we think of conscious endorsement as special mistaken? Short answer: YES
Is there, after all, something special about conscious endorsement? Short answer: YES

I.
I would like to begin by thanking Gary Comstock for inviting me to participate in this discussion. I’ve enjoyed reading Chris and Pat’s insightful article and the thoughtful and helpful comments following it. I find something true on both sides. On the one hand, I agree with the authors’ point that conscious control is not always necessary for responsibility, and that non-conscious control, given certain conditions, can be quite sufficient for grounding responsibility. If the situationist literature shows only that our actions are often the result of non-conscious processes, it certainly does not threaten our responsibility practice. On the other hand, I also agree with the critics’ point. The situationist literature shows more than that: it demonstrates that our actions are often significantly influenced by causal processes that we cannot consciously, reflectively endorse. Here is where the challenge to responsibility lies: as long as we regard “conscious endorsement” as an important, if not necessary, condition for free action, the situationist literature shows that we lack certain important control over our actions.

II.
However, I do think the critics so far miss an important point Chris and Pat are trying to make in their article, namely: what is so significant about the distinction between conscious and non-conscious control? In light of the current dialogue, we can rephrase this point in terms of endorsement. So, the question: What is so special about conscious endorsement? Just to clarify, I have no doubt that our folk psychology attaches tremendous importance to this “conscious endorsement” criterion, and the critics rightfully assume it. However, the authors can still argue that it is time to “eliminate” such an unjustifiable criterion in light of our best scientific knowledge about the mind.
First, we can observe that the conscious endorsement criterion is probably rooted in an outdated, unscientific picture of the mind, namely, the Cartesian Theater model. According to this model, consciousness is where the “real self” resides; thus, what is endorsed consciously is endorsed by the “real self”. I suspect this is how most of the “oomph” of this criterion is derived. To be fair, no one really takes the Cartesian Theater model seriously anymore (among the critics at least). However, there is still the tendency to think of the brain as composed of a conscious system and a set of non-conscious systems, with the conscious system equipped with its own set of values, deliberation processes, perceptions, and control, not unlike a mini-agent in the brain, which one identifies as one’s real self. Consciously endorsing a causal process is like a pilot’s endorsing the autopilot device. “How else do I make the device and its actions MINE?”, one may ask.
I take it that this is exactly what Chris and Pat want to challenge: Our best scientific understanding of the mind does not support the above pictures; hence, the “conscious endorsement” criterion, however cherished by the folk, cannot be justified. Science tells us that our (access) conscious processes are rather like an empty stage waiting for a show to be put on (the Global Workspace theory). The non-conscious processes determine what information gets on stage to be made accessible to all other non-conscious processes. There is not a separate set of values, memory, and control that belongs to the conscious processes. The non-conscious processes determine what perceptual information enters the stage, what values are recalled, where the next step of deliberation goes, and what control the conscious process will exert. Given this picture, it is wrong to think that when we separate the conscious processes from the non-conscious ones, we will get anything remotely resembling an agent. So how can “conscious endorsement” be anything special, when what is endorsed consciously is itself determined by the non-conscious processes? Why is “unconscious endorsement”, whatever it is, not enough?
To put it another way, the agent cannot be identified only with the conscious processes in the brain, because the agent is the whole brain, conscious or not (if not also the body and part of the environment). If this is what our best science tells us, on what grounds can we begin to disown actions influenced significantly by OUR brain processes, even if they are not consciously endorsed? Maybe it is time to give up this criterion, based as it is on nothing but bad metaphysics?

III.
If the above argument seems at least somewhat convincing, Chris and Pat are right to focus on anatomical and physiological criteria of control, while ignoring, or remaining agnostic about, how these criteria fit into our conceptions of conscious and non-conscious control. However, I do think there is something special about “conscious endorsement”, even if its specialness cannot be grounded the way folks tend to think it can be. That is, I believe our best science does justify treating “conscious endorsement” as special. I will not be able to argue fully for this point in this short comment; however, let me at least point out how the argument could go.
First, what is consciously endorsed (usually) stands for the agent as a whole to a greater degree. I am partly following Neil Levy’s point here (also see his comment). Because the information in the global workspace is broadcast to various non-conscious processes, those processes’ reactions to this piece of information can be sent back into the workspace to be further negotiated and reconciled. What is consciously endorsed, especially after prolonged reflection, thus tends to reflect the agent’s various non-conscious processes, and hence the agent as a whole, to a greater degree. I agree with Jesse Prinz’s earlier point that what is at issue here is “reflective endorsement” (which entails conscious endorsement), rather than “mere conscious endorsement”: information that is merely conscious without being properly negotiated among all the non-conscious processes may not represent the agent any more than an insulated non-conscious response does.
Second, what is consciously endorsed tends to shape our non-conscious processes in the long term, and it acts in the short term as an expectation for ourselves and a commitment to others that we struggle to live up to. A sincere self-proclaimed egalitarian (even one who only consciously endorses this ideal privately) will “usually” take steps to eliminate his/her non-conscious biases, or at least prevent them from being expressed. If conscious endorsement can play such a significant role in our psychology and in our social interaction (that is, if it predicts what one is likely to become and how one is likely to behave in both the long and the short term), we are justified in taking the “conscious endorsement” criterion as special.*

In sum, I hope to have suggested that our best scientific understanding is compatible with our (somewhat qualified) folk conviction of the importance of conscious endorsement. We should let the distinction between conscious and non-conscious endorsement guide our further scientific and philosophical investigations into responsibility and free will. Unfortunately, this also implies that we still need to worry about the situationist challenge and what it means for the theory of agency.

*[Allow me to assume a just-so story here about the evolution of conscious processes (the second argument does not depend on it, but it would help boost the argument if true). It is quite plausible that our conscious processes emerged for the purpose of communication and cooperation in groups; what enters into conscious processing can be readily expressed in language or other means of communication, which in turn boosts our ability to cooperate with each other. Cooperation depends on (mostly) truthful communication, which poses a so-called “commitment problem”: how do we know the agent will do as he/she says? A psychological mechanism evolved or developed to (partly) help with this annoying problem: if the agent comes to have a desire to change him/herself in accordance with what he/she publicly endorses (in the long term), or at least to live up to it (in the short term), there will be less worry about the commitment problem, and we can continue cooperating happily with each other. Because our conscious processes have this evolutionary root, conscious endorsement is an internal version of public endorsement, and it is no wonder we feel the urge to shape ourselves and our behavior accordingly.]

By: John Fischer
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1399
Tue, 01 Jun 2010 00:27:06 +0000

Galen,

Thanks for the question. You ask whether it does not matter how an agent came to be the way he or she is, whether, that is, all that matters (for me [with respect to acting freely]) is whether the agent displays guidance control. My reply is that I have always argued for an essentially historicist conception of acting freely and moral responsibility. Guidance control is analyzed in terms of reasons-responsiveness and ownership, where ownership is explicitly a historical notion. Now, whether I have the history just right is certainly disputable; perhaps we would disagree about how far back along the sequence one needs to penetrate in ascertaining moral responsibility. My account is at least robustly historicist, but I do not have as strong a view as yours here (I think).

Thanks again.

By: Christopher Suhler and Patricia Churchland
http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1398
Mon, 31 May 2010 23:27:21 +0000

We would like to begin by thanking Gary Comstock and everyone else involved in “On the Human” for inviting us to contribute a target post and for all of their work in making this forum possible. We are also, of course, extremely grateful to everyone who has responded to our paper thus far for their stimulating and insightful comments.

Eddy Nahmias, Thomas Nadelhoffer, Gilbert Harman, Jesse Prinz, and others voice variants of the concern that our account does not address the fact that people are still behaving “badly” in certain situations. We do not deny that certain social psychological studies appealed to by situationists, and people’s behavior in them, can be disconcerting – at times, they certainly are (the famous experiments of Milgram and Zimbardo are vivid examples). But the point of our argument was not to say that people behave ideally regardless of the situation; one would not need elegant social psychology experiments to know that this is false. Rather, our aim was to counter the claim that everyday situational factors, often processed nonconsciously, are capable of undermining control and, with it, responsibility. If the requirements for control are understood neurobiologically, as we are suggesting, we see little reason that the mere fact that someone behaves “badly” in the face of everyday situational pressures (e.g., being more likely to help another individual after finding a dime or being exposed to the aroma of freshly baked cookies) should confer upon her diminished responsibility. On our account, then, people in situations such as those in Isen and Levin’s (1972) “dime-finding” and “cookie” experiments are in control and are responsible; the fact that the situational factors in question exerted their influence nonconsciously does not undermine control or responsibility. (We return below to the issue of just how pervasive large effects by seemingly trivial situational factors are, particularly in real-world rather than experimental conditions.)

We very much agree with Nahmias that an Aristotelian understanding of action emphasizing the development of (automatic) habits of behavior has much to recommend it in light of the ever-growing body of data on the pervasiveness of nonconscious cognition. Certainly, it is much more in accord with the data than a position which demands substantial conscious involvement at the time of action for control/responsibility to be present. (Bernard Baars, too, argues for an explicitly Aristotelian view of action in his reply.)

Aristotelian approaches to issues in action theory could come in different flavors depending on the inclinations and other philosophical commitments of the theorist. One possibility, of particular relevance to the remarks of John Fischer, is that this approach could be used to construct a reasons-responsiveness framework that is more resilient in the face of situationist challenges and the social psychological data underlying them. There are (at least) two virtues of this approach worth noting. First, as Nahmias points out, it could provide a way around the psychologically problematic requirement of conscious acknowledgement of and reflection on reasons at the time of action. And second, it could furnish the resources for the reasons-responsiveness theorist to address the “unreliable-motivation-detector” problem that Fischer notes as an important situationist threat to current reasons-responsiveness approaches. To flesh this out a bit, perhaps a reasons-responsiveness theorist, by shifting focus to appropriate (and often nonconscious) habits of motivation and responsiveness to reasons, would be able to drop the requirement that, at the moment of action, we must be consciously aware of (and correct about) what is motivating that action. Indeed, as Martin Roth suggests, one could even go further and say that the reasons to which nonconscious processes are sensitive need not be ones which we would consciously acknowledge as reasons were we to become (consciously) aware of them. (We should reiterate that, as Fischer notes, we are not ourselves proponents of reasons-responsiveness approaches, but this does not preclude theorists who do incline toward such views from pursuing the approaches just sketched.)

Neil Levy raises an important potential objection to our view, namely that the prevailing psychological and philosophical notions of control do not map onto one another. He is correct that our starting point is psychological and neurobiological research on control (in the sense of effective guidance of actions, goal maintenance in the face of perturbations, etc.). Also correct is his claim that the sort of control philosophers are interested in does indeed require conscious control (this is what we were aiming to capture in our paper with the “neo-Kantian” conception of control). However, the neo-Kantian picture, with its requirement of consciousness/reflection for control, is precisely what we’re arguing against. Related to this, we take issue with Levy’s claim that nonconscious processes are “subpersonal”. They are, of course, subconscious, but this can only be equated with being subpersonal if one takes the person/agent to be restricted to the conscious sphere of cognition. But as we describe in the next two paragraphs, we believe that this view is becoming less and less tenable as data on human cognition, action, and development accumulate.

A worry that has been raised by a number of commentators is nicely summed up by Nadelhoffer as follows: “In short, if we associate the moral agent with the conscious self, and if it turns out that the conscious self has far less control over human behavior than we traditionally assumed, then it is unclear why our notion of moral agency shouldn’t shrink accordingly.” We agree that this is a worry if one takes on board the traditional philosophical equation of the agent with the conscious agent. But as with the issue of responsibility-supporting control, there are two ways one can go when faced with data of the sort presented by certain social psychology experiments. The first option is to stick with the standard notion of control/agency (what Nadelhoffer describes as the “associat[ion] of the moral agent with the conscious self”) and adjust our view of the sphere of agency, control, responsibility, and so forth in light of the social psychological data. This is the option Nadelhoffer, Doris, and others champion; they suggest that the range of situations in which people exercise moral agency (are morally responsible, etc.) may be significantly smaller than previously thought.

The second option, which we prefer and believe to be more consistent with what a broader range of scientific data tells us about human cognition and action, is to adjust the philosophical notions (of agency, control, responsibility, etc.) so that they are more in line with the totality of the data. In the paper currently under discussion, we aimed to set out this argument as it applies to control and responsibility. However, in a paper we are currently working on, we aim to extend this line of reasoning to the notion of agency itself, arguing that the restriction of the agent to the conscious portion of a person’s cognitive activity is unwarranted. Rather, what we propose is that the agent should be viewed as a unified whole encompassing both conscious and nonconscious processes (and the interactions and gradations between them, since as Baumeister points out nearly all cognitive and physiological processes will involve some combination of conscious and nonconscious contributions rather than relying entirely on one or the other). This is, admittedly, a substantial departure from the traditional philosophical picture of the moral agent, but we see little reason that philosophical views on human agency and action should not be responsive to what science tells us about human cognition and action.

Jesse Prinz remarks that “the issue of consciousness strikes [him] as a red herring”. Not surprisingly, we disagree with this assessment. Although one could perhaps formulate a version of the challenge from social psychology to control and responsibility which makes no mention of consciousness (as Prinz attempts to do with his example of the Milgram experiments), for many philosophers concerned with agency, control, and responsibility, the issue of consciousness is in fact paramount. Levy’s reply to our post, for example, nicely articulates the philosophical view of consciousness as a requirement for actions that are capable of supporting attributions of moral agency, control, and responsibility. Although, as described in our reply to Levy, we are not inclined toward this picture of action, his remarks, as well as those of others such as Fischer and Nadelhoffer, provide a clear illustration that the consciousness requirement for moral agency is one which many philosophers interested in moral responsibility take very seriously.

Prinz argues that “the types of control furnished by evolution (delaying gratification, carrying out a multistep task, slowing acquiring a skill, etc.) are no safeguard against situational influence”. What we are taking issue with, however, is precisely the notion that in order to be in control (morally responsible, a moral agent, etc.) one must be safeguarded against situational influence. This is most neatly captured in the final sentence of our post, where we say that “[s]o long as control-relevant anatomical structures are intact and the neurochemicals on which their functionality depends are within their appropriate ranges, sensitivity to situational contingencies and nonconscious processes are appropriate aspects of control and goal-directed behavior, not obstacles to them”.

Prinz also describes individuals as “being manipulated like marionettes by external variables”. Yet the use of ‘manipulated’ once more seems to beg the question against the sort of position captured in the concluding sentence of our post. What Prinz and others sympathetic to the situationist position are calling ‘manipulation’ we regard as highly useful sensitivity to one’s environment.

Furthermore, saying, as Prinz does, that people are being “radically influenced by small situational factors” and “manipulated like marionettes by external variables” itself radically overstates the influence of minor situational factors in everyday action or, for that matter, in the vast bulk of laboratory-based social psychology experiments. A naïve reader, upon encountering Doris, Prinz, Nadelhoffer, and others’ presentations of the social psychological data, might come away with the impression that she need only walk around with a plateful of cookies or a pocketful of dimes in order to exert total control over the people around her. But as an anonymous referee on our paper, self-identified as a social psychologist, commented, the vast bulk of effects found in the social psychology literature are nowhere near as large as those in the handful of studies situationists tend to focus on. Moreover, in contrast to the social psychology laboratory, where conditions are carefully tailored to minimize the influence of all factors other than the one(s) under study, in real-world environments one is confronted by numerous and highly various situational stimuli, internally maintained goals, demands on attention, and so on, each of which may contribute to behavior. As a result, one must be very cautious when extrapolating from the magnitude of particular variables’ effects under carefully controlled laboratory conditions to their magnitude in noisy real-world conditions. (Subliminal priming, which is a staple of the psychologist’s laboratory toolkit, provides an instructive example. Despite fears among many in the general population that powerful subliminal messages are embedded in advertisements for products or political candidates, and hopes for the effectiveness of self-help recordings employing subliminal messages, there is little or no evidence that real-world subliminal messages of this sort actually have any effect on people’s behavior – see, e.g., Merikle [1988], Theus [1994], and Trappey [1996].)

After writing the previous paragraph, we saw Baumeister’s response, in which he puts these points about effect sizes and generalization outside the laboratory even more elegantly and concisely than we did. He writes: “Regarding Frail Control: Doris is correct that situational causes exert an influence over behavior. But large effects are rare. Most effects just shift the odds slightly. Large effects depend on everything else being carefully screened out and on conscious attention being systematically managed so as not to interfere (and often to cooperate). The situational effects of social psychology allow plenty of room for conscious control.” Seeing the results of social psychology in this way provides a bulwark against the tendency among certain researchers (both philosophers and psychologists) and popular science writers to jump from exciting findings in cognitive science to sweeping statements about their implications for free will, control, and other crucial concepts. Furthermore, as the last sentence of the quote from Baumeister, as well as other portions of his response, suggests, this more nuanced understanding of the relationship between social psychological results and real-world action opens the door for views which see a place for both conscious and nonconscious processes in agency. This, as noted above, is precisely the sort of position we are developing in a paper currently in preparation.

Nadelhoffer, Prinz, and others bring up Milgram’s famous obedience experiments and historical atrocities such as the Holocaust as examples of situations in which responses to situational factors may mitigate responsibility. Yet appeals to such cases miss the point of our argument entirely. As we note at various points in our article, our aim is not to say that people are fully responsible in every situation whatsoever; rather, our target is the claim that seemingly trivial situational factors undermine control and responsibility (again recall Prinz’s remarks about people being “manipulated like marionettes”). Whether the soldiers in Nazi Germany who carried out the Holocaust – being under enormous social pressures, aware that their livelihood and even lives could be threatened by dissent, and so forth – can be considered fully responsible for their actions is of course a very important and interesting question. However, it is also one which is orthogonal to our argument, since it deals with far-outside-the-norm situational pressures and stressors. The goal of our argument was, instead, to counter the notion that “even in unexceptional conditions, humans have little control over their behavior” and are therefore not (or less) responsible for that behavior and its consequences. This denial that situationist arguments warrant a dramatic expansion of the range of responsibility-mitigating circumstances is perfectly compatible with some responsibility-mitigating circumstances existing, with the extreme cases cited by Prinz and others being cases in point.

Nahmias, Hansen, Fischer, and others point out that certain prominent psychologists and neuroscientists are at least as guilty as philosophers of making extravagant claims about the implications of scientific findings for free will, control, and other issues in action theory. With this we fully agree. We chose Doris’s situationism as our target because we believe it to be the most forceful and sophisticated attempt to use findings from empirical psychology to challenge common notions of agency, control, and responsibility. For reasons of space, we were only briefly able to acknowledge other proponents of “Frail Control” hypotheses, including psychologists such as Dan Wegner and (at least in some of his writings) John Bargh. But our brief mention of these other researchers does not mean that their views (and those of others, such as Benjamin Libet) do not deserve deeper scrutiny for their oversimplification of the relationship between scientific findings and freedom of the will (control, responsibility, etc.). As Fischer notes, in the case of psychologists and neuroscientists making pronouncements on whether or not people have free will, the problem is less a failure to take into account a sufficiently broad range of scientific evidence than the use of somewhat loose or ill-defined notions of free will and other philosophical concepts.

Daryl Cameron, building on remarks by Nahmias and Nadelhoffer, suggests that the situationist challenge may need to be reframed; in particular, he proposes that “the real challenge from social psychology is not to self-control, but instead to self-knowledge”. This version of the social psychological challenge we fully endorse. Social psychologists have provided a wealth of evidence that conscious self-knowledge (in the form of introspective access to motives, influences on action, etc.) is nothing like as transparent or incorrigible as our lay theories would have us believe. (These conclusions are underwritten not just by social psychological findings but by work in neuroscience and other fields. Michael Gazzaniga’s findings concerning confabulation by callosum-sectioned [“split-brain”] patients provide one especially striking example.)

While agreeing with Cameron and others that psychological findings are a grave threat to folk theories about the accuracy and extent of (conscious) self-knowledge, we wish to resist the move to claims that they are also a substantial threat to agency, responsibility, and so on. The principal reason for this was noted earlier: we believe that scientific findings point toward a conception of the agent which encompasses both conscious and nonconscious cognition. While seemingly radical at first glance and no doubt at odds with much of contemporary philosophy, this broader view of agency and action has, as Baars points out, both ancient roots and certain modern manifestations. He illustrates this point with the example of a groggy philosopher making a fatal error while pulling out of her driveway in the morning. Baars suggests that perhaps the criminal law, which does not accept the mere involvement of nonconscious influences as an excuse for traffic accidents, assault, fraud, or drunk driving, gets things basically right in this regard.

References:

Isen, A. M., & Levin, P. F. (1972). Effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21, 384-388.

Merikle, P. M. (1988). Subliminal auditory messages: An evaluation. Psychology and Marketing, 5, 355-372.

Theus, K. T. (1994). Subliminal advertising and the psychology of processing unconscious stimuli: A review of research. Psychology and Marketing, 11, 271-290.

Trappey, C. (1996). A meta-analysis of consumer choice and subliminal advertising. Psychology and Marketing, 13, 517-530.

By: Galen Strawson http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1397 Mon, 31 May 2010 21:31:54 +0000 http://nationalhumanitiescenter.org/on-the-human/?p=1078#comment-1397 John, If Free Will is just guidance control, does that mean it doesn’t matter how we come to be the way we are, or what we come to be — just so long as we have guidance control?

More generally: the idea of being able to bring things to consciousness seems extremely important. But the old problem of FW and MR remains. Suppose we somehow convert everyone so that they can, and indeed do, always bring everything to consciousness (situationist influences, Freudian influences, the lot). Some do good, given how they are, and some do evil, for the same reason. We can certainly reward the former and punish the latter. But we can’t claim that punishment and reward have anything more than a purely instrumental/pragmatic justification.

By: Bernard J Baars, PhD http://nationalhumanitiescenter.org/on-the-human/2010/05/control-conscious-and-otherwise/comment-page-1/#comment-1394 Mon, 31 May 2010 00:50:11 +0000 http://nationalhumanitiescenter.org/on-the-human/?p=1078#comment-1394 Yes! I blew it!

My fingers took off all by themselves. I’m not responsible for my basal ganglia, but I AM responsible for scaring the daylights out of my basal ganglia when they go off the wrong way. That’s me, my cortex and I.

One of the interesting biological oddities is that in humans, at least, cortex (and thalamus) are really tightly coupled with conscious and voluntary processes. It’s sometimes argued to be less so for cats and such, but all the evidence I know points straight to cortex. (It’s the thalamocortical system, really.)

Because consciousness is ancient evolutionarily (at least 200 million years for mammals alone), it is very likely that different layers of the brain supported conscious functions at different stages of phylogeny. One possibility is that the human cortex, as it ballooned to occupy perhaps 80 percent of our crania, also acquired the ability to inhibit prior ‘seats’ of consciousness. You need to do something like that over evolution to avoid crossed and self-defeating signals and output commands. I believe that visual cortex inhibits the superior colliculus, for example, and the basal ganglia have lots of inhibitory connections.

So one possibility is that we are talking about a multi-storied building, but that evolution can’t just add another layer of control without a lot of integrative fixes. Try adding another computer at home and get it to play well with the original, and you see the problem.

Jaak Panksepp and Bjorn Merker have made arguments for brainstem regions being involved with consciousness in ancestral species, including perhaps the reticular formation of the brainstem and midbrain, and the zona incerta. Panksepp has demonstrated beyond doubt that the PAG (the periaqueductal gray: the gray matter surrounding the fluid-carrying aqueduct in the center of the brainstem) is involved in mother-infant attachment, distress cries, and cuddling/soothing/suckling behavior. That does not imply consciousness as such, but it suggests that closely related functions may be very ancient.

As for the sleep/waking/dreaming cycle, that’s all over the place with mammals, birds, and maybe other critters.
