Neuroethicist extraordinaire Nicole Vincent has a fascinating post in the online symposium at Neuroethics & Law Blog regarding Michael Pardo and Dennis Patterson’s book Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience.

Now, on this blog I likely won’t be saying all that much about neuroethics, especially as to its philosophical foundations.  (My current areas of scholarly focus shade more toward public and population health, and a variety of topics therein.)  However, insofar as there are ways of doing and thinking about neuroethics that expressly benefit from or can accommodate perspectives in public health ethics/law/policy and/or population-level bioethics, I may wade into the subject from time to time.  The specific subjects on which I am likely to comment in this vein center on pain, sport-induced mild traumatic brain injury, and my historical work on medical and scientific imaging in the late 19th-early 20th c. U.S.

Even though I do not say much — nor do I imagine for a moment that I have many useful things to say — about the philosophical foundations of neurolaw, Pardo & Patterson’s arguments are important to some of my work on pain, and I cite them as such.

Below the jump, I offer a response to some of Dr. Vincent’s excellent analysis.  I make no claims that P&P would endorse or even agree with anything written below.  Rather, it is just offered as my own response to Dr. Vincent’s sharp questions, based entirely on my own reading and interpretation of Pardo & Patterson’s arguments.

Dr. Vincent writes:

In a few paragraphs I shall return to what they say about identifying brain states with intentions or knowledge, but first I would ask you to consider the phenomenology that I experience when I act with different kinds of intent. First, there is nothing that it is like for me to do something negligently. This is simply because negligence involves a failure to consider stuff that you should have considered, and not considering something – an absence – does not feel like anything in particular. On the other hand, when I do things recklessly I feel an attitude of belligerence. It feels like a certain kind of wilful, flaunting disregard. Another kind of disregard – a much colder “don’t care” sort of attitude – accompanies acting with knowledge. And a certain kind of determined, directed, or single-minded feeling is what tags along when I do things on purpose. Distinct phenomenology – a distinct feeling – accompanies doing things on purpose, with knowledge, recklessly, and negligently. Now, my own phenomenology is probably idiosyncratic, so nothing of consequence for Pardo and Patterson’s argument should be extracted from the content of my idiosyncratic phenomenology of intending. But what does follow, in my view, is this:

(P1) if different phenomenology attends to different ways of intending, and
(P2) different brain processes correlate with different phenomenology, and
(P3) brain imaging techniques could discern these different brain processes from one another, then
(MC) brain imaging techniques could help courts discern the kind of intent with which someone acted.

I don’t really disagree with this, and I’m not sure that P&P would either.  But there are some questions I have regarding the moves and the premises above.  It is, I would think, uncontroversial to assert P2.  In fact, we need to know nothing at all about neuroimaging to have good reasons to accept P2.

But what does it mean to say in P1 that different phenomenology “attends” to different ways of intending?  I am not sure what work is being done by the verb “attends” in the premise.  Of course, it stands to reason that different phenomenology has something important to do with different ways of intending.  Indeed, it seems almost a truism to suggest that different lived experiences shape different ways of intending in ways that may be relevant to the formation of mental states sufficient to satisfy mens rea.  Moreover, if phenomenology is connected to intent, and if brain processes are correlated with phenomenology, then it follows that brain processes are somehow “connected” to intent (leaving aside for the moment the significant question of what it means to be ‘connected’ in this sense).

But I’m not sure I see much in P&P’s account that is incompatible with these claims.  The problem with the mereological fallacy, as I see it, is that it falsely equates attributes of neuromaterial structure with attributes of moral agents (embodied people, say).  This has particular implications for criminal law, of course, where, as Dr. Vincent and colleagues have so ably explored, profound questions of responsibility are shaped by attributions of (the level of) agency.  If this is in any way a correct interpretation of an implication of P&P’s argument, I am not sure I see why there is a need to reject MC above.  If the correlations prove reliable enough – and this is a very thorny epistemic problem, given (1) the difficulty of agreeing on acceptable standards of reliability in criminal law and (2) any difficulties internal to neuroscientific practice in obtaining generally reliable correlations – then I’m not sure P&P would have much objection to introducing evidence of brain states as an indicator of a phenomenology that relates to mens rea qua intent.  The problem that P&P identify is, as Dr. Vincent goes on to discuss, the identity of brain states with intention, which on my reading requires the mereological fallacy or something close to it.  Brains don’t form intent; moral agents do.  Brains are certainly required for intent formation, and perhaps if we have the right tools we can locate relevant processes in the brain that are involved in the phenomenologies that help determine intent.  But it does not follow that we can literally look inside the material structure of the brain to find intent, because the mental phenomenon of intent is not contained in the brain.

No?

Dr. Vincent continues:

Consider next what Pardo and Patterson say about assessing knowledge (as opposed to intent) through brain scans: “Imagine we had an fMRI scan of a defendant’s brain while [he was] committing an allegedly criminal act. … Exactly where in his brain would we find this knowledge?” Presumably, what they mean here is something like knowledge that he committed that act, or that he committed it on purpose. They continue: “Because knowledge is an ability, and not a state of the brain, the answer is: nowhere” (139). Their point, I take it, is that knowledge is not like data encoded on a computer hard drive nor like text printed on the page of a book — something that exists in some location, that we could peer at, inspect, and measure. Rather, knowledge is on their account an ability. Suppose that’s right. (Though to be perfectly frank I’m not totally sure I understand precisely what they mean by this, yet I do not think this matters for what I am about to say.) What’s meant to follow from that? On Pardo and Patterson’s account what follows is that such fMRI scans could not possibly reveal whether defendants do or do not know the facts in question, because those facts simply are not inscribed in any location in their brains, and so they cannot be read by inspecting any location in their brains.

My puzzlement here is simple, I think: regardless of whether we conceive of knowledge as akin to text inscribed in some location on a sheet of paper (this, I guess, is meant to be analogous to what they call “states” of the brain), or whether the right way to think about knowledge is as an ability, even in the latter case we will still presumably want to say that the ability in question is implemented somewhere in the brain.

Similar to my question regarding “attends” above, I am not sure what Dr. Vincent means here by “implemented.”  Again, of course a relevant quantum of functioning brain is necessary for a moral agent to have knowledge.  So in this sense, the ability that constitutes knowledge is the emergent result of some necessary and sufficient amount of neural activity.  But it does not follow from this that we can literally look into the material structure of the brain and point to that knowledge.  That knowledge does not exist within the structure of the brain, and hence it is a fallacy to think that by examining the brain we can locate it.  Of course, we can examine the structure of the brain to find processes or pathways that may be relevant to the particular question of knowledge in which we are interested.  But the move that P&P seem to me to be concerned with is the reductionist move which alleges that we can infer the extent of a moral agent’s knowledge on a given question by examining features of their brain structure.  As Snape correctly told Harry Potter, “the mind is not a book to be read.”

If not there, then where? For example, if I have the ability to speak Polish (which I do), then presumably that ability is to a significant degree somehow encoded or wired into my brain.

But what does “encoded or wired into my brain” mean in this context? These are metaphorical terms that, as Evelyn Fox Keller and many others have argued, are freighted.  The metaphors are so powerful that they can do a lot of the argumentative work needed to explain exactly what is meant by the claim that examining brain correlates of knowledge (assuming such phenomena exist) provides significant insight into what a moral agent knows – or, even more difficult, what a moral agent knew at t1.

Dr. Vincent’s next point concerns the reductionism of mind-brain identity that seems to concern P&P:

This brings me to their claim that, to even notice the incoherence of statements like “I intend to X, and X is impossible” (138), we cannot suppose that having an intention just is having a particular brain state, since there is nothing incoherent about saying “I have a particular brain state, and X is impossible”. My worry here is that Pardo and Patterson’s argument misunderstands the sense of “is” that lies at the core of the Mind-Brain Identity Theory. To make my point, I quote U.T. Place on the topic of the distinction between two different senses of “is”, the “is” of definition and the “is” of composition . . . .

I suspect that the argument that Pardo and Patterson offer to support their claim that intentions cannot just be brain states makes precisely the same mistake as the one that U.T. Place highlighted back in 1956. Rather than sprinkling my own confusions all over U.T. Place’s impeccable argument, I’ll leave things here and assume that this suffices to put Pardo and Patterson’s minds at ease in the knowledge that they need not be so pessimistic about the prospect of using brain scans to help courts assess people’s intent and knowledge.

I admit to being heretofore unfamiliar with Place’s argument, but I agree that it might well set P&P‘s minds at ease.  Place’s emphasis on the distinction between the “is” of definition and the “is” of composition seems to me to go exactly to the core of P&P’s claims regarding the reductionism in the mereological fallacy.  The claim that brain states are part of the composition of phenomena that produce intent seems to me to be relatively uncontroversial and compatible with P&P’s argument.  Brain states are connected to the formation of intent, although many other phenomena also play a role (such as perhaps the body-environment components of Glannon’s brain-body-environment triumvirate).  What brain states alone are not is the definition of intent.  Intent is a phenomenon that emerges at the level of persons (or at least full moral agents); as such, it is not accurately deemed a property of brains-in-a-vat.  If so, while we might gain some kind of useful information from examining neural processes and structures, we will never be able to locate intent by doing so because intent cannot be reduced to brain states.

I’m not sure if this adequately captures P&P’s pessimism, but this is my reading (and perhaps mine alone?) . . .

I argued above that intention and knowledge need not be identified with brain states in order for evidence about the brain to usefully inform mens rea investigations. Correlation will suffice. For evidential utility, all that’s needed are (sufficiently) stable correlations, and we could certainly get those if we discovered that certain brain states are (almost) invariably present whenever someone intends or knows. I also argued that it is not as implausible as Pardo and Patterson maintain to identify brain states with mental states. U.T. Place did it, and it seems to me that Pardo and Patterson’s objections to doing it only apply if we use the word “is” in the definitional rather than the compositional sense.

But isn’t this question-begging?  That is, it is not clear to me that Place really means to identify brain states with mental states, because by “identity” he does not mean that the two are connected by an analytic definition.  If I am correct in reading his claim that the “is” of composition fairly describes an “is” connection between brain states and mental states – but that an “is” of definition may not – then the kind of identity Place has in mind seems to me to be one that is compatible with P&P’s argument but, importantly, one that explicitly repudiates the kind of reductionism with which P&P are concerned.  Mental states like intent are simply not reducible to brain states, and Place’s argument seems to support the point.

But putting aside the philosophical gloss and endless distinctions that we are so fond of making, I simply wonder where else, if not in the brain, are we to find knowledge and intention? I do not mean to be crass, but only to point out that regardless of whether we conceive of intending and knowing as states or as processes, they will still in the end need to be implemented somewhere, and the brain seems like a plausible candidate.

About six years ago, I was sitting in a meeting of a world-class Department of Neuroscience.  As discussion proceeded, led by the world-famous chair of the department, I advanced my claim that we would never find the phenomenon of pain by examining neural structure because the experience of pain could never be reduced to that level.  The chair looked at me skeptically, and asked, “Well, if it is not in the brain, then where is it?”

I replied, “The lived experience of pain may not actually be an object in the material world, even if the material world is necessary to its production.”  The chair’s eyebrows shot up, and he waved his arms in dismissal of my admittedly poorly-worded claim – he could not accept that the reality of mental phenomena might not be susceptible to the objectifying tools that characterize contemporary neuroscience.

But in truth, I think quite a few neuroethicists (Gillett, Glannon, etc.) supply answers to Dr. Vincent’s query above.  Knowledge and intention obviously exist in the external world.  But it does not follow that these phenomena are objects that can be apprehended by tools and modalities whose impressive epistemic program is literally founded on objectification (and I speak here with some authority as an historian of objectivity and its relationship to medicine and science in the modern era, especially in the context of scientific imaging).  The brain is required for knowledge and intention, but knowledge and intention are simply not reducible to the brain.  As I have argued, the contrary claim rests on a category error that conflates causal dependence with metaphysical identity.  We can find pathways and processes in the brain that may give us some evidence of knowledge and intention.  But knowledge and intention do not exist within the brain; they emerge from it, and hence may not fairly be said to exist as objects in the natural world susceptible to the objectifying techniques of many forms of contemporary scientific praxis.

Anyway, I found Dr. Vincent’s post interesting enough to help drag me back into the world of blogging.

Comments and thoughts are most welcome!
