Over at the best health care policy blog around, The Incidental Economist, the fabulous Aaron Carroll has a post arguing in favor of an approach to conflicts of interest that focuses on managing such conflicts (rather than seeking to eliminate them).

I have enormous respect for Dr. Carroll, but I’d like to examine his points and offer some alternative perspectives.

Referencing a post from Marion Nestle that notes disclosures of extensive COIs in a nutrition study, Carroll states that:

Here’s the thing. This is exactly what we want from researchers: open, transparent, declarations of potential conflicts of interest. We shouldn’t demonize those who do what we ask. And everyone who gets a grant or a donation isn’t a bad scientist.

Respectfully, there is some question-begging here.  Namely, while disclosure of COIs is certainly an ethical minimum, it assumes the conclusion to suggest that all that we ought to receive is “open, transparent declarations of potential conflicts of interest.”  Full disclosure is a floor, not a ceiling, and there is ample room to question its efficacy.  Why?

To understand this, we have to go to the literature on cognitive bias.

I generally teach this via Andrew Stark’s outstanding taxonomy of conflicts of interest (as highlighted in Sheldon Krimsky’s work).  When we are talking about COIs, we are concerned first about relationships, in this case between scientists and commercial industry.  The first stage is the antecedent acts that constitute and cement the relationships.  These entanglements lead to positive dispositions or states-of-mind towards the party (because of the human tendency to want to reciprocate towards other parties with whom we are in relation).

These positive states-of-mind, over the long run of cases, create a much greater likelihood of behavior of partiality — generally the bad acts that we want to avoid.  (Of course, partial behavior is not always bad, but the bad acts that flow from COIs are always behaviors of partiality in Stark’s taxonomy.)

From this taxonomy, we get two important points.  First, managing COIs in practice virtually always addresses the third stage — behavior of partiality.  Current approaches to such management do virtually nothing to preclude the antecedent acts (i.e., those that form the relationship, such as the seeking and execution of a grant/contract, a gift exchange, a free lunch, etc.) that lead to the states-of-mind that give rise to behavior of partiality (first, second, and third stages respectively).

And the mandate of disclosure is a paradigm case of this problem.  Requiring disclosure of COIs does absolutely nothing to preclude the constitution of the relationships, nor the states-of-mind that are more likely to lead to behavior of partiality.  In fact, as scholars of COIs have noted time and again, the evidence is quite clear that disclosure can actually have an offsetting effect, intensifying the states-of-mind by conferring a form of moral license: “I have disclosed, so I am behaving exactly as I should, and can deepen the relationship I have with the commercial entity just so long as I continue to disclose it.”

If in fact it is the existence and depth of the relationships that is the core of the moral problem, remedies directed solely at the third stage in the taxonomy are almost by definition insufficient.

And here’s the rub, as I have previously noted: it is not evidence-based to claim that these kinds of entanglements do not have an influence on our behavior.  They do.  We know that they do.  Across a population of actors subjected to these entanglements, a significant percentage of them will modify their behavior in ways favorable to the commercial entity.

So, to return to Dr. Carroll’s initial point, I think there is ample room to disagree with a claim that all we should want from researchers is open, transparent, declarations of COIs.  To be sure, we do want this, and such is an ethical minimum.  But it is far from clear that we should be morally satisfied with such.

(As an aside, I also reject any distinction between potential and actual COIs.  There is no such thing as a potential COI.  All COIs are actual.  They may or may not impact behavior in any given case, but that is exactly the point: they make more likely certain kinds of behavior over repeated iterations.  For more on the different meanings people attach to COIs — i.e., as a warning and as an evaluation of bad behavior — see this excellent chapter by Edmund Erde, and Howard Brody’s book Hooked.)*

Dr. Carroll continues:

I know people with industry funding whose integrity is nearly unimpeachable. I also know lots of people with funding from government sources who are ridiculously conflicted in an academic sense.

This is self-evidently true, but I respectfully do not think it addresses any of the points made above.  Of course, financial entanglements are not the sole source of COIs.  Yet it just as obviously does not follow that we should somehow be unconcerned about financial COIs.  We know, as Dr. Carroll himself has shown and discussed repeatedly, that money is an overwhelmingly powerful determinant of human behavior.  How does the existence and power of non-financial COIs undermine this concern?

(Howard Brody has written on so-called “intellectual COIs” on his excellent Hooked blog here and here).*

Dr. Carroll goes on:

We need to judge science on the methodology. In this case, this was a systematic review. Go read the paper. Judge for yourself whether they did a good job or not. But in the same way that I refuse to “believe” just anyone without due diligence, I don’t dismiss them without doing it either.

But as I have previously noted:

Science, and especially epidemiologic data,  never speaks for itself; it is always produced by actual people, is highly uncertain and ambiguous, and inexorably requires interpretation on which reasonable and informed people often disagree.  Moreover, people’s scientific opinions and clinical practices are indeed affected by relationships with relevant actors, including financial relationships.  There is overwhelming evidence of this (see the work of George Loewenstein for a long-running and robust example of this evidence).

These facts obviously do not excuse us from doing our “due diligence,” as Dr. Carroll suggests, but neither does doing our due diligence eliminate the real ethical concern we ought to have regarding the powerful effects commercial industries have in shaping (1) the evidence base itself and (2) the ways in which that evidence is interpreted, managed, and implemented, especially in health care settings.

Finally, although I tend to self-identify as a ruthless pragmatist, we have to be extremely careful to avoid the naturalistic fallacy, especially in the paradigm of COIs.  That is, it is undeniably true that commercial interests have extraordinarily old and deep tentacles in science, biomedicine, and health care.  While it is of course difficult to imagine exorcising these influences completely, we are not justified in concluding therefrom that the status quo, or only a minor modification therein, is ethically permissible.  It might well be the case that moral considerations mandate major modifications in our traditional and customary ways of doing business in these arenas.  Another, perhaps more concrete way of framing the question is to ask whether we should permit shallower or deeper entanglements between commercial industry and scientific researchers, even if, of course, we can never eliminate them.  The latter fact does not license the conclusion that deep entanglements are ethically permissible.  They might be, but we cannot simply assume as much.

Thoughts?

[edited for grammar, clarity, and links]

* In the interests of full disclosure (!!), I note that Dr. Brody is the Director of the Institute for the Medical Humanities, where I did my Ph.D., and that he served on my dissertation committee.