On the heels of my recent post regarding Gaetan Dugas and the origin (myths) of HIV/AIDS in the U.S. comes this 10/29 editorial from NYT science writer Donald G. McNeil, Jr.
The editorial is strange, IMO.
One of the books I love teaching above all else is Judith Walzer Leavitt’s fabulous Typhoid Mary: Captive to the Public’s Health. This is a spectacular book, and my students regularly confirm this perspective. It is one of those rare texts that manages to satisfy the scholarly standards expected by professional historians of medicine and public health at the same time that it is readable and accessible for non-expert audiences (undergraduates, public health students, etc.).
My pedagogical affect is generally quite genial and affable. I smile often in class, which is in part a function of the fact that I love teaching so much, and in part is because I try to practice compassion and accessibility with my students. I begin most of my courses by sharing a significant amount about myself and my background, in part to build rapport and to try and diminish the educational distance that often exists in the hierarchies between student and teacher.
But because health stigma is a frequent topic of discussion and analysis in all of my teaching, we often find ourselves broaching the subject of Mary Mallon.
(x-posted from Yoni Freedhoff’s excellent Weighty Matters blog)
The Academy of Nutrition and Dietetics (“AND”) recently held its annual meeting. According to dietitian Andy Bellatti (and others), during her opening talk, the President of the AND, Lucille Beseler, opined on the rising concerns over financial conflicts of interest among nutritionists and dietitians: “I’m not so weak-minded that I would make a decision on receiving a pen.”
In the course of researching, writing, and teaching about conflicts of interest among health professionals for over a decade, what I have come to marvel at the most is the apparent ease with which leading health professionals insist on a kind of willful ignorance regarding the cognitive science underlying concerns over COIs. This is amazing to me because of the irony: members of professions presumably dedicated to basing their practice on the best evidence proceed to engage concerns over COI with almost no awareness whatsoever of what the best evidence actually suggests regarding the impact of conflicts of interest on human behavior.
This is why, in December of 2015, I published a small chart with explanation in BMJ entitled “COI Bingo.” I grew so exasperated with the same tired justifications for financial COIs that I categorized the standard responses into a Bingo chart:
Reasonable people of good conscience can, of course, disagree on whether the behavior of partiality that occurs in the presence of COIs is morally justified, and on the appropriate remedies, if any, for such behavior. But the argument should proceed with all stakeholders fully aware of what the cognitive science actually suggests regarding the impact of COIs on health professional behavior.
What does that evidence show? Gifts almost certainly do influence health professionals’ behavior, at least in the aggregate. Not only have we documented this finding itself ad nauseam, we also have powerful causal explanations that elucidate the mechanisms by which even gifts of de minimis value influence health professional behavior. Virtually all human societies exchange gifts — they promote social cohesion and are therefore a critical adaptive mechanism. One of the ways gifts accomplish such cohesion is that they tend to automatically, unconsciously create a desire to reciprocate on the part of the recipient.
Commercial industries are well aware of this phenomenon, which is why they provide such gifts. The gift exchange also cements the relationship with industry, which is precisely what commercial industry is most interested in. The tighter the relationship that exists between commercial industries and health professionals, the more likely it is in the broad run of cases that behavior of partiality will occur. COIs have to be understood iteratively — the existence of a financial COI does not imply that bad behavior will necessarily take place in any given case. But over the long run of cases, the existence of financial COIs makes shenanigans much more likely — a conclusion which is — again — extremely well-documented in both experimental and uncontrolled (i.e., real-life) conditions.
That these kinds of gifts “work” in the health professions to serve the interests of the “donor” is therefore beyond dispute. Moreover, some of the more darkly amusing findings in the COI literature document our own immunity bias: while we imagine ourselves much less likely to be influenced by pens and mugs, we have serious concerns that the professional sitting next to us may have their judgment clouded if they accept gifts from commercial industry:
(Steinman, Shlipak & McPhee 2001)
Maybe any given health provider will indeed remain entirely uninfluenced by deep entanglements with commercial industry. But the evidence establishes beyond all doubt that the odds are forever not in your favor.
Ultimately, far too many stakeholders seem willing to wade into the fray with a perfect, almost studied indifference to the significant evidence base regarding COIs. This is itself an ethical problem — mistakes themselves are not ipso facto morally blameworthy — but mistakes made because health professionals did not bother to examine an available evidence base and ground their practice in that evidence come much closer to moral failure.
Since I’m apparently blogging again, I might as well make it a two-fer, as they say in the Southern U.S.
In a recent blog post for JPHMP Direct, I argue the following:
the genesis of organized public health in the modern West is firmly and unquestionably rooted in social reform. Historians of public health have documented repeatedly that early public health actors were reformers and advocates to the bone.
For example, Jacob Riis’ stunning photos of NYC tenements in the 1890s were absolutely crucial to sparking municipal public health action. Riis undoubtedly saw himself as an advocate and an activist, and there is no doubt public health was the better for it. In 2010, a group of historians and scholars at Columbia University’s Center for the History & Ethics of Public Health explicitly argued that public health in the US has essentially departed from its roots in social reform and public advocacy, and that this departure has had and will continue to have grave effects on population health and health inequalities.
This is in fairness an oversimplification, one wrought from considerations of venue, word limits, and audience. (All of what follows is I think pretty well-settled ground for historians of public health in the modern West, but by all means, dear readers, tell me if I am mistaken).
Bill Gardner over at The Incidental Economist has a thoughtful post on a NY Times op-ed that appeared last Friday. The op-ed pointed out that for all of our considerable investment in basic neuroscience, we have not gained many significant clinical psychiatric interventions over the past 20 years.
Gardner points out that there are a number of evidence-based psychotherapeutic interventions, and concludes that “the National Institute of Mental Health (NIMH) should be funding more research on psychotherapy!” But, he cautions, it does not follow that we should reduce funding for neuroscience research:
The brain is the most important yet least understood organ in the body. Eventually, we will understand the brain. When that day comes, that understanding will be transformative.
Now, given that one of my central areas of scholarship within public health ethics is on priority-setting, I found myself wondering if we can really avoid difficult allocative decisions simply by increasing absolute levels of funding. So, I asked Bill on Twitter, what happens if we cannot do the latter? What if we have to make a decision regarding whether to reduce funding levels for basic neuroscience in order to increase funding levels for psychotherapy research? Which is our priority, and why?
Believe it or not, I was actually not trying to be cheeky here! These kinds of questions are literally core to my research. Of course, as I argue in lots of places, we want to avoid the false choice fallacy. That is, we do not need to do either A or B or Q. We can pursue all of them. But, as I responded to Bill:
We can do lots of different things, but given scarce resources, we cannot do them all at the same level of investment. Even if we were to increase the size of the pot for research, we would still be faced with the same kinds of allocative questions: should funding for basic neuroscience be maintained at the same proportion relative to that allotted for psychotherapy research? Should we change the ratio of funding? Why or why not?
Bill argued that given the constriction of research funding for NIH in general, let alone NIMH, trying to make these kinds of priority-setting assessments would be distorted.
Here, I want to respectfully disagree with Bill. There is nothing “false” about making difficult decisions of priority-setting in times of scarce resources. Although fighting for the world we want to see is of great moral significance, as I often remark to my students, sometimes we cannot escape difficult moral problems in the here and now. We are, as they say, in the s*it.
The world we inhabit is one in which we have scarce resources for mental health research. We are faced with a dilemma as to how best to allocate those resources. Although we can choose to sponsor research in a variety of areas, we nevertheless cannot avoid the question of in which areas we should invest relatively more or less of our resources.
Finally, as I remarked to Bill, we should also pay heed to the fact that the world we live in is one in which, as Nikolas Rose & Joelle Abi-Rached put it, the neuromolecular gaze dominates (see also Stephen Casper’s wonderful scholarship on the neuro-turn). As such, even if we were to increase absolute levels of funding, we would have as a sociological matter every reason to suspect that the lion’s share of increased funding would go to basic neuroscience research. If this is correct, and we have little reason to suspect otherwise given current neuromania, increasing absolute levels of funding would inspire little confidence that we would substantially improve the dearth of resources currently allocated to psychotherapy research.*
*James Coyne has argued in many fora that the quality of the evidence base supporting much psychotherapy leaves much to be desired (to put it mildly). I offer no opinion on that here — indeed, I’m no methodologist so am not really qualified to weigh in on many of the technical issues in this vein — but it is worth noting.
(In which he emerges from his dogmatic slumber . . . )
Regular blogging is obviously not my thing, but every now and then something arises that I can’t easily discuss in 140 characters on my beloved Twitter, so here we go.
A new article in PLoS Medicine flashed in my Inbox this afternoon, entitled “A Global Biomedical R&D Fund and Mechanism for Innovations of Public Health Importance.” This really grinds my gears. Perhaps surprisingly, this irritates me not because I am opposed to establishing a “biomedical R&D fund and mechanism for innovations of public health importance.” Not at all — if we can find better ways of generating pharmaceutical products for which robust evidence shows important impact on population health, especially in emergent public health scenarios, that sounds fine to me.
What is not fine with me is the way in which public and population health continually seems to be captured by biomedical culture and biomedical interventions, the most prominent of which, of course, is pharmaceuticals. I am referring here to the medicalization of global health policy. Such is bad for any number of reasons. First, overwhelming evidence shows that substantial improvements in overall population health and in the compression of existing health inequities (global or otherwise) are extremely unlikely to flow from acute care services, including but not limited to drugs. There is virtually no question that collective action on upstream, macrosocial determinants of health is vastly more likely to improve overall population health and to compress inequities, which matters because these two criteria are core to any adequate theory of justice in population health.
The concern, as I have written about independently here,* and as Joe Gabriel and I discuss here, is that the frame of the debate about how best to improve global health is cast on the geopolitical scale in terms of how best to ensure access to pharmaceuticals. To the extent that the frame obscures an unquestionably more important discussion — intervention on fundamental causes of disease — it is ethically suboptimal.
Second, as Vicente Navarro has pointed out, the perpetual tendency to focus myopically on biomedical interventions for social problems — make no mistake, health is a social problem — has actually had the historical tendency to weaken larger public health and social welfare systems, especially in the global South. The emphasis on magic bullets tends to absorb resources that could be better allocated to interventions that act on structural determinants of health outcomes, on the amelioration of structural violence, etc. So, as Navarro points out, even though the eradication of smallpox is unquestionably a good thing, it nevertheless resulted in a substantial weakening of larger health and social welfare systems in some of the most resource-poor settings on the planet. This is Bad.
Biomedical interventions supported by robust evidence of population health impact obviously have a role to play in improving global health, and in responding to acute public health emergencies. But we are not going to resolve our health problems, nor compress the staggering global inequities in health, via pharmaceuticals. Social problems rooted in adverse and deeply rooted social structures must be resolved at that level, or not at all.
*This paper is essentially an article-length exposition of many of the themes of this blog post.
John Yamamoto-Wilson, an early modern historian who has published a recent book on pain and suffering in 17th c. England, has a fascinating blog post examining some of Olivia Weisser’s forthcoming work on pain, suffering, and gender in early modern England (go Wes!).
(Historians of pain are eagerly awaiting Dr. Weisser’s forthcoming book!)
I am interested in Dr. Yamamoto-Wilson’s conclusion, but I do want to first note the posture in which I approach this subject.
First, thanks to all readers for engaging the first post in this series. I think it might already be the most widely-read post in the very short history of this blog.
Andrew Ruis, an historian of medicine and public health at the University of Wisconsin-Madison, left a comment on my post that was so excellent I immediately requested permission to bump it up to the post level. Andrew graciously consented, so here it is, reprinted verbatim, with paragraph breaks added only to ease readability:
“The issue that always floors me with characterizations like Horton’s is that utility is a useful metric for, say, skills, but a poor one for knowledge generation, where utility (beyond statements like “all knowledge is useful”) can only be assessed retrospectively (historically!), to the extent that it can be assessed at all. Research in history of medicine is no different than bench research in biomedicine or any other basic research endeavor, if using utility as yardstick, because no one can predict the future.
I worked for several years at a genomics company where we learned a lot about cancer biology, but whether that knowledge was useful or not was impossible to determine except insofar as knowledge is useful for its own sake. Even evaluating it comparatively–was that knowledge more or less useful than my historical work on children’s nutrition programs?–is impossible to gauge, unless you have some magical Aristotelian formula that assigns precise values to potentiality. Arguments like Horton’s are not about utility per se but about values. He’s basically implying that he (and I think it’s no great stretch to suggest that he purports to represent a far larger group of medical professionals) values biomedical research intrinsically (biomedical = good, for surely he would not declare biomedical research “moribund”), but he values history of medicine only insofar as he can perceive an immediate and obvious use for it (HoM = good iff useful to me right now). That’s not a statement of logic, it’s a statement of belief. What’s worse, it is a shockingly naive view of utility, as if things like epistemological perspective or the way historians frame, investigate, and answer questions isn’t potentially “useful” even if the outcome of any specific research endeavor may not be in some particular context.
The problem, as Carsten, you, and others suggest, is that values, whether well-founded or no, are powerful when shared by people in positions of power. I fully agree with your assessment–that valuation should not be tied to utility–and it seems the most significant challenge historians of medicine face is not convincing medical professionals and the biomedical-industrial complex that we (and our work) are useful but that there is no real way of assessing the utility of our work except, ironically, historically.”
(I may have an additional post on this tomorrow or the day after, engaging some conversation Mark Weatherall and I have been having on the Twitterz).
Ho hum. Another day, another person doubting the worth of [insert humanities field here] history. Most recently, we have Richard Horton, an editor of The Lancet, writing a commentary in which he declares that
Most medical historians, it seems, have nothing to say about important issues of the past as they might relate to the present. They are invisible, inaudible, and, as a result, inconsequential.
*Pauses to note irony of editor of THE LANCET proclaiming the history of medicine moribund*
Horton is not disdainful of the field itself; he is really lamenting a Golden Past, itself a common historiographical narrative:
Medical historians have made critically important contributions to public debates about health, health services, and medical science.
*Pauses again for irony*
Horton goes on to list a number of texts that he thinks provide such a contribution, but opines that
for almost two decades, medical historians have produced little that has provided truly fresh insights into our understanding of health and disease . . . it seems fair to conclude that medical history is a corpus of activity lying moribund on its way to the scholarly mortuary.
Goodness me . . . glad I’m not involved in any ~~corpse animation~~ such endeavors . . .
Carsten Timmermann at Manchester and Simon Chaplin of The Wellcome Library (The Happiest Place on Earth for historians of medicine & public health) issued excellent responses. Given that Horton professes, at least, to see the value in the history of medicine per se, the most direct response is to declare that the critic’s opinion is mistaken, and point to just a few of the large number of examples of meritorious works.
I endorse these rebuttals wholeheartedly. But part of me wonders whether engaging on these terms itself cedes too much ground. That is, Horton’s view of the value of the history of medicine as a field of inquiry is entirely instrumental. It is of value only to the extent that it contributes to ongoing public debates about health, health services, and medical science. The contrapositive follows: insofar as the history of medicine and public health is not providing such contributions, it lacks value.
Because I do public health law/policy/ethics AND the history of medicine/public health, I spend an inordinate amount of time thinking about the advantages and drawbacks to thinking about the latter fields in this instrumental sense. I’ve presented on it (slides here; precis here), blogged about it, and discussed it with other historians in person and on social media. I actually do not think it is as tricky as it seems.
The only problem with reductionism is that it is reductionist. What I mean is that the problem with saying that the history of medicine/public health is valuable because of the light it sheds on contemporary issues in health and medicine is absolutely not that the proposition is false. It is most assuredly true. As Drs. Timmermann and Chaplin note, there is no shortage of outstanding recent works in the field that are indisputably relevant to public conversations in health and medicine.
But what if there weren’t?
That is, let us assume for the moment that a key premise of Horton’s critique is true, that it accurately reflects the state of the world — that, in point of fact, there have been precisely no recent works in the history of medicine and public health that can inform contemporary public conversations on health and medicine.
Horton’s conclusion — that the fields are moribund — would nevertheless be invalid. It does not follow even if we grant his factual premise. This is because of the central unstated premise in Horton’s position: that history, or at least, certain subfields, is of value only insofar as it is useful in illuminating contemporary problems.
I think we should reject this premise with extreme prejudice. We ought to study history — or anything else for that matter — because it is of inherent worth, because understanding how things happened, how people acted, and what may have motivated them is inherently valuable. As I have remarked, reducing history to its (very real) instrumental value is enough to bring Clio’s wrath down upon all of our heads . . .
What I’m interested in here is the idea of enterprise justification. That is, Horton, and critics like him, are not, IMO, simply asking for an accounting of the value of the field. Such is relatively easy to provide, although in saying so I do not mean for a moment to denigrate the time and energy expended by those noble souls who deliver it. Rather, I think what the critics are seeking is a justification for the entire enterprise. It reminds me of the rasha, the “wicked” son in the emblematic Passover story of The Four Sons:
The Haggadah explains that the “wicked” son looks around at all of the participants engaged in telling their stories, relating their shared history, and says, “What does all of this mean to YOU?” The Haggadah instructs that by phrasing the question this way, he has excluded himself from the community of storytellers. He is, in essence, challenging the justification for the entire enterprise.
I guess the point — there’s a point! — of all of this is to suggest that responding to critics like Horton by detailing some of the outstanding works that do in fact have great relevance for contemporary discussions of health and medicine is, while important and worthwhile, to some extent still playing in the critic’s dojo. At the same time the challenge is and should be met on the critic’s terms, I think it is also important to destabilize the assumed framework for the contest: the justification for studying the history of medicine and public health is not (exclusively) the insights it provides for contemporary policy and practice.
Even where we historians of medicine & public health ought to highlight and lionize the significance of those insights, history ought never be reduced to its instrumental value.
From Joanna Bourke’s stunning new book on the history of pain (p. 269):
But the narrative medicine that is promoted by many concerned commentators in the medical humanities is also infused with a particular class-based ideology that assumes the speech and writing is redemptive. Like the ‘men of feeling’ of the eighteenth century, linking sympathy with narrative medicine was and is highly dependent upon the statements by articulate and often elite patients.