[a] story-centered approach to examining health policy, however, is barely tolerated in mainstream policy circles. One recent example is a blog post on the Health Affairs website written by economists Katherine Baicker and Amy Finkelstein. Based on their own research into Medicaid expansion in Oregon, these authors argue that stories are little more than anecdotes that are incapable of providing the “evidence” necessary to develop and assess health policies. Instead they contend that more “rigorous” methods, and particularly randomized controlled trials (RCTs), are needed.
The Q&A is the subject of a fascinating, wide-ranging conversation I had with the journalist Sarah Zhang. At one point, we have the following exchange:
Those attitudes, practices, and beliefs are handed down, and of course, they’re also racialized. One of the things we know is that persons of color, especially black people in the U.S., are much less likely to receive opioids than white people. As it turns out, it might actually be good for them.
Zhang: Right—and that’s one of the reasons why the opioid epidemic has been centered around white communities.
Goldberg: Yes, to the extent people of color might be missing some of the opioid epidemic relative to comparably situated white people, that’s a public health gain, but they’re also experiencing pain stigma, and that’s a public health loss.
On Twitter, several readers were concerned that this exchange might seem to indicate a belief that racial discrimination in access to health care resources is acceptable if it produces salutary health consequences.
This is not what I meant, but when it comes to racism, what the speaker means is far less relevant than what the speaker says. Language matters.
Here is what I should have said:
Racial discrimination in access to health care resources is immoral. Period.
There is an active debate, of which I am part, regarding whether stigma can ever be justified if it produces good public health ends. I do not think it can be so justified, but others disagree.
Regardless, that debate is inapplicable here because, when opioids were being liberally prescribed, they were widely believed to produce more benefit than harm. They were perceived as a health good. Therefore, access to such an intervention was widely deemed to be just and right, and racial inequalities in such access were not justifiable when they were occurring. Denying people what was widely believed to be a salubrious intervention along racial lines cannot be justified retrospectively because it resulted in some unintended health good.
On the heels of my recent post regarding Gaetan Dugas and the origin (myths) of HIV/AIDS in the U.S. comes this 10/29 editorial from NYT science writer Donald G. McNeil, Jr.
The editorial is strange, IMO.
One of the books I love teaching above all else is Judith Walzer Leavitt’s fabulous Typhoid Mary: Captive to the Public’s Health. This is a spectacular book, and my students regularly confirm this perspective. It is one of those rare texts that manages to satisfy the scholarly standards expected by professional historians of medicine and public health at the same time that it is readable and accessible for non-expert audiences (undergraduates, public health students, etc.).
My pedagogical affect is generally quite genial and affable. I smile often in class, in part because I love teaching so much, and in part because I try to practice compassion and accessibility with my students. I begin most of my courses by sharing a significant amount about myself and my background, in part to build rapport and to try to diminish the hierarchical distance that often exists between student and teacher.
But because health stigma is a frequent topic of discussion and analysis in all of my teaching, we often find ourselves broaching the subject of Mary Mallon.
(x-posted from Yoni Freedhoff’s excellent Weighty Matters blog)
The Academy of Nutrition and Dietetics (“AND”) recently held its annual meeting. According to dietitian Andy Bellatti (and others), during her opening talk, the President of the AND, Lucille Beseler, opined on the rising concerns over financial conflicts of interest among nutritionists and dietitians: “I’m not so weak-minded that I would make a decision on receiving a pen.”
In the course of researching, writing, and teaching about conflicts of interest among health professionals for over a decade, what I have come to marvel at the most is the apparent ease with which leading health professionals insist on a kind of willful ignorance regarding the cognitive science underlying concerns over COIs. This is amazing to me because of the irony: members of professions presumably dedicated to basing their practice on the best evidence proceed to engage concerns over COI with almost no awareness whatsoever of what the best evidence actually suggests regarding the impact of conflicts of interest on human behavior.
This is why, in December of 2015, I published a small chart with explanation in BMJ entitled “COI Bingo.” I grew so exasperated with the same tired justifications for financial COIs that I categorized the standard responses into a Bingo chart:
Reasonable people of good conscience can, of course, disagree on whether the behavior of partiality that occurs in the presence of COIs is morally justified, and on the appropriate remedies, if any, for such behavior. But the argument should proceed with all stakeholders fully aware of what the cognitive science actually suggests regarding the impact of COIs on health professional behavior.
What does that evidence show? Gifts unquestionably influence health professionals’ behavior, at least in the aggregate. Not only have we documented this finding ad nauseam, we also have powerful causal explanations that elucidate the mechanisms by which even gifts of de minimis value influence health professional behavior. Virtually all human societies exchange gifts — they promote social cohesion and are therefore a critical adaptive mechanism. One of the ways gifts accomplish such cohesion is that they tend to automatically and unconsciously create a desire to reciprocate on the part of the recipient.
Commercial industries are well aware of this phenomenon, which is why they provide such gifts. The gift exchange also cements the relationship with industry, which is precisely what commercial industry is most interested in. The tighter the relationship that exists between commercial industries and health professionals, the more likely it is in the broad run of cases that behavior of partiality will occur. COIs have to be understood iteratively — the existence of a financial COI does not imply that bad behavior will necessarily take place in any given case. But over the long run of cases, the existence of financial COIs makes shenanigans much more likely — a conclusion which is — again — extremely well-documented in both experimental and uncontrolled (i.e., real-life) conditions.
That these kinds of gifts “work” in the health professions to serve the interests of the “donor” is therefore beyond dispute. Moreover, some of the more darkly amusing findings in the COI literature document our own immunity bias: while we imagine ourselves much less likely to be influenced by pens and mugs, we have serious concerns that the professional sitting next to us may have their judgment clouded if they accept gifts from commercial industry:
(Steinman, Shlipak & McPhee 2001)
Maybe any given health provider will indeed remain entirely uninfluenced by deep entanglements with commercial industry. But the evidence establishes beyond all doubt that the odds are forever not in your favor.
Ultimately, far too many stakeholders seem willing to wade into the fray with a perfect, almost studied indifference to the significant evidence base regarding COIs. This is itself an ethical problem — mistakes themselves are not ipso facto morally blameworthy — but mistakes made because health professionals did not bother to examine an available evidence base and ground their practice in that evidence come much closer to moral failure.
Since I’m apparently blogging again, I might as well make it a two-fer, as they say in the Southern U.S.
In a recent blog post for JPHMP Direct, I argue the following:
the genesis of organized public health in the modern West is firmly and unquestionably rooted in social reform. Historians of public health have documented repeatedly that early public health actors were reformers and advocates to the bone.
For example, Jacob Riis’ stunning photos of NYC tenements in the 1890s were absolutely crucial to sparking municipal public health action. Riis undoubtedly saw himself as an advocate and an activist, and there is no doubt public health was the better for it. In 2010, a group of historians and scholars at Columbia University’s Center for the History & Ethics of Public Health explicitly argued that public health in the US has essentially departed from its roots in social reform and public advocacy, and that this departure has had and will continue to have grave effects on population health and health inequalities.
This is in fairness an oversimplification, one wrought from considerations of venue, word limits, and audience. (All of what follows is I think pretty well-settled ground for historians of public health in the modern West, but by all means, dear readers, tell me if I am mistaken).
Bill Gardner over at The Incidental Economist has a thoughtful post on a NY Times op-ed that appeared last Friday. The op-ed pointed out that for all of our considerable investment in basic neuroscience, we have not gained many significant clinical psychiatric interventions over the past 20 years.
Gardner points out that there are a number of evidence-based psychotherapeutic interventions, and concludes that “the National Institute of Mental Health (NIMH) should be funding more research on psychotherapy!” But, he cautions, it does not follow that we should reduce funding for neuroscience research:
The brain is the most important yet least understood organ in the body. Eventually, we will understand the brain. When that day comes, that understanding will be transformative.
Now, given that one of my central areas of scholarship within public health ethics is on priority-setting, I found myself wondering if we can really avoid difficult allocative decisions simply by increasing absolute levels of funding. So, I asked Bill on Twitter, what happens if we cannot do the latter? What if we have to make a decision regarding whether to reduce funding levels for basic neuroscience in order to increase funding levels for psychotherapy research? Which is our priority, and why?
Believe it or not, I was actually not trying to be cheeky here! These kinds of questions are literally core to my research. Of course, as I argue in lots of places, we want to avoid the false choice fallacy. That is, we do not have to choose among A, B, or C. We can pursue all of them. But, as I responded to Bill:
We can do lots of different things, but given scarce resources, we cannot do them all at the same level of investment. Even if we were to increase the size of the pot for research, we would still be faced with the same kinds of allocative questions: should funding for basic neuroscience be maintained at the same proportion relative to that allotted for psychotherapy research? Should we change the ratio of funding? Why or why not?
Bill argued that given the constriction of research funding for NIH in general, let alone NIMH, trying to make these kinds of priority-setting assessments would be distorted:
Here, I want to respectfully disagree with Bill. There is nothing “false” about making difficult decisions of priority-setting in times of scarce resources. Although fighting for the world we want to see is of great moral significance, as I often remark to my students, sometimes we cannot escape difficult moral problems in the here and now. We are, as they say, in the s*it.
The world we inhabit is one in which we have scarce resources for mental health research. We are faced with a dilemma as to how best to allocate those resources. Although we can choose to sponsor research in a variety of areas, we nevertheless cannot avoid the question of which areas should receive relatively more or less of our resources.
Finally, as I remarked to Bill, we should also pay heed to the fact that the world we live in is one in which, as Nikolas Rose & Joelle Abi-Rached put it, the neuromolecular gaze dominates (see also Stephen Casper’s wonderful scholarship on the neuro-turn). As such, even if we were to increase absolute levels of funding, we would have as a sociological matter every reason to suspect that the lion’s share of increased funding would go to basic neuroscience research. If this is correct, and we have little reason to suspect otherwise given current neuromania, increasing absolute levels of funding would inspire little confidence that we would substantially improve the dearth of resources currently allocated to psychotherapy research.*
*James Coyne has argued in many fora that the quality of the evidence base supporting much psychotherapy leaves much to be desired (to put it mildly). I offer no opinion on that here — indeed, I’m no methodologist so am not really qualified to weigh in on many of the technical issues in this vein — but it is worth noting.
(In which he emerges from his dogmatic slumber . . . )
Regular blogging is obviously not my thing, but every now and then something arises that I can’t easily discuss in 140 characters on my beloved Twitter, so here we go.
A new article in PLoS Medicine flashed in my Inbox this afternoon, entitled “A Global Biomedical R&D Fund and Mechanism for Innovations of Public Health Importance.” This really grinds my gears. Perhaps surprisingly, this irritates me not because I am opposed to establishing a “biomedical R&D fund and mechanism for innovations of public health importance.” Not at all — if we can find better ways of generating pharmaceutical products for which robust evidence shows important impact on population health, especially in emergent public health scenarios, that sounds fine to me.
What is not fine with me is the way in which public and population health continually seems to be captured by biomedical culture and biomedical interventions, the most prominent of which, of course, is pharmaceuticals. I am referring here to the medicalization of global health policy. Such medicalization is bad for any number of reasons. First, overwhelming evidence shows that substantial improvements in overall population health and in the compression of existing health inequities (global or otherwise) are extremely unlikely to flow from acute care services, including but not limited to drugs. There is virtually no question that collective action on upstream, macrosocial determinants of health is vastly more likely to improve overall population health and to compress inequities, which matters because these two criteria are core to any adequate theory of justice in population health.
The concern, as I have written about independently here,* and as Joe Gabriel and I discuss here, is that the frame of the debate about how best to improve global health is cast on the geopolitical scale in terms of how best to ensure access to pharmaceuticals. To the extent that the frame obscures an unquestionably more important discussion — intervention on fundamental causes of disease — it is ethically suboptimal.
Second, as Vicente Navarro has pointed out, the perpetual tendency to focus myopically on biomedical interventions for social problems — make no mistake, health is a social problem — has actually had the historical tendency to weaken larger public health and social welfare systems, especially in the global South. The emphasis on magic bullets tends to absorb resources that could be better allocated to interventions that act on structural determinants of health outcomes, on the amelioration of structural violence, etc. So, as Navarro points out, even though the eradication of smallpox is unquestionably a good thing, it nevertheless resulted in a substantial weakening of larger health and social welfare systems in some of the most resource-poor settings on the planet. This is Bad.
Biomedical interventions supported by robust evidence of population health impact obviously have a role to play in improving global health, and in responding to acute public health emergencies. But we are not going to resolve our health problems, nor compress the staggering global inequities in health, via pharmaceuticals. Social problems rooted in adverse and deeply entrenched social structures must be resolved at that level, or not at all.
*This paper is essentially an article-length exposition of many of the themes of this blog post.
John Yamamoto-Wilson, an early modern historian who has published a recent book on pain and suffering in 17th c. England, has a fascinating blog post examining some of Olivia Weisser’s forthcoming work on pain, suffering, and gender in early modern England (go Wes!).
(Historians of pain are eagerly awaiting Dr. Weisser’s forthcoming book!)
I am interested in Dr. Yamamoto-Wilson’s conclusion, but I do want to first note the posture in which I approach this subject.
First, thanks to all readers for engaging the first post in this series. I think it might already be the most widely-read post in the very short history of this blog.
Andrew Ruis, an historian of medicine and public health at the University of Wisconsin-Madison, left a comment on my post that was so excellent I immediately requested permission to bump it up to the post level. Andrew graciously consented, so here it is, reprinted verbatim, with paragraph breaks added only to ease readability:
“The issue that always floors me with characterizations like Horton’s is that utility is a useful metric for, say, skills, but a poor one for knowledge generation, where utility (beyond statements like “all knowledge is useful”) can only be assessed retrospectively (historically!), to the extent that it can be assessed at all. Research in history of medicine is no different than bench research in biomedicine or any other basic research endeavor, if using utility as yardstick, because no one can predict the future.
I worked for several years at a genomics company where we learned a lot about cancer biology, but whether that knowledge was useful or not was impossible to determine except insofar as knowledge is useful for its own sake. Even evaluating it comparatively–was that knowledge more or less useful than my historical work on children’s nutrition programs?–is impossible to gauge, unless you have some magical Aristotelian formula that assigns precise values to potentiality. Arguments like Horton’s are not about utility per se but about values. He’s basically implying that he (and I think it’s no great stretch to suggest that he purports to represent a far larger group of medical professionals) values biomedical research intrinsically (biomedical = good, for surely he would not declare biomedical research “moribund”), but he values history of medicine only insofar as he can perceive an immediate and obvious use for it (HoM = good iff useful to me right now). That’s not a statement of logic, it’s a statement of belief. What’s worse, it is a shockingly naive view of utility, as if things like epistemological perspective or the way historians frame, investigate, and answer questions isn’t potentially “useful” even if the outcome of any specific research endeavor may not be in some particular context.
The problem, as Carsten, you, and others suggest, is that values, whether well-founded or no, are powerful when shared by people in positions of power. I fully agree with your assessment–that valuation should not be tied to utility–and it seems the most significant challenge historians of medicine face is not convincing medical professionals and the biomedical-industrial complex that we (and our work) are useful but that there is no real way of assessing the utility of our work except, ironically, historically.”
(I may have an additional post on this tomorrow or the day after, engaging some conversation Mark Weatherall and I have been having on the Twitterz).