On Priorities in Health Policy (as to Cost-Effectiveness Research)

This Health Affairs blog post is irritating, though not because it’s wrong. In the blog post, the authors acknowledge the following facts:

  • Health care services account for, at most, 15-20 percent of population health outcomes
  • The vast majority of Americans believe that health is largely a function of access to health care services
  • The vast majority of cost-effectiveness research is done on health care interventions (i.e., in the clinical sector)

These points, and others like them, are the empirical jumping-off points for my own work, which essentially pleads with scholars and policymakers alike to devote their attention and resources to the prime determinants of health and its distribution (the social and economic conditions in which people live, work, and play).

I suppose what irritates me most is that health policy experts in the US spend the vast majority of their time on health services research (HSR) and health insurance.  These paradigms are important and worthy of attention.  But given the overwhelming evidence that health care services and access to health insurance are not the prime determinants of health and its distribution, this allocation of time, energy, and attention seems misdirected.  And I charge health policy experts with knowing this, which makes their overwhelming focus on health care and access to it all the more frustrating.  The authors of the blog post briefly address the causes of this focus:

Given the large spend in the health sector, health services researchers and economists understandably gravitate to clinical interventions to investigate where greater efficiency and effectiveness can be found. And because so much money is invested in medical care and the research that underpins it, such studies are comparatively easy to undertake. Meanwhile, the evidence base is much less straightforward when it comes to analyzing the cost-effectiveness of interventions that target more broadly health-beneficial policies, those that through tax policies promote and inhibit health-affecting behaviors, those directed at built and natural environments, education, and other strategies that directly or indirectly affect people’s health.

Priority-setting matters.  And we spend far too much time talking and arguing over things and phenomena that IMO do not justify the moral priority we seem to grant them.  Even the terms of our debates reflect this, as in the pharmaceuticalization of health (to the point where people on all sides of the debate frame health in terms of access to drugs).  The blog post reflects this as well.  It is not simply that expenditures on health care dwarf those on whole-population interventions (the exact proportion is hard to quantify, but by several measures public health spending in the US is rarely more than 3% of overall health expenditures); even our attempts to analyze health and the welfare state fall prey to the Health Care Beast.  (The authors' point is that we do most of our cost-effectiveness analysis (CEA) on clinical interventions and health services, as opposed to whole-population interventions targeted at root social determinants.)

Although I generally agree with the blog post, I'd also note that we do have some excellent CEA on upstream interventions, such as the phenomenal work on the Abecedarian Project. The ROI for the kind of intensive early childhood intervention evaluated there is so good that Nobel laureate James Heckman has made such investment a cornerstone of The Heckman Equation.
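For readers who want the mechanics behind claims like Heckman's: returns of this kind are standardly expressed as a benefit-cost ratio (BCR) or an internal rate of return (IRR). The notation below is my own generic sketch of those two measures, not the specific model used in the Abecedarian analyses:

\[
\mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t\,(1+\delta)^{-t}}{\sum_{t=0}^{T} C_t\,(1+\delta)^{-t}},
\qquad
\mathrm{IRR} = \text{the } r \text{ solving } \sum_{t=0}^{T} (B_t - C_t)(1+r)^{-t} = 0,
\]

where \(B_t\) and \(C_t\) are a program's benefits and costs in year \(t\) and \(\delta\) is the chosen discount rate. A BCR above 1 (equivalently, an IRR above the discount rate) means the intervention more than pays for itself; that is precisely the claim made for intensive early childhood programs.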

Finally, the HA blog post leaves unaddressed what to my mind is the crucial question: is the evidence base we do possess on the social determinants of health sufficient to justify public health action? Or need we wait until we have broader and perhaps more rigorous cost-effectiveness research on root social determinants?

I think the evidence is more than sufficient to justify action, at least in some paradigms and as to some interventions, and a significant portion of my work is devoted to justifying this conclusion.

Thoughts?

On Stories, Rigor, and Public Health Policy

From the fascinating MAQ Blog, curated by the delightful Theresa MacPhail:
[a] story-centered approach to examining health policy, however, is barely tolerated in mainstream policy circles. One recent example is a blog post on the Health Affairs website written by economists Katherine Baicker and Amy Finkelstein. Based on their own research into Medicaid expansion in Oregon, these authors argue that stories are little more than anecdotes that are incapable of providing the “evidence” necessary to develop and assess health policies. Instead they contend that more “rigorous” methods, and particularly randomized controlled trials (RCTs), are needed.
Interesting. I’ll say right off the bat I’m a friend and a fan of med anthro, qualitative work, and ethnography in general. I rely on it heavily for my work.
 
I’m also not here for the epistemic chauvinism evident in the notion that “stories are not rigorous,” which is abject nonsense if you know the first thing about ethnographic and qualitative methods (more specifically: stories in and of themselves may not constitute rigorous data, but neither does a random internet survey; rather, systematic and careful methodology can render the collection and analysis of either stories or insurance claims data rigorous, valid, and reliable).
 
I have firsthand practical experience of the power of narratives. I practiced pharmaceutical, hospital, and insurance litigation for 5 years in Texas, including mass tort pharma litigation. Causation in these cases was almost impossible to prove, because the plaintiffs were often quite sick already, there were a zillion confounders, and epidemiologic causation is brutally difficult to establish.
 
By the standards required under the law, plaintiffs should basically never have won.  But they did, and when they did, it was because their stories were compelling.
 
When I teach about the use of stories in public health policymaking, we work from the notion that such stories are crucial, indispensable, and yet also dangerous. We ignore these stories at our peril — there is so much that is so important in them.
 
And yet, public health policy made on the basis of a few stories — note that in this context the stories would NOT make for valid and reliable data — can be disastrous, and there are many examples of this. The one I use is the set of horrifying patient-dumping stories of the early 1980s, which led Congress to enact the unfunded mandate of EMTALA (the Emergency Medical Treatment and Active Labor Act). There is good evidence that EMTALA has actually DECREASED access to emergency care for people in at least some rural and resource-poor settings. The use of stories in the congressional testimony prompted Congress to run off and apply a fix that was in too many cases actually worse than the status quo.
 
This is not a basis for rejecting the use of stories in public health policy, of course. As I said, they are indispensable to sound, humane public health policymaking.  But basing public health policies only on stories that are not collected and analyzed in rigorous ways is indeed dangerous.
Here’s a nice defense of the use of stories in both clinical practice and health policy, in JAMA, no less.
Thoughts?
(For those interested in looking further into this issue, the work of political scientist Sylvia Tesh is quite fascinating).

On Stigma, Race & Pain

Earlier today, The Atlantic published a Q&A with me regarding my recent publication on history, pain, and stigma.

The Q&A grew out of a fascinating, wide-ranging conversation I had with the journalist, Sarah Zhang.  At one point, we had the following exchange:

Goldberg: Those attitudes, practices, and beliefs are handed down, and of course, they’re also racialized. One of the things we know is that persons of color, especially black people in the U.S., are much less likely to receive opioids than white people. As it turns out, it might actually be good for them.

Zhang: Right—and that’s one of the reasons why the opioid epidemic has been centered around white communities.

Goldberg: Yes, to the extent people of color might be missing some of the opioid epidemic relative to comparably situated white people, that’s a public health gain, but they’re also experiencing pain stigma, and that’s a public health loss.

On Twitter, several readers of this exchange were concerned that this might seem to indicate a belief that racial discrimination in access to health care resources is acceptable if it results in salutary health consequences.

This is not what I meant, but when it comes to racism, what the speaker means is far less relevant than what the speaker says.  Language matters.

Here is what I should have said:

Racial discrimination in access to health care resources is immoral.  Period.

There is an active debate, of which I am part, regarding whether stigma can ever be justified if it produces good public health ends.  I do not think it can be so justified, but others disagree.

Regardless, that debate is inapplicable here because, when opioids were being liberally prescribed, they were widely believed to produce more benefit than harm.  They were perceived as a health good.  Therefore, access to such an intervention was widely deemed to be just and right, and racial inequalities in such access were not justifiable when they were occurring.  Denying people what was widely believed to be a salubrious intervention along racial lines cannot be justified retrospectively because it resulted in some unintended health good.

Thoughts?

On Stigma, Public Health, and Narrative Ethics

One of the books I love teaching above all else is Judith Walzer Leavitt’s fabulous Typhoid Mary: Captive to the Public’s Health.  This is a spectacular book, and my students regularly confirm this perspective.  It is one of those rare texts that manages to satisfy the scholarly standards expected by professional historians of medicine and public health at the same time that it is readable and accessible for non-expert audiences (undergraduates, public health students, etc.).

My pedagogical affect is generally quite genial and affable.  I smile often in class, in part because I love teaching so much, and in part because I try to practice compassion and accessibility with my students.  I begin most of my courses by sharing a significant amount about myself and my background, partly to build rapport and partly to diminish the distance created by the hierarchies between student and teacher.

But because health stigma is a frequent topic of discussion and analysis in all of my teaching, we often find ourselves broaching the subject of Mary Mallon.


On Evidence & Conflicts of Interest

(x-posted from Yoni Freedhoff’s excellent Weighty Matters blog)

The Academy of Nutrition and Dietetics (“AND”) recently held its annual meeting.  According to dietician Andy Bellatti (and others), during her opening talk, the President of the AND, Lucille Beseler, opined on the rising concerns over financial conflicts of interest among nutritionists and dieticians: “I’m not so weak-minded that I would make a decision on receiving a pen.”

In the course of researching, writing, and teaching about conflicts of interest among health professionals for over a decade, what I have come to marvel at the most is the apparent ease with which leading health professionals insist on a kind of willful ignorance regarding the cognitive science underlying concerns over COIs.  This is amazing to me because of the irony: members of professions presumably dedicated to basing their practice on the best evidence proceed to engage concerns over COI with almost no awareness whatsoever of what the best evidence actually suggests regarding the impact of conflicts of interest on human behavior.

This is why, in December of 2015, I published a small chart with explanation in BMJ entitled “COI Bingo.”  I grew so exasperated with the same tired justifications for financial COIs that I categorized the standard responses into a Bingo chart:

[Figure: the COI Bingo chart, as published in BMJ]

Reasonable people of good conscience can, of course, disagree on whether the behavior of partiality that occurs in the presence of COIs is morally justified, and on the appropriate remedies, if any, for such behavior.  But the argument should proceed with all stakeholders fully aware of what the cognitive science actually suggests regarding the impact of COIs on health professional behavior.

What does that evidence show? Beyond a shadow of a doubt, gifts do influence health professionals’ behavior, at least in the aggregate.  Not only have we documented this finding ad nauseam, we also have powerful causal explanations that elucidate the mechanisms by which even gifts of de minimis value influence health professional behavior.  Virtually all human societies exchange gifts — they promote social cohesion and are therefore a critical adaptive mechanism.  One of the ways gifts accomplish such cohesion is that they tend to automatically, unconsciously create a desire to reciprocate on the part of the recipient.

Commercial industries are well aware of this phenomenon, which is why they provide such gifts.  The gift exchange also cements the relationship with industry, which is what commercial industry is most interested in.  The tighter the relationship between commercial industries and health professionals, the more likely it is, in the broad run of cases, that behavior of partiality will occur.  COIs have to be understood iteratively: the existence of a financial COI does not imply that bad behavior will necessarily take place in any given case.  But over the long run of cases, the existence of financial COIs makes shenanigans much more likely, a conclusion which is, again, extremely well-documented in both experimental and uncontrolled (i.e., real-life) conditions.

That these kinds of gifts “work” in the health professions to serve the interests of the “donor” is therefore beyond dispute.  Moreover, some of the more darkly amusing findings in the COI literature document our own immunity bias: while we imagine ourselves much less likely to be influenced by pens and mugs, we have serious concerns that the professional sitting next to us may have their judgment clouded if they accept gifts from commercial industry:

[Figure: physicians’ perceived influence of gifts on themselves versus on other physicians (Steinman, Shlipak & McPhee 2001)]

Maybe any given health provider will indeed remain entirely uninfluenced by deep entanglements with commercial industry.  But the evidence establishes beyond all doubt that the odds are forever not in your favor.

Ultimately, far too many stakeholders seem willing to wade into the fray with a perfect, almost studied indifference to the significant evidence base regarding COIs.  This is itself an ethical problem — mistakes themselves are not ipso facto morally blameworthy — but mistakes made because health professionals did not bother to examine an available evidence base and ground their practice in that evidence come much closer to moral failure.

The History of (U.S. & G.B.) Public Health & Social Reform: Complicating the Narrative


Since I’m apparently blogging again, I might as well make it a two-fer, as they say in the Southern U.S.

In a recent blog post for JPHMP Direct, I argue the following:

the genesis of organized public health in the modern West is firmly and unquestionably rooted in social reform. Historians of public health have documented repeatedly that early public health actors were reformers and advocates to the bone.

For example, Jacob Riis’ stunning photos of NYC tenements in the 1890s were absolutely crucial to sparking municipal public health action. Riis undoubtedly saw himself as an advocate and an activist, and there is no doubt public health was the better for it. In 2010, a group of historians and scholars at Columbia University’s Center for the History & Ethics of Public Health explicitly argued that public health in the US has essentially departed from its roots in social reform and public advocacy, and that this departure has had and will continue to have grave effects on population health and health inequalities.

This is in fairness an oversimplification, one wrought from considerations of venue, word limits, and audience.  (All of what follows is I think pretty well-settled ground for historians of public health in the modern West, but by all means, dear readers, tell me if I am mistaken).


On Priority-Setting in Mental Health Research


Bill Gardner over at The Incidental Economist has a thoughtful post on a NY Times op-ed that appeared last Friday.  The op-ed pointed out that for all of our considerable investment in basic neuroscience, we have not gained many significant clinical psychiatric interventions over the past 20 years.

Gardner points out that there are a number of evidence-based psychotherapeutic interventions, and concludes that “the National Institute of Mental Health (NIMH) should be funding more research on psychotherapy!” But, he cautions, it does not follow that we should reduce funding for neuroscience research:

The brain is the most important yet least understood organ in the body. Eventually, we will understand the brain. When that day comes, that understanding will be transformative.

Now, given that one of my central areas of scholarship within public health ethics is priority-setting, I found myself wondering if we can really avoid difficult allocative decisions simply by increasing absolute levels of funding.  So, I asked Bill on Twitter: what happens if we cannot do the latter? What if we have to make a decision regarding whether to reduce funding levels for basic neuroscience in order to increase funding levels for psychotherapy research? Which is our priority, and why?

Bill replied (via a tweet embedded in the original post).

Believe it or not, I was actually not trying to be cheeky here! These kinds of questions are literally core to my research.  Of course, as I argue in lots of places, we want to avoid the false choice fallacy.  That is, we do not need to do either A or B or Q.  We can pursue all of them.  But, as I responded to Bill:

We can do lots of different things, but given scarce resources, we cannot do them all at the same level of investment.  Even if we were to increase the size of the pot for research, we would still be faced with the same kinds of allocative questions: should funding for basic neuroscience be maintained at the same proportion relative to that allotted for psychotherapy research? Should we change the ratio of funding? Why or why not?
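To make that arithmetic concrete, here is a toy illustration (the numbers are invented for exposition, not actual NIMH figures):

\[
\text{Budget of } 100: \quad 90 \text{ (neuroscience)} + 10 \text{ (psychotherapy)}.
\]
\[
\text{Budget of } 120 \text{, same shares}: \quad 108 + 12.
\qquad
\text{Budget of } 120 \text{, shares shifted to } 80{:}20: \quad 96 + 24.
\]

Even in the shifted scenario, neuroscience funding grows in absolute terms (96 > 90) while psychotherapy's share doubles. Growing the pot, in other words, does not relieve us of choosing the ratio.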

Bill argued (again via an embedded tweet) that, given the constriction of research funding for NIH in general, let alone NIMH, these kinds of priority-setting assessments would be distorted.

Here, I want to respectfully disagree with Bill.  There is nothing “false” about making difficult decisions of priority-setting in times of scarce resources.  Although fighting for the world we want to see is of great moral significance, as I often remark to my students, sometimes we cannot escape difficult moral problems in the here and now.  We are, as they say, in the s*it.

The world we inhabit is one in which we have scarce resources for mental health research.  We are faced with a dilemma as to how best to allocate those resources.  Although we can choose to sponsor research in a variety of areas, we nevertheless cannot avoid the question of which areas should receive relatively more or less of our resources.

Finally, as I remarked to Bill, we should also pay heed to the fact that the world we live in is one in which, as Nikolas Rose & Joelle Abi-Rached put it, the neuromolecular gaze dominates (see also Stephen Casper’s wonderful scholarship on the neuro-turn).  As such, even if we were to increase absolute levels of funding, we would have, as a sociological matter, every reason to suspect that the lion’s share of the increased funding would go to basic neuroscience research.  If this is correct, and we have little reason to suspect otherwise given current neuromania, increasing absolute levels of funding would inspire little confidence that we would substantially remedy the dearth of resources currently allocated to psychotherapy research.*

Thoughts?

__________________________________________

*James Coyne has argued in many fora that the quality of the evidence base supporting much psychotherapy leaves much to be desired (to put it mildly).  I offer no opinion on that here — indeed, I’m no methodologist so am not really qualified to weigh in on many of the technical issues in this vein — but it is worth noting.

On the Pharmaceuticalization of (Global) Public Health

(In which he emerges from his dogmatic slumber . . . )

Regular blogging is obviously not my thing, but every now and then something arises that I can’t easily discuss in 140 characters on my beloved Twitter, so here we go.

A new article in PLoS Medicine flashed in my Inbox this afternoon, entitled “A Global Biomedical R&D Fund and Mechanism for Innovations of Public Health Importance.”  This really grinds my gears.  Perhaps surprisingly, it irritates me not because I am opposed to establishing a “biomedical R&D fund and mechanism for innovations of public health importance.”  Not at all — if we can find better ways of generating pharmaceutical products for which robust evidence shows important impact on population health, especially in emergent public health scenarios, that sounds fine to me.

What is not fine with me is the way in which public and population health continually seem to be captured by biomedical culture and biomedical interventions, the most prominent of which, of course, are pharmaceuticals.  I am referring here to the medicalization of global health policy.  This is bad for any number of reasons.  First, overwhelming evidence shows that substantial improvements in overall population health and in the compression of existing health inequities (global or otherwise) are extremely unlikely to flow from acute care services, including but not limited to drugs.  There is virtually no question that collective action on upstream, macrosocial determinants of health is vastly more likely to improve overall population health and to compress inequities, which matters because these two criteria are core to any adequate theory of justice in population health.

The concern, as I have written about independently here,* and as Joe Gabriel and I discuss here, is that the frame of the debate about how best to improve global health is cast on the geopolitical scale in terms of how best to ensure access to pharmaceuticals.  To the extent that the frame obscures an unquestionably more important discussion — intervention on fundamental causes of disease — it is ethically suboptimal.

Second, as Vicente Navarro has pointed out, the perpetual tendency to focus myopically on biomedical interventions for social problems — make no mistake, health is a social problem — has historically tended to weaken larger public health and social welfare systems, especially in the global South.  The emphasis on magic bullets tends to absorb resources that could be better allocated to interventions that act on structural determinants of health outcomes, on the amelioration of structural violence, etc.  So, as Navarro points out, even though the eradication of smallpox was unquestionably a good thing, it nevertheless resulted in a substantial weakening of larger health and social welfare systems in some of the most resource-poor settings on the planet.  This is Bad.

Biomedical interventions supported by robust evidence of population health impact obviously have a role to play in improving global health, and in responding to acute public health emergencies.  But we are not going to resolve our health problems, nor compress the staggering global inequities in health, via pharmaceuticals.  Social problems rooted in adverse and deeply entrenched social structures must be resolved at that level, or not at all.

Thoughts?

*This paper is essentially an article-length exposition of many of the themes of this blog post.

On the Modern Rise of Hedonism: Changing Views Towards Pleasure, Pain & Suffering


John Yamamoto-Wilson, an early modern historian who has published a recent book on pain and suffering in 17th c. England, has a fascinating blog post examining some of Olivia Weisser’s forthcoming work on pain, suffering, and gender in early modern England (go Wes!).

(Historians of pain are eagerly awaiting Dr. Weisser’s forthcoming book!)

I am interested in Dr. Yamamoto-Wilson’s conclusion, but I do want to first note the posture in which I approach this subject.
