
Research Wisdom

On Linking Health Research and Policy
By Jae Kennedy

Many current healthcare research articles begin with an offhand reference to health reform, e.g. “following passage of the Affordable Care Act in 2010, payers and policymakers are paying increasing attention to…” It’s a rather transparent bid to assert the timeliness and relevance of the study topic, but the policy doesn’t really drive the problem framing, analysis strategy, or interpretation of study findings. The study may be quite rigorous and focus on an important problem, but it’s not about the policy referenced in the first or second paragraph. That’s fine, by the way; there are many types of really important research questions that don’t have a clear link to public policies or programs.
Many of you know that I recently received a fairly large federal grant to evaluate the impact of recent federal policy reforms on the physical, emotional and economic health of working-age adults with significant disabilities. I’m proud of the proposed work, and excited about the potential impact. As someone who has spent over half his life doing research on health and social services, I would advise researchers who aspire to influence policy to be: a) critical, b) practical, c) humble, and d) persistent.

Critical analysis
Most legislation, regulation and program development is grounded in a set of general assumptions about human and organizational behavior. Policymakers assume that individuals or groups will rationally respond to changing incentives and reallocation of resources. Evaluation scientists like Leonard Bickman stress the critical importance of program theory – the causal models which lead us to hypothesize that a given intervention will lead to a desired outcome within a target population.
Pulling apart and critically analyzing the assumptions underlying an intervention is the first step in any policy research. Ask some basic questions: Why do we think this can work, how can we tell that it works, and is this the quickest, cheapest, or most effective way to reach the desired outcome? Think about generalizability – do we think this will work in other settings or with different populations?
Sometimes, program theory is quite tenuous – for example, I remain mystified about how removing my shoes in the airport security line will improve public safety. In such cases, we may simply note these logical inconsistencies and speculate on the utility of the intervention.
In other cases, the theory is quite explicit. For example, the ACA initiated a number of coordinated care demonstration projects for people dually eligible for Medicare and Medicaid, to test the hypothesis that this high-use, high-cost population would use less hospital and long-term care if their ambulatory and social services were managed by a nurse or social worker. Frankly, I’m skeptical of this premise, but I am happy to be swayed by strong evidence.

Methodological flexibility
Effective program evaluators and policy analysts must make a virtue of necessity with regard to research strategy. A complex and rapidly changing policy environment rarely lends itself to rigorous methodologies like clinical trials, so researchers must make do with some combination of observational data, program records, focus groups, interviews, and hastily administered surveys. There are a few notable exceptions – Oregon state officials, for example, basically created a lottery for a limited number of new Medicaid benefits, and systematically compared health services utilization and outcomes for those who did or did not receive coverage. But for the most part, policy research is reactive. It is no surprise that clinical researchers, and the agencies that fund them, are often dubious about the scientific rigor of this kind of research.
A lot of the work I do uses existing data, i.e. information drawn from ongoing national surveys conducted by the CDC, Census Bureau, Department of Labor, or the Social Security Administration. Grant and journal reviewers often haughtily dismiss this work as “descriptive,” which seems to equate to superficial and/or atheoretical. But frankly, the most important questions for decision makers enacting new policies are “just how many people are we talking about?” and “what is the current health and socioeconomic status of the target population?”
Once an existing policy is changed, or a new policy is enacted, the obvious question is “who is affected, and how?” Did the intervention have its intended effect? What about unintended effects? This invariably requires asking thoughtful questions of a lot of people. New questions will emerge from this process, and participant observations will need to be verified. Any line of research can be viewed as an ongoing conversation, but policy conversations can be particularly disjointed and raucous.

Realistic expectations of impact
A lot of our most basic human problems do not readily lend themselves to policy resolution because they are too big, too difficult or too expensive to address (I’m thinking here about the platform of “Lonesome No More!” put forward by Dr. Wilbur Daffodil-11 Swain, the presidential candidate in a post-apocalyptic United States from Kurt Vonnegut’s 1976 novel, Slapstick).
John Kingdon suggests that policies emerge from the periodic convergence of different streams of problems, solutions, and opportunities. A constantly changing set of actors draws attention to the problems of special interest groups (e.g. cancer survivors, drug manufacturers, or culinary workers) and/or advocates for preferred solutions (e.g. tax cuts, unionization, or military intervention), and these can converge into new policies during brief “windows of opportunity” which open mainly in times of economic and political uncertainty.
The role of scientific evidence in this policymaking process is depressingly modest. Interest groups may seize upon a study that supports their political agenda, but if no good research is available, they will justify their preferred solution with anecdotal evidence. Nonetheless, I must believe that the availability of relevant and timely research during periods of potential change increases the chance of developing effective and efficient policies and programs.
With my current grant, this means dedicating significant resources to knowledge translation, i.e. web pages, fact sheets, training workshops, etc. I would have a much harder time doing this if I weren’t already a full professor – there isn’t as much urgency these days in generating a steady count of first-authored peer-reviewed journal articles. It also means developing explicit and substantive relationships with consumer groups and advocates, and investing time and money to maintain them.
I hope my research informs the policy discussion. But some individuals and groups are so wedded to their beliefs that they are incapable of processing any information which threatens those beliefs. Economists Morris Barer and Robert Evans, for example, wrote an article challenging the assertion of conservative politicians that Canadians regularly flee their single-payer health system to obtain needed surgeries in the US. They called it “Phantoms in the Snow,” and showed that there is no evidence whatsoever that any such medical migration occurs. I doubt this article led a single Republican politician to change his opinion of Canadian healthcare. Nonetheless, it added to the evidence base and provides context for future debates. That’s about the best outcome you can expect in this line of work.

Patience and persistence
You’ve probably heard the quote, attributed to Albert Einstein, Ben Franklin or Mark Twain, that “insanity is doing something over and over again and expecting a different result.” I frequently question my sanity. The vast majority of the grants, abstracts and manuscripts I submit are rejected. Moreover, I really don’t think my ratio of successes to failures has improved over time. But I have learned to keep on trying, and to accept that career success ebbs and flows while the longer-term impact of a research program is cumulative.
The two largest grants I’ve received in the last five years were each funded on the fourth submission. They took literally years of work and rework. A common strategy for progressing on the tenure track is to start as a small fish in a small pond, getting to know the research and researchers in a relatively small and well-defined domain. For health policy research, I still feel like a small fish plunging into a rushing stream.
But it’s exciting, and a lot of people care about the things I study. That really helps me start on the next manuscript or proposal, despite the long odds. So, to all aspiring researchers who hope to influence policy and practice, I say welcome, and get ready for a long haul.