Developing Evidence-Based Care Protocols

Content provided by The O&P EDGE

ev·i·dence: noun

  1. that which tends to prove or disprove something; ground for belief; proof.
  2. something that makes plain or clear; an indication or sign: His flushed look was visible evidence of his fever.
  3. data presented to a court or jury in proof of the facts in issue and which may include the testimony of witnesses, records, documents, or objects.

When you're deciding what's best for your patients—how to form their spines, which components they'll succeed with, the kind of socket they need—what do you base your decisions on? You have options, so how do you know which one is best? Are you sure? Does the case present a problem that you don't have the answer to—that perhaps no one has the answer to? What do you do then? Is it possible that there are decisive factors in the case that you've never thought of—essential elements that you not only don't know, but that you don't know that you don't know?

More than any other time in history, O&P clinicians have access to evidence about how to treat patients, and not just evidence that's been published in journals or presented at meetings. Their patients yield evidence of treatment success or failure at every office visit, and that data doesn't reflect the rarefied environment of a lab, nor someone else's clinical questions; it perfectly reflects what works or doesn't work in the real world, when a real clinician's minor variations in treatment are implemented. Collecting, combining, and understanding these two kinds of information—that which is vetted by experts and that which the everyday patient presents—can provide clinicians with an extremely powerful treatment tool, one that makes immediate what they do know, helps reveal what they don't know, and reduces what they don't know they don't know.

Feeling Good in the Mountains

A particularly illuminating example of the potential of this kind of evidence-based treatment is Intermountain Healthcare, a nonprofit hospital network based in Salt Lake City, Utah. The story of its success began in the late 1980s, when Alan Morris, MD, won a grant to study a treatment for acute respiratory distress syndrome (ARDS), a brutal condition that typically killed 90 percent of the people who developed it. To avoid confounding the experiment, Morris needed to make the nonexperimental elements of how Intermountain physicians treated ARDS patients as consistent as possible. Morris and his colleague, Brent James, MD, decided to develop an evidence-based treatment protocol. To do this, they studied the available literature on the condition, then sought out expert opinions where the literature showed uncertainty. They then asked their colleagues to begin following the protocol, encouraging them to deviate from it whenever they felt it necessary, but asking them to note when, how, and why they did so in the patient's electronic health record (EHR). Intermountain's EHR system collected the ARDS patients' outcomes, and a pulmonology team met weekly to discuss them and tweak the protocol accordingly. A few months and many changes to the protocol later, the team took stock: Intermountain's patients' survival rate had risen to four times the national average.

Since then, Intermountain has developed these in-house care protocols for 49 more conditions. Their results have made them into what Newt Gingrich once called "arguably the most productive single health system in the United States," and led the Washington Post to state in 2006 that "if all doctors practiced to the standard of Intermountain...Medicare would cost 40 percent less."

Bringing It Home

Are Intermountain's results relevant to O&P? Can small practices in a field with a relatively minute evidence base begin to replicate these results or at least benefit from their example?

"Absolutely," says Sean Zeller, CPO, a clinician who has spent much of his career cultivating evidence-based practice. Zeller works at National Orthotics & Prosthetics Company (NOPCO), Boston, Massachusetts, where his teams have developed successful evidence-based protocols for scoliosis, cranial remolding, and dynamic movement orthoses, and are piloting a program for lower-limb conditions.

"Our thought was that, first, if we could monitor ourselves, we would improve simply in the process of monitoring, and secondly, we wanted to see how we compared to national outcomes studies," Zeller says. "We're not actually in the process of doing extensive studies—we just want to track what is happening to our patients within our own facilities and see how that corresponds to national outcomes. Then, if there are areas where we're deficient, we...focus our efforts on those areas, and if there are areas where we're meeting expectations, that's great.... If there are areas where we're exceeding expectations, then we want to investigate why and see how we can help others to do the same thing."

Arguments for and Against

Of course, not all healthcare practitioners think using evidence-based protocols is a good idea, and protocols aren't always implemented effectively. Jerome Groopman, MD, and Pamela Hartzband, MD, authors of "Why 'Quality' Care Is Dangerous" (Wall Street Journal, April 18, 2009), point out that unless the process is managed and implemented correctly, developing care protocols can be time consuming and can lead to rigid, innovation-resistant care. Intermountain Chief Medical Officer Brent Wallace, MD, told The O&P EDGE that flexibility and clinical judgment must remain absolutely central to any care protocol.

"We encourage our physicians to deviate from the protocol whenever they feel there is a clinical need because there's no way you can establish a protocol that's going to be right for every patient every time in every circumstance," he says. "You can get a protocol so that it'll be right for 80 percent of the patients 80 percent of the time, and if you do that you're doing pretty well." In cases where Intermountain's clinicians deviate from a protocol, they have to note the deviation and their reasons for it. "That does two things," Wallace explains. "One, it makes sure they're thinking, and two, we watch the records over time, and if we keep seeing the same type of deviation, that helps us to modify the protocol." Doing this also helps clinicians stay curious and understand where their treatment choices aren't working.
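The feedback loop Wallace describes lends itself to very simple tooling. As a rough illustrative sketch—the record format, deviation reasons, and threshold below are hypothetical, not Intermountain's actual system—a facility could tally the deviation reasons clinicians record and flag any that recur often enough to suggest the protocol itself should change:

```python
from collections import Counter

# Hypothetical deviation notes pulled from patient records: each entry is
# the clinician's stated reason for departing from the protocol.
deviations = [
    "patient skin intolerance to interface material",
    "patient skin intolerance to interface material",
    "limited shoulder range of motion",
    "patient skin intolerance to interface material",
    "family declined nighttime wear schedule",
]

def recurring_deviations(notes, threshold=3):
    """Return deviation reasons recorded at least `threshold` times."""
    counts = Counter(notes)
    return {reason: n for reason, n in counts.items() if n >= threshold}

for reason, n in recurring_deviations(deviations).items():
    print(f"Seen {n} times - consider revising protocol: {reason}")
```

The point is not the code but the habit: deviations are data, and counting them is the first step toward letting them reshape the protocol.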

Whether this reasoning appeals to you or not, another compelling argument for beginning to understand, use, and develop evidence-based practice is becoming more and more evident: you eventually might not have a choice. Groopman and Hartzband note in their article that the White House and Congress are now working to mandate that all Medicare reimbursement be tied to "quality metrics" that are specified in consensus guidelines written by experts. "Since 2003, the federal government has piloted Medicare projects at more than 260 hospitals to reward physicians and institutions that meet quality metrics," Groopman and Hartzband write. "The program is called 'pay-for-performance.' Many private insurers are following suit with similar incentive programs."

Learnable Skills

Fortunately, developing high-quality care protocols is something almost any facility can do, say the experts who were interviewed for this article.

First of all, when choosing a condition for which to develop a care protocol, Zeller recommends picking one that you treat often but don't yet treat very consistently, so that you can reap maximum benefit from the process. Wallace emphasizes that as few as two clinicians can develop and implement a protocol, but that it's essential to involve frontline practitioners in any protocol-development process as early as possible, so that they experience themselves as partners, rather than students or adversaries, in the process.

Wallace and Mark Muller, CPO, FAAOP, clinical instructor at California State University, Dominguez Hills (CSU DH), both recommend starting protocol development with a literature review, and Muller recommends creating a critically appraised topic (CAT). To do this, Muller says, "you develop your clinical question, then one to four people research four or five different current articles on a topic and then describe the best clinical practice methods from that." The American Academy of Orthotists and Prosthetists (the Academy) Online Learning Center offers several resources to start from: its State of the Science Conference proceedings, Best of Resident Directed Studies articles, and Literature Update.

Next, Wallace says, lay out the steps of the treatment process in a flow chart, including points at which you will collect outcomes. Phil Stevens, MEd, CPO, FAAOP, authored two articles on outcome measures in O&P that can help you choose the right measure. In "Clinimetric Properties of Timed Walking Events Among Patient Populations Commonly Encountered in Orthotic and Prosthetic Rehabilitation" (Journal of Prosthetics and Orthotics, January 2010), he lists nine different tests of timed ambulation performance and describes their applicability to eight different patient populations. In "Clinically Relevant Outcome Measures in Orthotics and Prosthetics" (The Academy TODAY, February 2009), he then explicates the strengths and weaknesses of ten different outcome measures suitable for O&P. He also emphasizes that it's essential to use an outcomes measure that reflects relevant and accurate data points for the condition you're treating, and that it's easy to choose the wrong one.

David Boone, CP, MPH, PhD, chief technology officer of Orthocare Innovations, Oklahoma City, Oklahoma, and editor in chief of the Journal of Prosthetics and Orthotics (JPO), has been involved in developing a more advanced tool for collecting, analyzing, and reporting clinical outcomes: Orthocare's upcoming Galileo system. The system provides what Boone calls "a virtually workless process for the clinician to track physical functional outcomes." The patient spends a week wearing a StepWatch, a highly accurate activity monitor that records step counts over adjustable time intervals. The clinician then uploads the activity data to Orthocare, which analyzes it using proprietary algorithms and transmits a report back to the clinician, showing the patient's mobility patterns over the course of the week along with a K-level calculation and a recommended device-service interval, both derived from cadence variability and physical performance.

Whatever outcome measure you use, Wallace notes, "it's essential to take baseline measures that can be tracked, and then it's really helpful to develop a report that is visually easy to interpret." He adds, "The other thing that's really critical, especially early on, is for the organization to develop a feedback loop where the individual physicians can tell you when they think something is wrong with the data.... One of the first things that happen if you're trying to get feedback to providers is that data doesn't show what the individual provider thinks it ought to show, and their first reaction is that something's wrong with the data."
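The baseline tracking Wallace recommends can start as a very small data exercise. The sketch below is illustrative only: it assumes a single timed-walk measure and invented patient data, treats each patient's first recorded value as the baseline, and reports the change at the latest visit:

```python
# Hypothetical outcome log: (patient_id, visit_date, timed_walk_seconds);
# lower times mean faster walking.
records = [
    ("P001", "2010-01-05", 14.2),
    ("P001", "2010-03-02", 11.8),
    ("P002", "2010-01-12", 22.5),
    ("P002", "2010-04-01", 21.9),
]

def change_from_baseline(rows):
    """Return {patient: (baseline, latest, change)} for a timed measure.

    The first value recorded for a patient is taken as the baseline.
    """
    by_patient = {}
    # ISO dates sort correctly as strings, so a plain tuple sort
    # orders each patient's visits chronologically.
    for pid, _date, value in sorted(rows):
        by_patient.setdefault(pid, []).append(value)
    return {
        pid: (vals[0], vals[-1], round(vals[-1] - vals[0], 1))
        for pid, vals in by_patient.items()
    }

for pid, (base, latest, delta) in change_from_baseline(records).items():
    print(f"{pid}: baseline {base}s, latest {latest}s, change {delta}s")
```

A facility could apply the same pattern to any of the timed-ambulation measures Stevens catalogs; the essential ingredients are only a baseline, repeated measures, and a report simple enough to read at a glance.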

Getting the Word Out

Once a protocol program has begun yielding data, the data has many uses. The primary one is to improve the protocol itself. Intermountain protocol teams meet weekly, monthly, quarterly, and annually to analyze how the feedback from the data should affect their protocols. NOPCO protocols undergo similar reviews, but the company's clinicians also share their data with referral sources, and sometimes even with patients. Stevens adds that though small facilities likely won't end up publishing any outcomes information they discover, they can still benefit the profession by communicating important discoveries to researchers who may take them further.

"I don't think it's time for everybody to start implementing outcome measures in every patient interaction—I think that's over the top, and we're not reimbursed for that time," he says. "But I think it's time for people to be experimenting with these outcome measures, so when the appropriate case presents itself, we are in a position to start implementing things."

Muller adds that no matter how clinicians decide to use outcomes data, he'd like to see the profession take homegrown research more seriously. "At the last Academy meeting," he recalls, "we had a clinical technique session where a practitioner presented with a physical therapist and a physical therapy assistant, and they went through ten different outcome measures, all of which can be easily completed in the office. If I remember correctly, at that annual meeting there were 1,600 people, and only six or seven of them showed up for that presentation…. Evidence-based practice will become relevant to clinical care in the next ten years, and we need to understand how it works and how to develop it so that when it's time, we're not starting at square one."

Muller concludes, "Things will really start moving when people realize that researching and using evidence isn't a big hairy monster—it's something that we can all do every day."

Morgan Stanfield can be reached at
