Saturday, March 11, 2017

5. Information Mastery: Practical Evidence-Based Family Medicine - Cheryl A. Flynn, Allen F. Shaughnessy, and David C. Slawson
Remember Marcus Welby? He symbolized the ideal family doctor—knowledgeable even about rare conditions, caring and compassionate, making multiple house calls with his little black bag, and devoting his complete attention and the best resources for the care of a single patient.
Fast forward to the new millennium: health maintenance organizations (HMOs), schedules of 20 to 40 patients per day, and a huge information explosion, yet we still carry the responsibility of knowing the latest updates in medicine. As family doctors, we strive to maintain the characteristics embodied by that fictitious symbol—good history and physical examination skills, an understanding of the patient in the context of the family and community, and the ability to meld the two in diagnostic and therapeutic decision making. Yet with the exponential growth of information and rapidly expanding medical technologies, it seems easy to blink an eye and miss some important new development. Lifelong learning skills and strategies to manage the jungle of medical information are the new survival tools for today's family doctors.
Enter evidence-based medicine (EBM), which is defined as "the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of an individual patient."1 This practice encourages us to apply the highest quality information available at the time in the care of our patients. Critics argue that we have been using evidence all along; EBM is merely a new name for an old practice. But EBM is not simply the use of research in practice. Rather it is a systematic process to answer clinical questions with the best evidence. It requires lifelong learning skills not generally taught in medical school. A 1984 study found that physicians' knowledge of treating hypertension was inversely related to the year they graduated from medical school.2 A later study demonstrated that those who attended a school where EBM was taught had no such knowledge decline.3
If EBM has been practiced all along, then why would an ophthalmologist from a well-respected institution advise journal readers to use an eye patch to treat corneal abrasions despite knowing that there were seven randomized controlled trials showing no benefit and possible harm ("We've always done it this way")?4,5 If our profession incorporates evidence into practice routinely, why were only two of 28 landmark trials implemented in practice in the 3 years following publication?6 If you are a clinician who is already using the best evidence in practice, then we challenge you to train your colleagues and help train our future physicians, because clearly as a profession we do not routinely practice using the best evidence.
The newer definition of EBM is one that incorporates the best evidence, clinical experience, and patient perspective into medical management plans—a patient-centered evidence-based practice. This chapter outlines a new and more useful model of EBM, especially fitting for family physicians; offers practical strategies for using evidence in answering clinical questions; outlines a model for keeping up to date with the latest medical developments; and addresses some key concepts in the application of evidence in clinical practice.
Information Mastery
The traditional EBM model involves five steps to solve a clinical problem: developing answerable clinical questions, searching for and selecting the best evidence, evaluating the quality of that information, interpreting and applying it back at the patient level, and assessing one's practice. Although seemingly complete, this model has some limitations, especially for the busy family physician.
First is the lack of feasibility. It is estimated that the average physician generates about 15 clinical questions per day. Although some questions are simply "What is this drug?" or "What's the proper dose?" more than half are focused on identifying the best treatment or diagnosis strategies. Since it takes an average of 20 minutes to perform a Medline search, one would need several hours of uninterrupted time per week just to find the evidence for answering these questions. It is understandable, then, that the majority of the questions generated in practice remain unanswered. The unfortunate part is that half of the answers would have the potential to influence practice.7
A second, essential element of an evidence-based practice is the ability to keep up to date with the latest developments. To seek answers only to those questions we generate may leave us in the dark about new or previously unconsidered therapies. Worse still, it may result in medical gossip8—finding an answer to your question without the context of all the research of that area may result in the inappropriate application of the evidence.
Finally, the traditional EBM model presumes that the only source of medical information is the literature. Colleagues are the first source clinicians turn to for answers during practice.9 The medical information system is expansive, and includes the World Wide Web beckoning from your personal computer, pharmaceutical representatives knocking at your door, and continuing medical education (CME) programs making broad-based medical recommendations. These sources are in addition to the estimated 6000 articles published each day in medical journals.10 Family physicians need tools to help sort through this overwhelming quantity of medical information.
Information mastery (IM) was designed to be more user-friendly for busy clinicians. Not all sources of medical information are equally useful; usefulness depends on three factors:
Usefulness = (Relevance × Validity) / Work
Here, work refers to any resources devoted to finding and using information. This conceptual model tells us that sources requiring little work are more useful. However, if an information source is either irrelevant or invalid, then regardless of the work, its usefulness will still be zero; all three factors must be balanced. The latter sections of this chapter offer practical tips and examples of ways to minimize work when answering clinical questions or attempting to stay current with medical information developments.
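The usefulness equation above can be sketched as a simple calculation. This is purely an illustration: the 0-to-1 scoring of relevance and validity and the numeric work cost are assumptions for the sketch, not part of the IM model itself.

```python
def usefulness(relevance: float, validity: float, work: float) -> float:
    """Usefulness of an information source: (relevance x validity) / work.

    Here relevance and validity are scored 0-1 and work is a positive
    effort cost (scales are illustrative assumptions). If either relevance
    or validity is zero, usefulness is zero no matter how little work the
    source requires.
    """
    if work <= 0:
        raise ValueError("work must be positive")
    return (relevance * validity) / work

# A low-work but irrelevant source is still useless:
print(usefulness(relevance=0.0, validity=0.9, work=0.1))  # 0.0
# A relevant, valid source that takes more work retains some usefulness:
print(round(usefulness(relevance=0.8, validity=0.9, work=2.0), 2))  # 0.36
```

The second call shows the balancing act the text describes: doubling the work halves the usefulness, but only a zero in relevance or validity drives it to zero outright.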
Determining Relevance: DOEs, POEs, and POEMs
One strategy to minimize work is to first assess relevance. Only if the source of information passes the relevance criteria do you need to follow through with a validity assessment. In medicine, we naturally create a hierarchy of relevance. It is uncommon that we'd apply data based solely on test tubes or animal models directly to our patients. Within clinical studies, there is a further hierarchy of data: that between disease-oriented and patient-oriented evidence. Disease-oriented evidence (DOE) refers to outcomes of pathophysiology, etiology, and pharmacology. Often these include test results and may also be called surrogate markers. We count them as important because we assume these intermediate outcomes are directly linked to the final outcomes. Consider guidelines that tell us to check for proteinuria in the diabetic patient. Why is the amount of protein in the urine important? Because it is a marker for renal disease, we assume that less protein means that patients won't need dialysis, or at least that the need is delayed. Instead of assuming that an intervention that alters the quantity of proteinuria delays the need for dialysis or helps our diabetic patients live longer, why not study the final outcomes of morbidity and mortality? Insisting on evidence that is linked to final outcomes eliminates the assumption step and lets us know that what we are doing for our patients is more likely to help than harm. These final outcomes constitute patient-oriented evidence (POE): outcomes of mortality, quality of life, and disease prevention.
Why is this distinction so important? DOEs represent what ought to be, based on our understanding of pathophysiology. What "ought to be," however, may not always turn out to be true. The medical literature is rife with examples of medical decisions based on intermediate outcomes that did not withstand the longer-term studies evaluating POEs: external fetal monitoring for low-risk pregnancies, calcium channel blockers for hypertension, antiarrhythmics for premature ventricular contractions after a myocardial infarction. These and other examples of POEs and DOEs are outlined in Table 5.1.
Two additional criteria must be considered when determining the relevance of medical information. First is the frequency with which the problem studied is encountered in your practice. Obviously, common problems deserve more attention. Second is deciding whether the information matters to you as a clinician. Would this evidence, if true, oblige you to change your current practice? If the perfect study were conducted demonstrating that penicillin treatment of strep pharyngitis prevented rheumatic heart disease, it should have little impact on our practices, since this is already what we do. However, a study demonstrating that estrogen replacement worsens urinary incontinence in postmenopausal women may motivate us not to recommend this treatment to incontinent women. In this latter case, where our practice should be altered, the POE becomes a POEM: patient-oriented evidence that matters. The next step is validating the information to determine whether it should be applied.
Assessing the Validity of New Information
New research is believable only when it has been shown to be internally and externally valid. Internal validity is how well the evidence reflects the truth. To apply the results from a well-done study, the patient population needs to be similar enough to your patient or clinical population. This generalizability of the information to your own practice is external validity.
Determining validity is the hardest part of EBM for most people. Readers are often overwhelmed by statistical jargon and want simply to trust that the editors have assessed quality for them. Key validity considerations for different study types are outlined in Table 5.2. Readers can get a more detailed explanation from the "User's Guide" series (go to http://www.cche.net/principles/content_all.asp).11 The IM worksheets offer a simplified version of validity assessment and can be obtained from the authors on request. One tip to lessen the work of evaluating study quality is to do this step as a group, for example in a resident journal club, or by rotating responsibility among your clinical partners. Another option is to seek "prevalidated" sources, those for which a known EBM/IM expert has done the quality evaluation for you.
The IM model tells us that focusing our attention on common, valid POEMs will maximize usefulness and help us offer the best care to our patients. When encountering any information source, first assess relevance (is it a common POEM that, if true, changes practice?), and, only if relevant, proceed to do the work of validating the evidence.
Practicing Information Mastery
The vast amount of medical information available to us can be a jungle of opportunities and traps. We choose to enter this jungle for one of four reasons: to refresh our memories of something forgotten (retracing), out of interest (sporting), to answer clinical questions (hunting), and to keep up to date (foraging). For medical problems with which we have less clinical experience, our questions tend to be simplified: What are the causes of excessive vomiting in a 2-month-old? How does Crohn's disease usually present? These are background questions12 and fall more into the first category of learning (or relearning). Sporting refers to seeking information that is uniquely interesting to us, our own research interests, or exploring the details about Aunt Agnes's zebra illness. Sporting, therefore, should be delegated to personal or academic time. Hunting and foraging have a direct impact on how we practice medicine and care for our patients every day and require the use of the best current information. This section offers practical suggestions for beginning your evidence-based practice: how to hunt, how to forage, and how to approach nonliterature sources of medical information. Remember, the usefulness equation is our model for all three: minimizing work, maximizing relevance, and maximizing validity.
Hunting
If a patient asks a question or one arises during patient care, we must find the answer. Right? Not necessarily. Doing so is the equivalent of reading every article encountered, and is likely not feasible. Thus the first consideration in hunting is deciding whether we actually need to hunt! This parallels the common criteria for relevance outlined above. A general rule to follow here is asking, "Will this answer apply to another patient before it becomes out of date?" If not, then it may not be worthwhile to hunt for the answer yourself. Suppose a patient with hairy cell leukemia asks your advice about the best treatment for her cancer. It's not likely that you as the family doctor will be prescribing that, nor is it likely that today's answer will be tomorrow's (or next year's) answer when you next encounter someone with this cancer. This question could be deferred to the patient's oncologist. Another patient whose psoriasis calms in the summer sun wonders if buying a light box for home use in the winter will help. Relative to other skin conditions, psoriasis is less common in primary care, but you will likely soon encounter other patients with this problem and proceeding to find an evidence-based answer is appropriate.
The next step is deciding when to hunt. Those newly in practice or those new to EBM will likely find it challenging to do evidence searches during busy office hours. Questions need not always be answered while patients are there; set a follow-up appointment and commit yourself to finding an answer before then. Keep a list of questions that arise during practice, prioritize them for relevance, and hunt for evidence-based answers whenever you can—during lunch or before returning patient phone calls, or when on call. Faculty might find some time during resident precepting. The bottom line here is to just do it; whatever steps you take toward answering clinical questions with evidence are steps toward an evidence-based practice. As your skills and information technology advance, finding answers "on the fly" will be easier. Programs that search multiple Internet sites simultaneously (TRIP, http://www.tripdatabase.com; SumSearch, http://www.sumsearch.uthscsa.edu), and newer evidence-based information tools (Medical Inforetriever, http://www.medicalinforetriever.com) are available on personal and handheld computers to bring evidence answers to the point of care.
Finally, knowing how and where to hunt is critical. Because a good answer begins with a good question, learning to ask well-constructed questions is the first step. Foreground questions are those specific questions about the best treatment or testing strategy; they arise more frequently as our medical experience increases and thus are best answered by using current evidence.13 The four components of a good question form the PICO acronym:
Patient and problem information (age, race, severity of illness, setting, comorbid illnesses)
Intervention proposed (which may represent medications, or advice, or screening tests)
Comparison group (no intervention, or standard of care)
Outcomes of interest (which should be POEMs).
This PICO format helps convert your clinical question into a search strategy that maximizes your chance of finding relevant information.
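The conversion from PICO components to a search strategy can be sketched as follows. This is a hypothetical illustration: the field names, the example question (drawn from the riboflavin-for-migraine study mentioned later in the chapter), and the simple AND-joined query format are assumptions, not a real database interface.

```python
# A clinical question structured in PICO form (illustrative example).
pico = {
    "patient": "adults with migraine",          # P: patient and problem
    "intervention": "high-dose riboflavin",     # I: proposed intervention
    "comparison": "placebo",                    # C: comparison group
    "outcome": "headache frequency",            # O: a patient-oriented outcome
}

def to_search_string(pico: dict) -> str:
    """Join the PICO components with AND to form a boolean search query."""
    return " AND ".join(f"({term})" for term in pico.values())

print(to_search_string(pico))
# (adults with migraine) AND (high-dose riboflavin) AND (placebo) AND (headache frequency)
```

In practice each component would be mapped to controlled vocabulary (e.g., medical subject headings) rather than free text, as the searching section below describes.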
Developing reasonable searching techniques will also help minimize the work of hunting, although many busy clinicians will not have the time to do this on their own. A medical librarian (http://www.crmef.org/curriculum) can help train you to a sufficient level of skill for independent searching, addressing such things as Boolean search terms, truncation of keywords, and linking terms to medical subject headings. Established search strategies have been developed that help maximize the return of valid studies. For example, searching in PubMed offers the advantage of the clinical queries feature. Selecting the search purpose (therapy, diagnosis, prognosis, or etiology) links your clinical search terms with study design terms to improve the retrieval of more valid study types.
The last part of efficient hunting is knowing where to start. The medical information system can be envisioned as a pyramid (Fig. 5.1), with the most useful information, the most relevant and valid or predigested sources, at the top. Many of us were trained to look for evidence by searching Medline; however, this is the largest and least sorted database and therefore takes the most work to search. By starting at the top of the pyramid of sources and drilling down only as far as necessary to find a relevant and valid answer to the question, much time can be saved. The pyramid also shows us which databases are the most relevant and valid. Table 5.3 contains further information on each of these databases, as well as tools to search through the various databases.
It will likely take time and practice for the average clinician to develop efficient hunting skills. However, even asking questions and considering the quality of the evidence one finds are simple first steps in the continuum toward a more evidence-based practice.
Fig. 5.1. The medical information system depicted as a pyramid. The usefulness of the information increases as one climbs the pyramid.
Foraging
Doctors cite journal reading as one key method for keeping up to date; many of us have a bedside stack. In reality, though, we do a poor job of reading journals (or else there would be no stack!). Nor do we succeed at incorporating that information into practice. Instead of reading the stack, consider scanning the stack for relevant evidence. Read only the abstract conclusion and ask yourself the relevance question: Is this a common POEM that will change my practice? Read on to validate only those articles that pass the relevance criteria. In this way IM limits what we need to read, but also increases the responsibility of carefully assessing the information deemed relevant.
Even better strategies can be developed to forage the medical literature with less work and more likelihood of retrieving relevant and valid information. Four basic principles apply to a practical foraging strategy: (1) regularly casting a broad net, (2) being aware of the best sources of information, (3) using relevance criteria to screen information for usefulness, and (4) developing a retrieval system.
Since family physicians see a broad range of patients and problems, we especially need to be far-reaching in our attention to the medical literature. Scanning the most respected journals and all of our own specialty journals still leaves us at risk for missing a potentially relevant article. For example, a well-done trial was published in Neurology in 1998 demonstrating effective migraine prophylaxis from high-dose riboflavin.13 Most family physicians do not read Neurology regularly and would have missed this potentially useful information. Since less than 4% of original research represents POEMs,14 a lot of sieving is required to identify the few nuggets of gold. Even the journals with the highest POEM:DOE ratios—JAMA, Lancet, British Medical Journal, Annals of Internal Medicine, Journal of Family Practice (JFP)—have at most one or two articles per issue pertinent for family doctors.14
By perusing "POEM bulletin boards," we can let others do this filtering work for us. JFP POEMs is a site that can reduce the work of foraging. Editors scan more than 90 journals monthly, using the IM criteria to select articles of relevance specifically to family doctors. Eight of 25 to 30 relevant articles are selected each month and the summaries of the critical appraisal and key population and outcome information are published in the JFP. These reviews can be found and searched online at www.medicalinforetriever.com. The remaining studies are critically appraised and summarized in both the Evidence-Based Practice newsletter and the daily e-mail electronic newsletter (InfoPointer).
Other abstracting services do similar work. The American College of Physicians Journal Club (ACPJC) publishes validated summaries of original research. Although they cite relevance to medical practice as selection criteria, ACPJC does not target primary care specifically, nor does it use the IM relevance criteria.15 Other services (Tips from Other Journals, Journal Watch) highlight potentially relevant research without a formal validity assessment. These may best be used by scanning the summaries and applying the relevance criteria to identify truly relevant information for your practice. Unfortunately, needing to personally perform a validity assessment greatly increases the work involved in applying the information into practice. Thus, secondary sources offering both relevant and valid information are preferred.
Newer electronic services are emerging that further lessen the work. Medical InfoPointer (www.medicalinforetriever.com) is an abstracting service that carefully evaluates research for relevance and validity and delivers a short synopsis with a "bottom line" recommendation of one article each day via e-mail. Bandolier on the Web (bandolier@pru.ox.ac.uk) is a British evidence-based medicine resource that will e-mail its monthly table of contents to interested readers.
Regardless of the strategies employed, the final and essential step of foraging is to create a retrieval system. The traditional version of this was cutting the article out of the journal and filing it in your cabinet in some reasonable ordering system. Today's clinician needs to "file" the key information (population and intervention details, outcomes assessment and magnitude of results, and the original citation) in an electronic system on a handheld or personal computer to make it available quickly during patient care. Web-based software (e.g., AvantGo) makes it possible to download Web pages to store in your electronic folders. Many of the foraging sites above also have searching capabilities—so if you remember that the riboflavin article for migraine prophylaxis was in JFP POEMs, you can quickly search that database to retrieve the answer in a matter of seconds. The paper version of foraging is still a great first step toward better information mastery. But ultimately, technology phobia or not, you'll likely need to develop some simple computer skills in searching and filing or you'll be left behind.
Other Medical Information Sources
As a general rule, when evaluating the usefulness of any source of information, consider the work, the relevance, and the validity of the information. A sampling of information sources is highlighted below, using the usefulness equation as the guide.
Medical Literature that Is Not Original Research
Summary reviews are those that paint a broad landscape of a clinical topic; they likely include the classic presentation, epidemiology, and diagnostic and therapeutic suggestions. We may be enticed by these reviews; they seem to be a low-work option: one-stop shopping. Yet because the authors usually do not specify their methods for finding or evaluating the evidence, we are often not sure of the relevance and quality of information upon which recommendations are based. In fact, the quality of these reviews varies inversely with the level of expertise of the author,16 suggesting that authors may begin with their conclusions and report only the data that support their recommendations, while ignoring contradictory reports! When reading summary reviews, if you stumble across advice that is contrary to your current practice, check whether the recommendation is based on a POEM, and if so (or if you can't tell), consider finding the original study yourself to evaluate the true usefulness. This added work makes this type of article less useful in the long run. Summary reviews may best be reserved for retracing, or for when we have background rather than foreground questions.
Similar issues exist for clinical practice guidelines (CPGs). The intent of CPGs is to provide recommendations supported by available information that help clinicians make medical decisions. However, the quality of these seemingly low-work sources can vary greatly, from purely consensus-based opinion to a summary and synthesis of only quality evidence. Characteristics of quality CPGs include a brief summary statement for each recommendation, a long reference section pointing to original research, a methods section explaining how evidence was obtained and evaluated, and a detailed discussion of the evidence. Identifying an evidence table, a balance sheet, or some indication of the strength of the supporting evidence increases the likelihood of the CPG being evidence-linked. The relevance varies between and within CPGs; scanning for recommendations that are POEM-based can help identify those that are more relevant. Specific criteria for evaluating the validity of CPGs have been developed.17
Continuing Medical Education (CME)
Doctors cite attendance at CME programs as the second most common strategy for keeping up to date. Yet passive, lecture-based CME rarely improves knowledge and almost never changes behavior. Since most CME talks are similar in structure to summary-type reviews, we may be falsely lulled into thinking we're learning a lot when in fact we're not. Interactive educational processes and those that incorporate an audit/feedback system are more likely to influence us. But to truly make CME useful requires your attention and participation as a member of the audience. Keep your ears open for any recommendation by the speaker that would change your current practice. Ask follow-up questions about the evidence on which the speaker based her suggestions. Was it POEM data? What was the quality of the data? Are the references available? Implement only those recommendations that are based on valid POEMs.
Experts
In the context of medical information, an expert is anyone of whom we ask a clinical question. Most often we turn to "content experts," those with more expertise in the topic of inquiry. Yet the answers they give can often be quite subjective, based more on experience than valid data.18,19 Clinical scientists are those with expertise in evaluating information for validity but may not necessarily be content experts. The best experts are YODAs (your own data analyzers),20 who have and share the evidence basis for their recommendations, balancing it with their clinical experience. Our suggestion is to seek out the YODAs in your own community. When referring patients or asking questions of your consultants, include specific requests for the source of their recommendations. It may be solely experience based, but knowing this will help you keep your eyes and ears open for valid POEMs in the future.
Pharmaceutical Representatives
Pharmaceutical representatives (PRs) are seemingly the source of medical information that requires the least amount of effort—they come to you often bringing lunch! Frequently the information they supply is not relevant (DOE based) or the methodology of the studies isn't sound. This serves to remind us that work must be balanced with relevance and validity to define usefulness. Ask for their sources to assess the usefulness of their information.
Even more helpful may be to ask PRs explicitly for the information needed to decide if their suggested therapy is better than what you currently prescribe. The STEPS mnemonic is a helpful way to remember these key questions. Safety refers to the long-term absence of harmful drug effects, whereas tolerability is the significance of more short-term side effects. Since we cannot judge whether a patient's headache is more significant than his stomach upset, the best measure of tolerability is pooled dropout rates from placebo-controlled trials. This information tells us who had side effects of any kind severe enough to warrant discontinuing the medication. Effectiveness is not only whether the medication works—for POEM outcomes—but how well it works. We need to consider the clinical significance of the data presented (see application section). Price refers to the cost not only of the medication but also of any associated monitoring required. Simplicity is the ease of the medication regimen from the patient's perspective, and may influence compliance.
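The pooled dropout rate described under tolerability is simple arithmetic: total dropouts divided by total enrollment across trials, compared between drug and placebo arms. The sketch below uses invented trial numbers purely for illustration; the function name and data layout are assumptions.

```python
# Hypothetical placebo-controlled trial data (all numbers invented).
trials = [
    {"drug_dropouts": 12, "drug_n": 200, "placebo_dropouts": 6, "placebo_n": 198},
    {"drug_dropouts": 9,  "drug_n": 150, "placebo_dropouts": 5, "placebo_n": 151},
]

def pooled_rate(trials, dropouts_key, n_key):
    """Pool dropout rates across trials: total dropouts / total enrolled."""
    total_dropouts = sum(t[dropouts_key] for t in trials)
    total_n = sum(t[n_key] for t in trials)
    return total_dropouts / total_n

drug_rate = pooled_rate(trials, "drug_dropouts", "drug_n")          # 21/350 = 0.060
placebo_rate = pooled_rate(trials, "placebo_dropouts", "placebo_n")  # 11/349 ~ 0.032
print(f"excess dropouts attributable to drug: {drug_rate - placebo_rate:.3f}")
```

The excess over placebo is the figure to ask a pharmaceutical representative for: it captures side effects of any kind severe enough that patients stopped the medication.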
Application of Evidence
Clinical Significance
An important consideration in the application of evidence to individual patients is the clinical significance of the effect. It is not sufficient to ask whether one treatment is better than another; we need also to ask how much better. For example, one of the currently available antivirals is proven to shorten the duration of symptoms in adults with influenza (i.e., the statistical difference). But the amount of benefit is approximately a half day less of symptoms (i.e., the clinical difference). In the course of a 7-day illness, this may not seem worth the expense or risk of intestinal side effects to most patients. Yet to a busy stockbroker, taking the drug to possibly be able to return to work 4 hours sooner may be worthwhile. Clinical experience and patient perspective are the basis for deciding the clinical significance of such a finding.
The number needed to treat (NNT) is another measure of clinical significance. Calculated as the inverse of the rate difference, NNT tells us how many patients need to be treated for one to receive benefit. Consider two patients with elevated cholesterol: first is a 63-year-old male smoker with hypertension, total cholesterol of 250, and a high-density lipoprotein (HDL) of 35; the other is a 37-year-old woman with a total cholesterol of 328 and an HDL of 40. The statins have been shown to lessen the risk of a cardiac event by approximately 30%.21 Intuitively we'd encourage the man to take lipid-lowering medication more so than the woman, because his cardiac risk is greater. NNT allows us to quantify the benefit for each. Using incidence data from the Framingham study22 to calculate baseline risk, the man's 10-year risk of a cardiac event decreases from 30.4% to 20% and the woman's from 3.2% to 2.2% if treated with a statin. This yields NNTs of 9.6 and 100, respectively. Thus, the same medication yields a very different level of clinical benefit for each patient and should influence who receives treatment.
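The NNT arithmetic for the two cholesterol patients above can be checked directly: NNT is the inverse of the absolute risk reduction (baseline risk minus treated risk). The baseline and treated risks come from the chapter; the function itself is a generic sketch.

```python
def nnt(baseline_risk: float, treated_risk: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = baseline_risk - treated_risk
    if arr <= 0:
        raise ValueError("treatment must reduce risk for NNT to apply")
    return 1 / arr

# 63-year-old male smoker: 10-year cardiac risk falls from 30.4% to 20%.
print(round(nnt(0.304, 0.20), 1))  # 9.6
# 37-year-old woman: 10-year cardiac risk falls from 3.2% to 2.2%.
print(round(nnt(0.032, 0.022)))  # 100
```

The same roughly 30% relative risk reduction thus translates into treating about 10 men like the first patient, versus 100 women like the second, to prevent one cardiac event.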
Clinical Jazz
If EBM were solely medical decision making based on evidence, it would become what critics call cookbook medicine, and could be done by computers. Either that, or we'd be paralyzed, unable to care for patients at all because there just aren't valid POEM data for much of what we do. Yet if we practiced only experience-based medicine, we may still be bloodletting our preeclamptic patients because some of them got better. Lest we consider this an unrealistic example, how many of us are victims of the latest bad experience bias? Objective evidence of this bias is seen in obstetricians whose cesarean section rates increase following an adverse event.23 Clinical experience is important, but as the sole evidence source it is fraught with biases that would never be acceptable if presented in a research article: small sample sizes, lack of blinding or randomization, lack of standardized outcome measurements, and nonrandom loss to follow-up.
EBM is not really in competition with clinical experience. The newer definition of EBM integrates the use of evidence, balanced with clinical judgment and the patient's preferences. In the IM model, this is clinical jazz. And like fine jazz music, it requires structure—the evidence of valid POEMs—along with improvisation—our clinical experience. Following this structure can actually be liberating. Basing our decisions on well-done outcomes-based research helps us avoid being ping-ponged between conflicting recommendations and may increase our confidence with medical decision making. The simplicity of the structure allows us ample room for improvisation. We use our judgment every time we make a decision in the absence of ideal evidence: POEMs with study flaws, or valid DOEs, or no existing evidence addressing our clinical questions. A key component of EBM in these situations is the awareness that our decisions are based on this lesser-than-ideal level of evidence and keeping our eyes open to replace that information when better quality data are available.24
Conditions with multiple valid POEMs, such as hypertension, provide opportunities to improvise as well. We rely on our clinical experience to apply most research data, since the patients we see in our offices are rarely as healthy, nor is our follow-up as rigorous, as those in randomized controlled trials.

Finally, our artistry and communication skills are needed to negotiate with patients whose preferences differ from the evidence. One patient may refuse colon cancer screening, despite high-quality relevant data in support of flexible sigmoidoscopy; a mother may demand a computed tomography (CT) scan to evaluate her child, who has an acute headache but a normal exam, despite evidence demonstrating no need for a CT scan. A restricted view of EBM would suggest we perform only those services with evidence to support them; patient-centered medicine may seem like bowing to the patient's wishes regardless of the evidence. Clinical jazz is harmonizing the evidence, our experience, and our patients' views to come to a reasonable decision. This is a true evidence-based medical practice!
