Author: Taylor Conrad

Edited by: Eden Kim, Taylor Douglas, Robby Allen

What is EBM?

Evidence-Based Medicine (EBM) is an important part of all medical disciplines and has been integral throughout the development of Emergency Medicine. According to the Agency for Healthcare Research and Quality (AHRQ) at the Department of Health and Human Services, evidence-based practice is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient.”[1] Ultimately, this is easier said than done. Not all research is created equal, and practitioners now face an amount of available information that can be overwhelming. There is also the potential for research to be misunderstood and applied inappropriately. Furthermore, what is demonstrated at a population level may not necessarily apply to the individual.

Accad and Francis discuss these points and others in the BMJ Head to Head series.[3] Accad points out that medical societies and governmental agencies often use EBM to create standards of care or best practices, which highlights the tension inherent in applying population-based standards to the individual patient. Do these standards supersede clinical judgment? Francis counters with the argument that providers tend to overestimate their ability to personalize care.[2] He concludes with the poignant story of a provider promoting cardiac resynchronization therapy based on his personal experience of success; when this therapy was tested in a clinical trial, however, mortality in treated patients increased by 80%.[3]

EBM’s 6 Dangerous Words

The ideals of EBM are important to uphold, but we must recognize that implementing them in practice may be less than perfect. This is exemplified by EBM’s six dangerous words: “there is no evidence to suggest…”[4] Consider the following statements:

  • There is no evidence to suggest that looking both ways before crossing a street compared to not looking both ways reduces pedestrian fatalities.
  • There is no evidence to suggest that using a parachute compared to not using a parachute reduces mortality in skydiving.

Taken in this context, the statements are clearly extreme and would not change our current societal norms or practices, yet it is true that the evidence does not exist. When discussing medical therapies, diagnostics, and practices, the discussion becomes more nuanced. Due to population characteristics, logistical issues, or rarity, certain studies are difficult or nearly impossible to perform.

Take, for example, a board review question that presented a hypotensive, tachycardic patient with variceal bleeding and asked which therapy is associated with decreased mortality. Among the multiple-choice answers were “A. Administration of ceftriaxone” and “B. Blood transfusion.” The correct answer was “A. Administration of ceftriaxone.” The rationale given was that although this hypotensive, bleeding patient will need a blood transfusion, the only answer listed with evidence of a mortality benefit is antibiotic administration for variceal bleeding. No one is going to perform a trial of blood transfusion versus no transfusion in a hypotensive, actively bleeding patient; it would be unsafe, irrational, and unnecessary. The absence of that evidence should not drive our decision-making. On the other hand, it is appropriate to ask more nuanced questions that challenge current dogma, such as whether a more conservative transfusion approach in traumatic hemorrhagic shock is better than targeting “normal” blood pressures, which is really a question of how we appropriately assess shock.[5]

In his article “EBM’s Six Dangerous Words,” Braithwaite proposes replacing “there is no evidence to suggest…” with one of the following four phrases, based on the United States Preventive Services Task Force (USPSTF) evidence grading system:

(1) “Scientific evidence is inconclusive, and we don’t know what is best.” (USPSTF grade I with uninformative Bayesian prior)

(2) “Scientific evidence is inconclusive, but my experience or other knowledge suggests ‘X’.” (USPSTF grade I with informative Bayesian prior suggesting “X”)

(3) “This has been proven to have no benefit.” (USPSTF grade D)

(4) “This is a close call, with risks exceeding benefits for some patients but not for others.” (USPSTF grade C)

(The USPSTF grading definitions are available on the USPSTF website. A brief sketch of what an “uninformative” versus “informative” prior means follows below.)
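Braithwaite’s first two phrases turn on whether the Bayesian prior is “uninformative” or “informative.” As a rough, hypothetical illustration (not part of Braithwaite’s article or the USPSTF system), the Python sketch below shows how the same inconclusive data can point to different conclusions depending on the prior; all numbers are invented for the example.

```python
# A minimal, hypothetical sketch (not from Braithwaite's article or the USPSTF)
# of what an "uninformative" versus "informative" Bayesian prior means when the
# trial evidence itself is inconclusive. All numbers are invented.

# Suppose a tiny, inconclusive study: 3 responders out of 6 patients.
successes, failures = 3, 3

# Beta(1, 1) is a flat, uninformative prior; Beta(8, 2) encodes prior experience
# suggesting the treatment usually works ("X" in Braithwaite's phrase 2).
priors = {
    "uninformative Beta(1,1)": (1, 1),
    "informative Beta(8,2)": (8, 2),
}

for name, (a, b) in priors.items():
    # Conjugate beta-binomial update: posterior mean = (a + successes) / (a + b + n)
    posterior_mean = (a + successes) / (a + b + successes + failures)
    print(f"{name}: posterior mean response rate ~ {posterior_mean:.2f}")

# Prints ~0.50 with the flat prior and ~0.69 with the informative prior:
# the same inconclusive data, but prior knowledge shifts the conclusion.
```

With a flat prior the data leave us at roughly a coin flip, while prior experience suggesting the treatment works pulls the estimate toward benefit, which is exactly the distinction between phrases (1) and (2).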

What makes something “good evidence”?

We’ve discussed some of the issues that arise when evidence is lacking and how to frame our understanding in its absence. When evidence does exist, how do we appropriately assess its quality and its utility for answering our questions? This is an entire discussion on its own and an integral part of EBM, but the following are sample questions to ask as we review the literature:[6,7]

  • Is the clinical problem well-defined?
  • Does the study population represent the target population (is there any spectrum bias)?
  • Does the study population focus on ED patients, or are they ICU or admitted patients?
  • Did the study recruit patients consecutively (was there selection bias)?
  • Was the diagnostic evaluation sufficiently comprehensive and applied equally to all patients (or was there verification bias)?
  • Were all diagnostic criteria explicit, valid, and reproducible?
  • Was the reference standard appropriate?
  • Was there good follow-up?
  • Was a likelihood ratio presented in the paper? (See the worked sketch after this list.)
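
That last item matters because a likelihood ratio is what converts a test result into an updated probability of disease. The following is a minimal sketch of the standard “pre-test odds × LR = post-test odds” calculation; it is illustrative only, and the test and numbers are hypothetical rather than drawn from any of the sources above.

```python
# A minimal sketch of the standard Bayesian update using a likelihood ratio (LR).
# The test and the numbers below are hypothetical, for illustration only.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_test_odds = pre_test_odds * likelihood_ratio    # apply the LR
    return post_test_odds / (1 + post_test_odds)         # odds -> probability

# Example: a positive result on a hypothetical test with LR+ = 10,
# in a patient with an estimated 10% pre-test probability of disease.
print(round(post_test_probability(0.10, 10), 2))  # ~0.53, i.e. about 53%
```

An LR close to 1 barely moves the pre-test probability, whereas a very large or very small LR shifts it substantially, which is why a paper that never reports one is harder to apply at the bedside.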

Our understanding of what EBM is and is not, and how it is applied, will remain fluid and continue to change as our practice improves. It is important to develop a foundation in critically appraising evidence, understanding the limitations of research, and applying this knowledge to provide the best outcome for the specific scenario, taking into account the individual patient’s values and circumstances.

Type 1, Type 2 Thinking, and Cognitive Biases

No discussion of decision-making in medicine would be complete without Type 1 and Type 2 thinking (also known as System 1 and System 2), made popular by Kahneman’s “Thinking, Fast and Slow”:[8-10]

Type 1: Intuitive or reflexive decision-making based primarily on pattern recognition and previous experience. It is faster but potentially more prone to error.

Type 2: An analytical, problem-solving approach that involves consciously reviewing the decision-making process, analyzing biases that may exist, continuously adjusting probabilities as new information arrives, and deliberately considering alternative diagnoses with ongoing self-reflection.

We understand now that there is no “right or wrong” way of thinking; the question is how we apply these cognitive models in everyday practice. Type 1 thinking is useful in rapid decision-making scenarios such as resuscitations, with a shift to the more deliberate Type 2 approach when time and circumstances allow. Individuals should reflect on past decisions to develop more refined heuristics, adjust for biases, and build a more “accurate” Type 1 process for situations that require fast decision-making.

Lastly, understanding common biases in decision-making can help refine our thinking and avoid errors. Common biases encountered in EM include the following:[8]

  • Anchoring bias – Settling on a diagnosis early without adjusting to new information
  • Diagnosis momentum – Accepting a previous diagnosis without adequately considering the differential diagnosis
  • Confirmation bias – Looking for evidence to support a preconceived opinion while potentially ignoring non-supporting evidence
  • Premature closure – Concluding the work-up with one diagnosis when alternative explanations exist (e.g., altered mental status attributed to a UTI in an elderly patient who is actually septic from another source, or chest pain diagnosed as “acute coronary syndrome” that is actually an aortic dissection)

In conclusion, EBM, defined by the AHRQ as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient,” is clearly more complicated than that single statement may suggest. Our implementation and understanding of EBM will continue to evolve, and we should continue to apply a deliberate cognitive approach to our use of evidence and our awareness of our decisions and biases to provide the best outcomes for our patients.

References

1. Clancy CM, Slutsky JR, Patton LT. Evidence-based health care 2004: AHRQ moves research to translation and implementation. Health Serv Res. 2004;39(5):xv-xxiii.

2. Hoffmann TC, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2017;177(3):407-419.

3. Accad M, Francis D. Does evidence based medicine adversely affect clinical judgment? BMJ. 2018;362:k2799.

4. Braithwaite RS. EBM’s Six Dangerous Words. JAMA. 2020;323(17):1676-1677.

5. Tran A, Yates J, Lau A, Lampron J, Matar M. Permissive hypotension versus conventional resuscitation strategies in adult trauma patients with hemorrhagic shock: A systematic review and meta-analysis of randomized controlled trials. J Trauma Acute Care Surg. 2018;84(5):802-808.

6. Diagnostic Test Appraisal. Best Evidence in Emergency Medicine. https://emergencymedicinecases.com/wp-content/uploads/filebase/pdf/Diagnostic-Critical-Appraisal-Checklist.pdf. Published 2012.

7. Himmel W, Hicks C, Dushenski D. Diagnostic Decision Making in Emergency Medicine [Internet]: Emergency Medicine Cases; April 2015. Podcast. Available from: https://emergencymedicinecases.com/diagnostic-decision-making-in-emergency-medicine/

8. Helman A, Himmel W, Hicks C, Dushenski D. Decision Making in EM – Cognitive Debiasing, Situational Awareness & Preferred Error [Internet]: Emergency Medicine Cases; January 2016. Podcast. Available from: https://emergencymedicinecases.com/decision-making-in-em/

9. Kahneman D. Thinking, fast and slow. New York, NY, US: Farrar, Straus and Giroux; 2011.

10. Sinclair D, Hicks C, Helman A. Cognitive Decision Making and Medical Error [Internet]: Emergency Medicine Cases; February 2011. Podcast. Available from: https://emergencymedicinecases.com/episode-11-cognitive-decision-making-medical-error/
