
BoringEM has MOVED!

Thanks to everyone who has been supporting my blogging by reading, commenting and sharing. Over the past two months I have learned a ton from blogging and plan to continue long into the future.

However, with the advice and assistance of my #FOAMed/blogging/social media mentor Mike Cadogan, I have transferred my site from wordpress.COM to wordpress.ORG. A month ago I had no idea that there was even a difference between these two sites and I certainly wouldn’t have been able to make the transition alone. I owe a huge thanks to Mark Hale and Mike for assisting with this transition and to @emchatter for hooking me up with my swanky new theme.

If you are following my OLD RSS feed or having posts from my obsolete site e-mailed to you and would like to continue following (and I hope you do!), please sign up to my new feed(s) at BoringEM.ORG.

Thanks again for your support!

Brent Thoma @boringem

Would you rather misdiagnose or misdispose?

Over the past two weeks I have been completing a rotation focusing on the administrative aspects of the emergency department. Halfway through a shift with one of my admin mentors, the quality improvement ninja and philosopher king known to most as Dr. Mark Wahba, we played a brief game of “Would you rather?”

If you have yet to be initiated, you probably need to get out more. “Would you rather?” is a party game played by the immature at heart that forces you to choose between two (generally) less than desirable options. It has spawned its own Wikipedia page, three large websites filled with often inappropriate (you’ve been warned!) questions (here, here, here), a television game show and a B-list horror movie.

In this case, the dilemma was more serious and involved two desirable options, but I found it no less difficult. The question:

Would you rather get the correct disposition? Or the correct diagnosis?

Before you say both, remember that you can’t. I don’t care if you’re that good, that’s not how the game works – choose one!

To me, the answer seemed intuitively obvious at first. Disposition, of course! Who cares if we don’t get the right diagnosis if we at least get the patient where we need them to go? Sure, it might hurt my delicate ego to be wrong, but at least the patient is where they need to be. There’s no harm in that.

Or is there? What about the harm that comes to the patient from the misdiagnosis? The extra irradiation and/or procedures? The additional time/energy/money that needs to be invested into their care? The opportunity cost of using resources to determine the correct diagnosis that could have been better spent elsewhere? And if emerg doctors turn into nothing more than effective disposition machines, are we any better than a good triage system?

In the end, we broke the rules, quit playing the game and concluded with the obvious: both disposition and diagnosis are important and which is more important depends largely on the patient’s situation.

In the very sick, disposition takes precedence – we need to recognize the sick and get them to where they need to be at all costs. We can’t let them walk out the door! In the less sick, diagnosis takes precedence – we need to arrive at the diagnosis to treat them as effectively as possible. It may not be ideal, but if we’re wrong they can come back or go see someone smarter.

But in the game, that answer is a cop-out. We see tons of mildly ill patients for every extremely ill patient we see. So back to the game: which would be worse?

Would you rather always misdiagnose or always misdispose?

If you enjoyed this post please tweet/retweet the hell out of it, e-mail it to your twitterless friends/colleagues, follow me on twitter, sign up for my RSS feed (top right corner – it seems to work better to sign up when using Explorer or Firefox; Chrome hates me), sign up to receive e-mails after each post (right column), and/or leave comments.

Thanks again for all of the support over the past few months!

Brent Thoma @boringem

PS – Shout-out to Dr. Mark Wahba for inspiring this post! Check out some of his spectacular posts on here.

Injecting into the (Carpal) Tunnel

It’s been awesome working in the ED with the benefit of the knowledge gained on off-service rotations in specialties like Plastic and Orthopedic Surgery. Great learning experiences on these rotations have led to a run of success with injections into all sorts of places and have given me a desire to inject into… well… most things that there is evidence for treating with needles.

Recently I’ve treated frozen shoulders (steroid/lidocaine), Colles’ fractures (lidocaine/bupivacaine hematoma block), traumatized hands (lidocaine/bupivacaine median and ulnar nerve blocks) and carpal tunnel syndrome (median nerve steroid/lidocaine injection) with the little pointy things. Of course, all patients were followed up by (or seen with) the appropriate specialty service.

I have found that some of my EM Attendings are more comfortable than others with my needle fascination. In general, there is a fair amount of comfort with the hematoma block and shoulder injection, less familiarity (but still acceptance) with the median and ulnar nerve blocks, and no EM experience with the carpal tunnel injection. Most of my experience with the latter procedures came under the watchful eye of a surgeon during my Plastics rotation. I find the blocks and carpal tunnel injections intensely satisfying – which leads me to the topic of the day:

Carpal Tunnel Syndrome: to inject or not to inject?

I posed this question on twitter (thanks for the responses Minh, ElishaT, Alex and TheSGEM!) and got the same response as I did in my department. The summary: No one does this, you Weirdo. If you want to provide conservative treatment, hook them up with a splint and send them to Plastics.

Fair enough. However, after examining the literature I’m unsure why there’s such a hate-on for the injection. The procedure itself is similar to one clearly within our scope (the median nerve block), there’s evidence that it works (see below), and it can effectively treat many patients while they wait to see Plastic Surgery. In fact, this is likely the treatment they’ll get on their first visit with the surgeon anyway. Certainly, it’s not a super urgent problem, but the same could be said for many of the other conditions that we treat. Why not get them started on their treatment and give them some relief before the potentially long wait to see the surgeon?

Does it really work?

A Cochrane review was done to answer this (and related questions) in 2008. It found evidence that steroid injections were effective relative to placebo for up to one month. Unfortunately, this limited conclusion was reached because there were only two trials included in the analysis that examined this particular question and neither maintained control groups longer than 1 month.

The first RCT from Dammers et al in 1999 posed the following PICO question: In a Population of patients with carpal tunnel symptoms for >3 months, is the Intervention of injecting 40mg of methylprednisolone (Depo-Medrol) proximal to the carpal tunnel more effective than the Control treatment of placebo at improving the Outcome of no or minor symptoms that require no further intervention (the “responders”) at 1, 3, 6, 9 and 12 months? Criticisms of this study include its largely female (84%) population and lack of clarity on the population’s carpal tunnel severity. There were no complications.

While the population was small (30 control, 30 intervention), the treatment was effective with 50% of patients in their treatment group still responsive at 12 months versus 7% in the placebo group (NNT = 2.3). However, at one month 23/30 (77%) intervention vs 6/30 (20%) control patients responded (NNT = 1.75). Nonresponders were moved to open treatment (injection or surgery as indicated) at each follow-up appointment. The Cochrane review concluded that this prevented comparison beyond 1 month, but I’m not sure why we can’t continue to draw conclusions from the data of the patients that continued to respond. As nonresponders could not become responders again in the subsequent data, it seems to me that the proportion of responders/non-responders in each group would still be accurate in follow-up (EBM people – thoughts on this? Leave a comment please).
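For those who like to check the arithmetic, the NNTs quoted above fall out of a one-line calculation. A quick sketch using the response proportions reported for Dammers et al:

```python
def nnt(p_intervention: float, p_control: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (p_intervention - p_control)

# 12-month data: 50% of the steroid group vs 7% of the placebo group responded
print(round(nnt(0.50, 0.07), 1))  # -> 2.3

# 1-month data: 77% (23/30) vs 20% (6/30) responders
print(round(nnt(0.77, 0.20), 2))  # -> 1.75
```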

A related but more complicated RCT by Armstrong et al from 2004 included 81 patients and compared 6mg of betamethasone versus placebo, but was more difficult to interpret because it allowed injections as often as every 2 months and only provided responder/non-responder data at 2 weeks post-injection. It found a NNT of 2.8 for patients being “highly” or “somewhat” satisfied with their outcome at 2 weeks; nonresponders were offered surgery (intervention group) or a steroid injection (placebo group).

A larger, 52 week trial to answer this question has been proposed.

So does it work? The data available (limited as it is) suggests that it does work in a significant proportion of patients for at least a limited period of time.

How do you do it?

The procedure is similar to a median nerve block and could be done using ultrasound guidance. It can be done with an injection both at the wrist and more proximally with equal efficacy. I will not try to reinvent the wheel:

-eMedicine has a good overview here.
-This is a decent YouTube video discussing two blind injection techniques.
-The Ultrasound Podcast has a great video uploaded to GMEP on identifying the median nerve. Injecting near it after you find it shouldn’t be much of a problem.

Multiple steroids are mentioned in the literature. The ones used in the two studies discussed above were 40mg/1mL methylprednisolone (Dammers) and 8mg/1mL betamethasone (Armstrong). 10mg (1mL of 1%) of lidocaine can also be added to help confirm the correct location of the injection.

Just like with a median nerve block, you want to avoid damaging the nerve by injecting directly into it. Some of the techniques described above help you to avoid that, but you should still advise the patient to let you know if they have any shooting pain/numbness during the injection.


Based on the tepid response that I received to the idea of doing these injections in the ED and acknowledging that I have no business being innovative at this point in my baby career, this review will likely be my only experimentation with these injections out of the sight of a plastic surgeon. If someone smarter, more experienced and better respected ever comes along and decides to advocate for doing this, I’ll be happy to ride their coat-tails. Until then, I’ll leave the steroids in the pharmacy and wouldn’t advise anyone else to start injecting into the (Carpal) Tunnel.

Despite the procedure’s potential, that conclusion makes this a “negative” (ie – I’m not recommending that we do this) review. I imagine its readership will be as low as a negative study’s? Guess I’ll find out! If you’ve stuck with me this long, I hope you enjoyed the diversion.

I’d appreciate you supporting my ongoing posts by e-mailing this to your friends, retweeting it on twitter, following me on twitter, signing up for e-mail notification of new posts (right column), and/or following my RSS feed (top right corner).

Thanks for reading!

Brent Thoma @boringem

PS – I’ve heard that some people are having difficulty getting my RSS feed to work. Is anyone else having that issue? Any ideas how to fix it?? It works for me.

The Reference Letter Triple Crown

Interviews for the Canadian Residency Matching Service (CaRMS) are over and they were as difficult as ever. One thing nobody appreciates on the medical student side of the CaRMS equation is how difficult it is for the programs to come up with our rank list. The applicants this year were spectacular and ranking them was more difficult than splitting hairs. Fortunately, the depth of the applicants makes us confident that we will be matching two exceptional people. It would be hard not to with so many great ones to choose from!

In any case, with this year’s CaRMS cycle at an end, it is time I started to focus my mentorship posts on the class of 2014. While the CaRMS website will not open for many months, many of you have or will soon begin your electives and the associated hunt for reference letters. My next two posts will discuss these issues. This post will attempt to define “What makes a spectacular reference letter?” (according to me!) while its sequel will focus on how to get them on electives.

So what makes a spectacular reference letter? Before answering, I’d like to say preemptively that my opinions come solely through my own experience reading nearly a thousand reference letters for a Canadian FRCPC-EM program over the past few years. They are neither evidence-based nor likely to be universally agreed with and are potentially somewhat specific to EM. I’d urge you to slot my opinion in with those of others that you respect, to think critically, and then to formulate the truth-according-to-you.

With that said, the best reference letters I read each year meet the three characteristics of the Reference Letter Triple Crown:

-The Mikey Likes It Criteria
-The Award-Winning Author Criteria
-The Important Persons Criteria

It’s extremely difficult to get a reference letter meeting all three criteria, and unfortunately, even if you do you likely won’t know it. More on that later!

The Mikey Likes It Criteria

Named for the classic 1970s Life Cereal commercial, this criterion is pretty simple:

They like you a lot and are willing to go to bat for you.

If your referee likes you as much as Mikey likes Life Cereal, they’ll likely be able to convince the others to eat it (err… interview you). The best of these reference letters can be described as glowing. While most referees still try to make professional, accurate assessments, when a physician sounds like they’d name their first-born child after the applicant, you can tell, and take it as a strong endorsement.

While not impossible, this is a hard thing to develop over a few random shifts in the ED. Anecdotally, it seems that referees who have known a candidate extremely well are more likely to write references like this, possibly because they have invested a lot in developing that candidate as a physician over the years. Developing this sort of relationship is, in my opinion, one of the greatest side-benefits of doing research in EM.

Of course, the glowing letter is at one end of the spectrum. It is extremely rare to see a truly negative reference letter (I do not think that I have), but the opposite of the glowing letter is one that doesn’t truly endorse the candidate as being above the norm for a medical student with a similar level of training and EM career-focus.

The Award-Winning Author Criteria

This is a category that, for good reason, is often completely overlooked by medical students. Unfortunately, as a student who hasn’t read any CaRMS reference letters before, you have absolutely no idea who is good at writing reference letters and who isn’t. However, I can tell you from experience that, unfortunately, physicians’ ability to write helpful reference letters varies dramatically! This is partially illustrated by this hilarious spoof published in the BCMJ here (highly recommended for anyone who has spent hours of their life reading these letters!).

What does a bad letter look like?

The worst ones I’ve seen have been incredibly brief and were written in point form with minimal punctuation. Interestingly, these ones often don’t even say anything bad about the candidate. Rather, it seemed that these referees simply didn’t care to put any effort into writing them. Incredibly non-specific letters are unhelpful as are those that do not comment on all of the areas that CaRMS asks for. While a letter like this doesn’t red flag a candidate (I would actually feel bad for the candidate for unknowingly asking a poor writer for a letter), it doesn’t add anything to their application.

What does a good letter look like?

It’s detailed and specific enough to tell that the referee knew the applicant well without being incredibly long. It comments specifically and critically on the areas that CaRMS requests. In some way, it puts the candidate in context for the reader. Is this candidate a superstar? Amazing? Great? Good? Average? This can be indicated in various ways, from percentiles to the tone of a letter relative to others written by the same referee. If a letter gives me a good idea of an applicant’s strengths and weaknesses as well as the writer’s gestalt about where this person fits in their CaRMS applicant class, I know that it was a good one. To speculate, I would guess that physicians involved in reviewing applications for a residency program are more likely (but not guaranteed) to write a solid letter.

The Important Persons Criteria

The important persons criteria for a spectacular reference letter is almost exactly what you’d think and it seems to be the one that everyone guns for. I think the best definition I could give it is the following:

The extent to which your referee is known and respected in the community to which you are sending their letter.

This is important because references carry more weight when they come from people that we know and trust. Note that this does not necessarily refer to the most well-known or celebrated physician you can think of (although many people meeting this definition will also be one or both of those things), and that it can be somewhat location-specific.

Some illustrative examples:

The Program Director: Generally, all of the program directors know and respect each other. Additionally, they are all educators that work with residents and medical students so they are able to perform comparative evaluations effectively. Their opinions are taken seriously.

The Chief of the ED: The prominence of this person’s position implies that their opinion should be respected and, generally, it is. However, their letters are likely to have more impact in areas where they work because the people there know them personally (ie – the Chief of an ED at Dalhousie would be better known out East than in Vancouver, and their letter would therefore carry more weight at that location).

The Random EM Physician that you happened to get scheduled with a lot: These are still important letters and you should definitely use them. However, they get their importance from their performance on the other Criteria more than this one. The exception can be these physicians’ local program or the place where they trained. Their letter will have more impact in places where they are well-known.

Dr. Oz: He’s super famous, but not necessarily respected in our community. If the doctors that review your application agree with this article, they probably aren’t going to give this letter much weight even if he promises that you’ll be a better researcher than Ian Stiell.


The Reference Letter Triple Crown is difficult to attain. There are just too many unknowns for an applicant to know if they’ve done it. However, those that do (and also have a strong application otherwise) tend to be the ones that sweep the interview circuit in even extremely competitive specialties like EM. More commonly, strong candidates have a variety of consistently strong letters with varying strengths and weaknesses in each of the Triple Crown domains.

The Important Persons Criteria was listed last intentionally because I think that there is already an excessive and unhelpful focus on it. When gunning for letters from high-profile people, remember that while this may be the only thing completely under your control, it is not the only thing that matters. If your important person doesn’t write great letters or think that you’re that exceptional, hounding them for a reference letter probably isn’t worth it (hounding anyone for a letter isn’t a good idea, of course – more on that in the sequel!).

So what’s an applicant to do? Stay tuned next week for advice on electives and getting those Triple Crown Reference Letters. Also, if you’re reading this you’ll likely be interested in reading my CaRMS Interview Trilogy: Pre-Game: Preparing for the Interview, Game Time: The Interviews, and Post-Game: The Rank List.

If you found this post helpful, I’d greatly appreciate it if you shared it or left a comment! The appreciative feedback and endorsement has kept me writing these mentorship posts. I’d appreciate you offering support by e-mailing this to your friends, sharing it on facebook, retweeting it on twitter, following me on twitter, signing up for e-mail notification of new posts (right column), and/or following my RSS feed (top right corner).

Thanks for reading!

Brent Thoma @boringem

A Review of Systematic Reviews

Dr. Wikipedia said that:

“An understanding of systematic reviews and how to implement them in practice is becoming mandatory for all professionals involved in the delivery of health care.”

And to me, the word of Wikipedia is the next best thing to the word of Weingart.

As usual, I think Dr. Wikipedia is correct. Systematic reviews are where a lot of the evidence-based medicine that we aspire to practice is consolidated, and we require literacy in their methodology to understand the evidence for many of the things that we do or don’t do. Their importance in modern medicine is evident in this statement and this review, which note that systematic reviews help us stay up-to-date, ground clinical practice guidelines and plan research agendas.

An expert could mislead us, a case report could dupe us, an RCT could fool us, but systematic reviews are the Pharaoh that lives in the penthouse of the Evidence-Based Pyramid (picture credit here); surely they wouldn’t mislead us. Or would they?

Evidence-based pyramid


Check out some long, formal definitions here (section 1.2.2), here, here and here. My definition in a sentence:

A systematic review is the result of smart people analyzing every piece of literature they can find related to a well-defined question, assessing its methodology and appropriateness, and synthesizing all of it to provide the best answer possible with the available evidence.

I also really like this picture as an analogy (picture credit here):

networking circle with puzzle pieces

Each puzzle piece represents a study and a systematic review is the picture that results when smart people put the pieces together.


After reading of the glories of systematic reviews I finally understand why the library ladies that taught us how to do literature searches incessantly referred us to that Cochrane website. While the concept of a systematic review seems obvious to those of us that were trained with resources like the Cochrane Collaboration at our fingertips, they are a relatively new concept.

The history of systematic reviews is summarized here and here. These sources note that it was Archie Cochrane who initially agitated for developing medicine based on randomized controlled trials in his seminal book, Effectiveness and Efficiency: Random Reflections on Health Services (1972, available freely for download here or for purchase for a ridiculous amount of money here). Later, his call for the critical summary of all RCTs (1979) led to the establishment of a collaborative database of perinatal trials. In the 1980s, systematic reviews of RCTs began being published, and in 1987 he encouraged others to adopt the methodologies used in these reviews. This led to the formation of the Cochrane Collaboration shortly after his death.

What are the characteristics of a systematic review?

After reading many articles on systematic reviews, I was pretty convinced that the required characteristics could not be conveyed in anything besides a point-form list half the length of a computer screen. Fortunately, my hero Sherlock Holmes managed to pull it off in a single sentence, in a statement published >90 years posthumously. As he explained to his dear assistant Dr. Watson using a preponderance of commas, semi-colons and colons:

“there are four main indicators of a sound review: firstly, a comprehensive literature search; secondly, explicit, detailed, inclusion and exclusion criteria; thirdly, a detailed assessment of the quality of the included studies; and, fourthly, appropriate methods of pooling the data. The `Sign of Four,’ if you like, gentlemen!” He turned to me. “Is that succinct enough for your memoirs, Watson?” I nodded. “In fact it’s… er… elementary!”

Thanks for breaking it down for us, Sherlock! You came and went before your time.

For a much, much, much more detailed outline of what makes the ideal systematic review, check out The PRISMA Statement (PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses). This 2009 open-access statement (written in Canada, eh!) consists of a 27-item checklist of things to include when reporting a systematic review.

What’s a meta-analysis?

Prior to writing this post I often confused systematic reviews and meta-analyses. The terms are pretty much interchangeable, aren’t they?

Apparently not. Unlike a systematic review, a meta-analysis is a statistical process used to summarize and combine data from multiple studies. Meta-analyses are graphically represented by a blobbogram (aka a “Forest Plot” – see the distinction that Cochrane makes between the two here (section 1.2.2)). So, while systematic reviews often include meta-analyses, they do not necessarily require one. A systematic review that does not include a meta-analysis is sometimes called a narrative review.
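To make the “statistical process” concrete, here is a minimal sketch (with invented effect sizes, not data from any real review) of fixed-effect inverse-variance pooling, the core computation behind many meta-analyses:

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]  # more precise trials count for more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios from three small trials of the same question
effects = [-0.4, -0.6, -0.2]
ses = [0.30, 0.25, 0.50]
est, se = pool_fixed_effect(effects, ses)
print(f"pooled estimate {est:.2f} (95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f})")
```

This is what each row and the summary diamond of a forest plot represent: individual estimates, and their precision-weighted combination.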

What’s a narrative review?

This is where the definitions get a bit mucky. The definition of a systematic review that I provided above refers to a systematic review rooted in the evidence of a meta-analysis. A narrative review is a different thing. This article defines it:

Narrative reviews are the traditional approach and usually do not include a section describing the methods used in the review. They are mainly based on the experience and subjectivity of the author, who is often an expert in the area. The absence of a clear and objective method section leads to a number of methodological flaws, which can bias the author’s conclusions.

Put more bluntly, while the search strategies used may be included to provide the guise of a systematic approach, a narrative review is expert opinion in a systematic review’s clothing and is likely to contain all of the reviewer’s inherent biases. While these reviews can be useful to examine questions in which there is not sufficient data appropriate for meta-analysis or to review broader topics, they fulfill a different function than a systematic review of a particular clinical question that is supported by a meta-analysis.

Throughout this post the term “systematic review” refers to a review supported by a meta-analysis. Interestingly, while they are not what Archie Cochrane was asking for when he advocated for summaries of RCTs, narrative reviews are far more numerous than systematic reviews.


Systematic reviews sound awesome. As outlined in this spectacular article, a well-done systematic review can increase the precision of a conclusion, assimilate a large amount of data, decrease the delay in knowledge translation, allow formal comparison of studies, and identify/reflect on heterogeneous results. One would think that their accessibility and brevity (at least relative to the studies they summarize) would give them an important role in knowledge translation.

While systematic reviews can be a great resource for all of these reasons, they also have less desirable characteristics. To summarize: too many are published and too few are updated, reporting and quality standards are variable, and bias is often not well controlled. These problems may continue to contribute to the delay in translating knowledge into practice.

Systematic review overload

The rate of publication of systematic reviews was pegged at 11/day in 2007 (>4000 per year!!) with the trend suggesting that this will continue to increase. With so many systematic reviews, how can we possibly keep up? In addition to the difficulty of keeping up with all of the systematic reviews that are produced, there is substantial opportunity cost associated with the publication of multiple reviews on the same topic.

Opportunity cost explained (cartoon credit here):


Efforts have been made to address the problem of redundant reviews. The PROSPERO project, a database of prospectively registered systematic reviews that already contains >1000 records, may in the future allow for notification of systematic reviews in progress and prevent redundancy of effort.

They are out of date

This study noted that the rate of trial publication had increased from 14/day in the 1970s to 75/day in 2007. A 2007 Cochrane Colloquium presentation outlined in the same study concluded that more than half of the Cochrane Collaboration’s systematic reviews were out of date! A survival analysis of systematic reviews found a median survival time of only 5.5 years, and that 7% were out of date at the time of publication! Systematic reviews “expired” secondary to new quantitative evidence (a change in primary outcome or mortality of >50%) or new qualitative evidence (a changed statement of effectiveness, new evidence of harm, or new caveats affecting practical application). With this ongoing proliferation of trials and the resource intensity required to complete a systematic review, how will the medical community possibly keep up?

I am unaware of any projects specifically aimed at addressing this problem. However, one intriguing idea (unfortunately, I lost the reference – help! If anyone sees something on this please let me know) that might have a small effect if it gained widespread adoption was to direct residents to write and/or revise systematic reviews instead of conducting their own basic research projects. This missing publication argued that a systematic review would be better for residents’ development of critical appraisal and methodological skills, as well as better for curation of the medical literature, than the (often) small resident studies of questionable significance.

They vary in their structure and reporting

Examinations of systematic reviews have reported substantial variability in the methodologies used and the characteristics reported. This heterogeneity makes it difficult or impossible to determine quality, compare methodology between studies, or perform critical appraisals. Additionally, it allows for contradictory reviews to be published whose methodologies cannot be effectively compared because of incongruous reporting.

This problem is not universal. The Cochrane Collaboration has strict guidelines for how their systematic reviews must be reported. Additionally, the PRISMA guidelines are readily available to guide the reporting characteristics of systematic reviews. Hopefully the next time a review is done there is substantially more compliance and homogeneity in reporting standards.

They do not account for bias

Publication bias and reporting bias are well-documented phenomena that result from the selective publication and submission of trials with desirable and/or positive findings. Their potential effect on systematic reviews is a double whammy: in addition to contending with the biased publication of the trials that make up the components of their meta-analyses, systematic reviews without positive/desirable findings may themselves be less likely to be published. As this is one of the biggest criticisms of systematic reviews, substantial effort has been made to combat it.

The biggest effort to minimize publication bias in trials has been to deny publication to those that were not prospectively registered. In 2004 the International Committee of Medical Journal Editors (ICMJE) announced in the NEJM that their journals would no longer publish trials that had not been prospectively registered. It was thought that prospective registration would prevent the data from small trials with null results from “disappearing” if not published as a result of publication bias or selective reporting bias. Unfortunately, for reasons beyond the scope of this post, the prospective registration of trials has not been completely successful. As outlined in this article based on a sample of trials registered with the WHO’s International Clinical Trial Registry Platform, these registrations often contained non-specific, poor quality, or missing information. This article showed that, while ICMJE journals are publishing registered trials, they don’t seem to mind if that registration is inadequate. Hopefully, efforts to improve compliance are ongoing.

Systematic reviews do no better. This 2007 review found that only 23.1% of the 300 systematic reviews from 2004 that it examined assessed for publication bias. While difficult, there are analytical techniques that can be used to quantify publication bias in meta-analyses. Additionally, efforts must be made to track down every piece of relevant data, as the major databases miss a significant number of relevant studies, unpublished clinical trials and other grey literature. The same 2007 review also noted that not a single one of the systematic reviews that it examined was registered. Although this was not a common practice at the time, PROSPERO (a database of prospectively registered systematic reviews) has since been developed for this purpose. While its primary goal is to bring unpublished systematic reviews to light, if used effectively it could also reduce reporting and publication bias in systematic reviews.
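For the curious, one of the analytical techniques for quantifying publication bias is Egger’s regression test for funnel-plot asymmetry. Here is a minimal sketch in Python; every number below is made up for illustration and does not come from any real meta-analysis:

```python
import numpy as np
from scipy import stats

# Hypothetical log odds ratios and their standard errors from the trials
# in a meta-analysis -- illustrative numbers only.
effects = np.array([0.35, 0.41, 0.55, 0.62, 0.80, 0.95])
ses = np.array([0.10, 0.15, 0.20, 0.25, 0.35, 0.45])

# Egger's regression: standardized effect vs. precision. In a symmetric
# funnel plot the intercept is near zero; an intercept significantly
# different from zero suggests small-study effects such as publication bias.
res = stats.linregress(1.0 / ses, effects / ses)
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
print(f"Egger intercept = {res.intercept:.2f} (p = {p_value:.3f})")
```

A significant intercept only flags asymmetry; it cannot tell you whether the asymmetry is due to publication bias, true heterogeneity, or chance.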

They have not improved knowledge translation

This is a debatable statement. I certainly think that systematic reviews improve knowledge translation. When was the last time that you needed a quick answer to a clinical question and passed over a systematic review for an RCT? One benefit of the proliferation of systematic reviews is that there seems to be one for everything. Searching “Systematic Review” on Google Scholar returns 2.58 million results in 0.04 seconds.

On the other hand, despite our proliferation of systematic reviews, challenges still exist in translating the massive amount of information that is available into evidence-based clinical practice.  Simply disseminating the best evidence does not seem to translate it effectively into practice. This may be partially because of the many problems that still exist with systematic reviews, or it may simply be because change is hard.

Perhaps they have not improved knowledge translation enough.

The Next Frontier

While systematic reviews have demonstrated their utility in medical science, they are not perfect.  If even these are insufficient, what is the next frontier?

Could it be FOAM? If you are reading this blog, you are likely engaged in the online community dedicated to providing Free Open-Access Medical Education (they’re pretty much the only people who read this stuff). The content produced by this group is made freely available, open to discussion and free of industry-bias. As discussed in my previous post FOAM: A Market of Ideas, the dissemination of the best content is supported when other members of the community publicize it.

It could be argued that FOAM is a regression to the bottom of the evidence-based pyramid where bias-soaked expert opinion rules the day. However, the expert opinion at the bottom of the pyramid is supposed to prevail only in the absence of evidence. In a world with an overwhelming number of systematic reviews, I would like to think that we could flip the pyramid on its head (picture credit here):


to represent a movement that allows the masses to take control of the medical literature through an ongoing, crowdsourced, instantaneous review of the best evidence. The Skeptic’s Guide to Emergency Medicine has been explicit in stating that its goal is to decrease the knowledge translation gap to a single year… and I think they’re on to something.


I think this may have been my longest post ever and I didn’t include everything that I intended. Stay tuned for more on systematic reviews, including an (over)simplification of chi2, funnel plots, blobbograms and an approach to appraising a systematic review. This stuff is certainly boring, but I hope that explaining it gives you (and me) a better understanding of evidence-based medicine. I’ll try to keep it tolerable by looking at contemporary and/or historically significant studies.

As always, thanks for reading. I always appreciate the feedback left in my comments so please leave some! If you thought this was a helpful review, I would also appreciate it if you referred your friends, followed me through e-mail (right column), signed up for my RSS feed (top right corner), or tweeted about it and followed me on twitter.


Brent Thoma @boringem

An Approach to Palliative Care in the ED

Last weekend my residency program had a winter retreat. It was a smashing weekend. In addition to enjoying the excellent company, tobogganing, cross-country skiing, and Baileys-infused Tassimo coffee, we also learned a lot. Our guest speaker was Dr. Leneela Sharma, an emergency physician practicing in Edmonton, Alberta who has additional training in palliative care.

In retrospect, I recognize that my education on palliative care is extremely lacking. I never rotated through a palliative rotation in medical school and this was the first lecture on the subject that I had in residency. I thought the FOAM literature (FOAMiture?) was similarly scant, with an EMgoogle search of “palliative care” identifying a “short snippet” summary of an article in February of 2012 on LITFL, several reflective pieces like these ones by torontoemerg blog and storytellERdoc, and a reasonable amount of traditionally published literature. However, following the posting of this blog I was directed to several other palliative care resources, including a big one by Scott Weingart on EMCrit that I (embarrassingly) missed completely (I clearly need to spend more time on my FOAM pre-search: huge apologies). Also, a keen medical student (who tweets from @Want2beMD and blogs here) directed me to a great discussion of DNR orders on the Canadian radio show White Coat Black Art, and Chris Nickson linked me to the extensive GeriPal blog by @GeriPalBlog. (Please let me know if I missed anything else so I can add and learn from it!)

With many thanks to Dr. Sharma who encouraged us to disseminate her teaching in any way we can, I will be borrowing heavily from her presentation to work through her 4 step approach to the palliative patient in the ED. But first: what is palliative care and why does it matter in the ED?

What is palliative care?

Dr. Sharma defined it as “the prevention and relief of suffering.” This is a much broader definition than I would have provided if asked prior to the talk, but following her presentation, I can see how well it fits.

Why does it matter in the ED?

Obviously, because if our patients will suffer / are suffering, we want to be able to relieve it. Dr. Sharma went on to define the primary skills of palliative care in the ED as managing pain and symptoms, delivering bad news, and helping families to make difficult decisions. Three things that we do on a daily basis and could likely improve upon with increased focus.

The 4 step approach:

1. Assess the ED Presentation

Stable or unstable?

If they are unstable, determine if they have an advance care directive that specifies their wishes regarding resuscitation and provide care in accordance with it. This is standard emergency practice that we do not have trouble with. Things get more complicated when they are stable. In that case, besides doing the regular history and physical exam, we need to work through the ABCD’s of palliative care.

The ABCD of Palliative Care

If they are stable/stabilized, assess the following in addition to the usual history and physical exam:

A – Advance Care Directive

Acquire it if it is not with the patient. Review it to get a good understanding of the patient’s wishes. If possible, review it with the patient to ensure that their wishes have not changed.

B – Better

Make the patient feel better! Re-hydration, pain control, antiemetics, etc.

C – Caregivers

Determine who they are, how they are managing the patient’s care, and how they are coping at home.

D – Decision making capacity

Determine if the patient is still able to make their own decisions regarding their care. If not, get in contact with whoever is responsible for decisions.

2. Assess the global end-of-life trajectory

The disease trajectowhat? Again, perhaps it is just me, but the explicit discussion of disease trajectories and the idea that they can be labelled on our charts had evaded my medical education prior to this. As it turns out, there are four well-described disease trajectories. They are outlined in this freely available 2003 JAMA article, which also provided this depiction:

Sudden death

I don’t think this one needs much explanation for those working in emergency medicine. This is the trajectory that previously healthy folks take when they suddenly develop fatal pathology.

Terminal illness

Cancer fits this trajectory. It is expected that the patient will have a prolonged illness while generally maintaining their function until near the end when function sharply drops off.

Organ failure

Most often referring to cardiac and respiratory failure, the trajectory of organ failure involves a relatively steep decline with intermittent exacerbations. Patients ultimately die during one of these exacerbations, and when they survive an exacerbation, they never regain their prior level of function.


Frailty

These are the elderly folks that never develop any severe illness but slowly decline in function and ultimately die from a complication of their progressive disability.

3. Determine the prognosis

Based on a knowledge of disease trajectories, the notes of the consulting physicians, and the patient’s current level of function, it should be possible to formulate a reasonable prognosis. This can be done qualitatively (curable vs non-curable) or quantitatively (length of time).

I noted that I often shy away from committing to a prognosis unless the patient or their chart spells it out for us. I suspect that many of my fellow EM residents +/- consultants act similarly. This may be the result of a lack of knowledge of prognostic markers, the short-term nature of the care that we provide, or a lack of teaching on this aspect of care. Efforts such as the development of the EPEC-EM curriculum (Dr. Sharma’s primary resource) have been made to close this knowledge gap, but I think it still exists. This is unfortunate, as Dr. Sharma demonstrated the importance of the prognosis in considering interventions and formulating the goals of care.

For example:

For a patient that is expected to have only days to live the goals of care might be to provide comfort with interventions such as pain control and getting the family to the bedside.

For a patient that is expected to have weeks to live, some patients may prioritize quality of life over duration. These patients might prefer going home with comfort measures to being admitted for an invasive workup and/or treatment.

With the prognosis stated clearly, the goals of care and interventions to consider follow much more clearly. Of course, the goals of care and potential interventions should be discussed openly with the patient and/or their family as they will differ between patients with similar prognoses based on their values. However, if we are unsure of the prognosis or hesitant to label it, these discussions are not held at the same depth and we may end up recommending interventions that the patient may not have otherwise chosen.

How can we better prognosticate?

This review provided an excellent overview of physician views on prognostication. One important pearl it mentioned is the importance of explicitly prefacing any discussion of prognosis with the acknowledgement that we can’t predict how an individual will respond to an illness or its treatment.

Dr. Sharma had two great pearls to assist in prognostication. She recommended asking the patient:

“Is your cancer curable?”

While this question is obviously only applicable to cancer patients, I find it striking in its simplicity and effectiveness. In four short words, it invites the patient to tell us their understanding of their prognosis and subtly opens the door for them to share their hopes about it. Have they been told that their illness is terminal? Do they have a timeline? And what are their goals?

“How much time do you spend in bed?”

Dr. Sharma told us that the most important prognostic factor is functional ability and that this question is an excellent way to measure it. She stated that if 50% or more of a patient’s time is spent in bed, their median survival is approximately 3 months. As seen in the terminal illness trajectory, the development of significant symptoms and functional impairment places them on the steep part of the curve. I looked for a specific reference for this information, but could not find one. When asked, Dr. Sharma noted that this is a rule of thumb based on various prognostic tools that she has studied and references included in the EPEC-EM curriculum. She recommended the Karnofsky Performance Status as a tool to quantify the functional ability of palliative patients more objectively. While the prognostic value of this question should be considered expert opinion until further evidence (perhaps from this study) is published, it seems like a great way to inquire about functional status.

While I expect that prognostic knowledge and comfort with speaking to patients about prognosis will develop with clinical experience, it will be of no use if we do not offer patients the knowledge that we do have.

4. Make a Care Plan

A palliative patient’s care plan should be consistent with their presenting problem, disease trajectory and prognosis. It may be drastically different than the care provided to a non-palliative patient and will depend on their goals of care. Dr. Sharma classified goals as cure, providing comfort, prolonging life and preventing complications and noted that they may shift over time.


Dr. Sharma illustrated the concepts that she discussed by posing the case of a hypothetical patient with lung cancer who presented to the ED multiple times with progressive SOB, worried that she had developed a PE. At her previous visits she had been sent for CTPAs that did not show emboli. After working through these steps, it was evident that the patient knew that her cancer was terminal and accepted that she only had weeks to live, but was hoping to live out her last days at home. She continually presented because she was worried that her worsening SOB was due to a PE that could further shorten the brief period of time that she had left. Rather than order another CT, treatment with outpatient LMWH injections was arranged to prophylactically treat any clots and alleviate her anxiety.

Was this standard of care for PE workup and treatment? No. But it made sense in the context of this patient who understood she was at the end of her disease trajectory and presented with the goal of prolonging her quality of life at home for as long as she could by preventing a PE. At earlier presentations, her goals of care would have been cure and an extensive workup +/- hospital admission would have been as appropriate for her as any other patient.


I found Dr. Sharma’s presentation to be enlightening and think that all emergency physicians should develop a formal approach to seeing palliative patients. While using an approach like this may take more time up front, the above example illustrates the potential results: improved patient and family satisfaction, more appropriate care, decreased resource utilization, and, likely, improved physician satisfaction. Dr. Sharma deserves a huge thank you for traveling 8 hours to speak to us on our retreat and reviewing this post.

As always, I appreciate any comments – they always teach me a ton. If you found this post useful, pretty please pass it on via e-mail, post it on Facebook, or tweet about it on twitter. You can keep up with new posts by signing up for e-mail notification in the right column, adding my RSS feed to your list by clicking on the link in the top right corner, or following me on twitter @boringem. Thank you!

Brent Thoma @boringem

CaRMS Post-game: The Rank List

In the final post of the trilogy I will be discussing the dreaded rank list. If you haven’t already, I suggest that you review my previous posts on CaRMS interviews before this one – CaRMS Pregame: Preparing for the Interview and CaRMS Game Time: The Interviews.

Following the interviews you will likely be exhausted. Hopefully you have a few days before you go back to work. Regardless, consideration of your rank list should begin right away. After visiting multiple programs, often with the same group of applicants, one program starts to blend into another and the tour gets hazy. If you didn’t do so while you were traveling, consider making pro/con lists or some jot-notes about each program and city. At least get something down while it’s fresh – then you’ll have a few weeks to sweat over it, reconsider, and make multiple mood-induced changes.

This post will discuss thank you notes, program rank lists, applicant rank lists and going unmatched.

Thank you notes

I am often asked whether or not applicants should send thank you notes to the programs that they interviewed at. Some seem to feel strongly that you shouldn’t, others that you should, and amongst the latter, opinion is divided on whether they should go in the form of an e-mail or a card.

Honestly, I don’t think it matters at all. I certainly don’t think whether you did or did not formally say thank you will raise or lower your stature on anyone’s rank list, and here’s why I’m confident about that: the rank lists for most programs are decided in a matter of hours after the interviews. Nobody wants to meet again later. Everything is fresh in the minds of the interviewers that day, so we get it done then.

So send them if you like. If you were raised thinking that this is essential, then I wholeheartedly support that and hope that any spawn I happen to have are as considerate as you. On the other hand, don’t sweat it if you were not. Realistically, the programs are trying to recruit the best applicants and you just spent your hard-earned money (errr… line of credit) to travel there for an interview. Just as you could thank us for giving you an interview, we could thank you for coming.

Program Rank Lists

So how do programs make their rank-lists?

This will vary from program to program, but from what I’ve seen and heard it goes something like this: Generally, everyone involved in the interview process (interviewers, tour guides, hang-out room residents, etc.) gets together. Each of the interviewers ranks the applicants independently based on their subjective opinions of applications, personal letters, reference letters and interviews. Those rankings are combined and the applicants are sorted. Then the discussion begins.

What does everyone think? Does it make sense? What is up with the outliers? (Applicants ranked disproportionately high or low by one of the interviewers.) What do the residents think? (They are free to advocate for applicants.) Any input from the admin support? (No one wants to match someone that’s a jerk to the administrative folks.)

There is always debate. Most of the attention is paid to the top of the list because these are the applicants most likely to match. A consensus slowly develops and the final rank list is submitted by the PD.

Keep in mind that there is going to be a lot of variation on how this is done from program to program and specialty to specialty.

Oh, and contrary to what I frequently hear, the interview does matter. A lot. Sure, we have some idea who the top applicants will be prior to the interview based on your applications, but nothing is decided and a good interview can bump you up almost as far as a bad one can drop you down.

Your Rank List

While I hope that the rest of this post was informative for anyone going through CaRMS now/soon, this is the only part that matters because, after interviews are over, it is the only thing you can do that will affect the outcome. For a more detailed set of examples, check out the CaRMS site here. For a more general approach on how to decide which program is best for you, check out this great post by Nikita Joshi on ALiEM.

How should I rank?

While deciding between programs is incredibly difficult for an applicant, the ranking strategy itself is, fortunately, extraordinarily simple. What program/city do you want to match to the most? Make that #1. How about 2nd most? That’s #2. This process continues. That is as complicated as it needs to be. Please IGNORE anyone that tries to tell you anything different because they do not know what they are talking about!!

I used to think that this was self-evident. However, I frequently speak to med students that seem to think they can out-smart the system somehow and, for example, guarantee that they will match by ranking highly the programs that liked them the most. This is wrong and demonstrates a lack of understanding of the CaRMS process.

I will explain why, but before I do, I hope you can promise me that no matter what anyone tells you, you will not do this. The most likely result of this ignorant strategy is that you will end up somewhere that you do not want to be just because they seemed like they liked you. Newsflash: we act like we like everyone, just like you all act like you like us. There is no “like” in CaRMS, there is a match or there isn’t.

Why should I rank like that?

CaRMS uses a seemingly complicated algorithm to do something very simple: match each applicant to the highest program on their rank list that has a spot available for them. It favors the applicant in that it only considers the programs’ preferences after the applicants’.

Example 1

For example, say Winnipeg really likes you so they ranked you number 1. You thought they were really great but wanted to go to Toronto EM more, so you ranked them number 2. The only way that Winnipeg will match you is if Toronto fills all of its spots before it gets to you on its rank list. That is to say, you will only drop to your 2nd choice if you cannot match to your 1st choice.

Example 2

Say you were silly. You really want to match to Toronto, but thought that Winnipeg liked you more. Due to:

1 – A misguided belief that you would be more likely to match if you rank the schools that like you more higher, or

2 – A desire to match to a school that likes you. Everyone likes to be wanted, right?

You rank Winnipeg 1st and Toronto 2nd. Because Winnipeg ranked you #1, you would match there no matter what. Even if you were wrong about their level of “like” for you and Toronto also ranked you #1, you would still match to Winnipeg because it was your 1st choice. Toronto would then move to the next person on its list and you would go to Winnipeg even though you had really wanted Toronto.
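For the programmers in the audience, the two examples above can be sketched as a toy applicant-proposing deferred-acceptance match. All of the names, rank lists, and one-seat capacities below are hypothetical, and the real CaRMS algorithm handles additional wrinkles (like the couples match) that this sketch ignores:

```python
def deferred_acceptance(applicant_prefs, program_prefs, capacity):
    """Applicant-proposing deferred acceptance: each applicant 'proposes' down
    their own rank list; programs only tentatively hold their favourites."""
    next_pick = {a: 0 for a in applicant_prefs}   # next program each applicant will try
    holds = {p: [] for p in program_prefs}        # tentative matches per program
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_pick[a] >= len(applicant_prefs[a]):
            continue                              # rank list exhausted -> unmatched
        p = applicant_prefs[a][next_pick[a]]
        next_pick[a] += 1
        if a not in program_prefs[p]:
            free.append(a)                        # program didn't rank this applicant
            continue
        holds[p].append(a)
        holds[p].sort(key=program_prefs[p].index) # program keeps its highest-ranked
        while len(holds[p]) > capacity[p]:
            free.append(holds[p].pop())           # bump the lowest-ranked applicant
    return {a: p for p, held in holds.items() for a in held}

# Example 2 from above: you rank Winnipeg over Toronto; both programs rank you #1.
result = deferred_acceptance(
    applicant_prefs={"You": ["Winnipeg", "Toronto"], "Friend": ["Toronto"]},
    program_prefs={"Winnipeg": ["You"], "Toronto": ["You", "Friend"]},
    capacity={"Winnipeg": 1, "Toronto": 1},
)
print(result)  # you land in Winnipeg even though Toronto ranked you first
```

Because you never propose past a program that accepts you, how much a program “likes” you can never pull you below your own ordering, which is exactly why you should rank by your true preferences.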

Once again, there is no “like” in CaRMS, there is a match or there isn’t.

Remember that no matter where you match, they will be happy they matched you because they wanted you more than the people below you in round 1 or taking their chances in round 2.

The Couples Match (added 21/1/12)

Dr. du Rand’s comment below reminded me that I neglected to speak to the couples match. Whoops. Better late than never, I guess!

The couples match is a great thing. With the preponderance of medcest that occurs, keeping doctor couples together and happy is important and allowing this option in the match truly helps to do that. Some people might say that you shouldn’t couples match, especially if you’re trying to match to a competitive specialty, because it makes you less competitive. That is untrue. While it is more complicated, if you are willing to consider every option (some couples may not) you can have exactly the same chance of matching as you would if you matched independently. How so?

The key is that you can list as many combinations for matches as you want. Your top choices will likely be partner 1 and partner 2’s top choice specialty at each institution. Then you can rank different institutions that are near enough that you can still live together (ie Hamilton / Toronto), then you can rank institutions that would require you to live far away (ie Vancouver / Halifax). Finally, you can rank you and your partner at each institution that you interviewed at independently (ie Vancouver / Unmatched). As you can see, if you rank every possible combination the chances of you matching are the same as if you matched independently, except you are much more likely to end up somewhere together.
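The list-building described above can be sketched in a few lines. Everything here is hypothetical: the programs, the “near enough to live together” pairs, and the scoring weights are made up, and the real decision is obviously a conversation between partners, not a formula:

```python
from itertools import product

# Hypothetical rank lists for each partner (None = willing to go unmatched).
partner1 = ["Toronto EM", "Hamilton EM", "Vancouver EM", None]
partner2 = ["Toronto Peds", "Hamilton Peds", "Halifax Peds", None]

# Made-up pairs of programs close enough for the couple to live together.
together = {("Toronto EM", "Toronto Peds"), ("Toronto EM", "Hamilton Peds"),
            ("Hamilton EM", "Toronto Peds"), ("Hamilton EM", "Hamilton Peds")}

def score(combo):
    """Lower is better: prefer being together, then each partner's own
    ranking, and treat going unmatched as the worst outcome."""
    p1, p2 = combo
    s = 0 if combo in together else 50                 # living apart is bad...
    s += 200 if p1 is None else partner1.index(p1)
    s += 200 if p2 is None else partner2.index(p2)     # ...unmatched is worse
    return s

# Rank every possible combination -- ranking them all is what keeps the
# couple's chance of matching the same as two independent applicants'.
rank_list = sorted(product(partner1, partner2), key=score)
rank_list.remove((None, None))  # no point ranking "both unmatched"
print(rank_list[0])  # the couple's first choice
```

With 4 options each, the couple submits 15 ranked combinations; the match then works down that list exactly as it would for a single applicant.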

Circumstances for some couples might preclude the option of living in far away cities. Children and/or marriage and/or various other circumstances make this completely justifiable, but it would increase the possibility that one member of the couple ultimately goes unmatched when they otherwise wouldn’t have. This would occur if, for example, the only program that would have taken partner 1 was UBC and the only program that would have taken partner 2 was Dalhousie, and that combination of the two of them was not ranked. Either UBC/Unmatched or Unmatched/Dalhousie would then end up being the match, depending on which was ranked higher.

Regardless, I think couples matching is the way to go for every medical couple. If they are willing to part or try long-distance for the sake of their careers, they can rank every possibility so they are more likely to end up together but no more likely to be unmatched. If they are unwilling to part or try long distance they can ensure that they do not while still ranking the best options for their future together.

Going Unmatched

One last word of advice on your rank list. Some students seem inclined to not rank some of the programs that they interviewed at. In a way, this is okay. You certainly shouldn’t rank a program if you would absolutely not want to match to it.

However, you should remember that every time you don’t rank a program, you are effectively saying that you would rather be unmatched than go there.

If that is the case, you have some cojones and I wholeheartedly support you. However, before you do this, consider what you would do if you went unmatched. Would you take a year off and try again? Try to match into whatever is left in round 2?? And are those options preferable to one of the programs that you didn’t rank??? In most cases, I don’t think that they are.


If you go unmatched, can’t you just get into another program and transfer into the program you wanted?

Great question. This is a backup strategy for some students, especially those trying to match to a competitive specialty like Emergency, Plastics, Ophthalmology, Dermatology, etc. Many have pulled it off and it may be possible for you too, but it may not be. Barriers to transfers that I can think of include:

-Residents that matched to Family Medicine generally only have 2 years of funding. This makes it difficult to acquire funding for an additional 3 years.
-Funding is generally given by the province which makes it more difficult to transfer between provinces.
-Some programs have extra capacity; many do not. Some programs are hesitant to accept transfers or refuse them outright. Just because they’ve taken transfers before doesn’t mean that they will continue to do so in the future.
-All programs will take some time to consider transfer applicants. Many will say no and they have every right to.
-The program that you do match to may not be excited about the idea of letting you transfer out.
-If they didn’t match you in CaRMS it may be because, unfortunately, they didn’t want you in their program.
-Matching to a program with the intention of transferring out of it is generally considered uncool for multiple reasons that I’m sure you can conjure up yourself.

While the fact that it is a possibility should be acknowledged, I don’t think matching with the intention of transferring is a great CaRMS strategy.


And this concludes my CaRMS interview trilogy. As the year progresses I will be posting on reference letters and CaRMS applications, among other things, so do check back.

Orrrrr… you could sign up for e-mail reminders each time I post something (see the right column), or sign up for my RSS feed (top right corner), or follow me on twitter (@boringem). If you found these articles helpful, I’d also appreciate it if you could tweet about them and/or forward them to your classmates.

Thanks for reading!

Brent Thoma @boringem