Tuesday, November 20, 2007

Statistics and the Medical Residents Who Misunderstand Them

Dr. Helen recently blogged on a study showing that medical residents aren't too good at statistics. This of course raises the possibility that they won't be able to critically read the medical literature and that their patients will ultimately be shortchanged.


As one who teaches residents, I have a couple of points to make about this.

First of all, the study's results seem quite plausible to me. I like to think, however, that our residents at Harbor-UCLA do a little better, since we run a weekly Journal Club (which I occasionally lead). It's fairly well-attended because there's usually a free drug company lunch...I know. That's another blog topic altogether.

In Journal Club, we take one article with important clinical implications and dissect it in all of its gory details. We pay particular attention to the biostatistics and methodologies involved. The purpose of such discussions is to understand the study and its potential biases. We also try to determine whether its results are valid and its conclusions generalizable to the patients we actually treat. The bottom line is that I think we make a fairly reasonable attempt to teach this stuff.

I will concede, however, that my belief that our residents are not typical of those studied may be wishful thinking on my part.

But there are some larger issues here. While I agree that an understanding of basic biostatistics is essential to putting the articles comprising the medical literature in their proper perspective, many of the methodologies currently employed are extremely complex. Without a very strong background, rigorous understanding of a lot of these articles is all but impossible.

Even for those papers that don't use esoteric statistical methods (stochastic modeling, complex applications of logistic regression, nonlinear correlation methods, etc.) the amount of time necessary to digest them just isn't there most of the time; not for residents, not for most clinicians. That being the case, many doctors rely on clinical guidelines for basic decision-making. The idea is that a bunch of top experts in a particular specialty get together in Zurich for a week or so and discuss the world's literature between ski runs. They then hammer out a set of recommendations that summarize their collective knowledge.

These guidelines are generally quite readable and have the advantage of representing a consensus of these supposedly great minds.

Admittedly, such position statements have many disadvantages. All kinds of biases can creep into them, especially given the substantial conflicts of interest that top opinion leaders accumulate over their careers. The reality is that few alternative solutions are as widely embraced at present, and no one is going to read every significant study that comes out on his own.

My point is, don't come down too hard on the poor resident with his or her suboptimal understanding of basic statistics. I bet Sir William Osler wasn't that mathematically inclined either.

Here is a whimsical piece of irony: Dr. Donna Windish, the first author of the study mentioned above, points out that many residents read only the abstract of journal articles rather than the body itself. She goes on to say that there are data to suggest that abstracts don't accurately reflect the implications of their studies. I then noticed the following in the abstract of her paper:
"Residency programs should include more effective biostatistics training in their curricula to successfully prepare residents for this important lifelong learning skill."
For this statement to be true, the authors would have to be able to cite data that establishes that
  1. Such training does in fact "successfully prepare residents for this important lifelong learning skill."

  2. This learning skill makes them demonstrably better clinicians (or why develop it?).
I seriously doubt that either of these points has been firmly proven in the literature...which more or less justifies Dr. Windish's point that abstracts don't accurately state the implications of their studies.

Finally, if my discussion of this study seems a bit superficial, it's probably because I only read the abstract.


1 Comment:

Anonymous Clinical Cases and Images - Blog said...

Some of our residents found this simple approach useful:

PP-ICONS: A Simple Method for Evaluating the Clinical Literature

http://www.aafp.org/fpm/20040500/47asim.html

November 20, 2007 10:40 AM  

