Friday, June 27, 2014

Check the Y-axis when reading a chart


Here is an interesting way to make statistics more persuasive without really lying.

Take a look at this chart. It was labeled "Hospital readmissions sharply declined."


Now look at this one. The same data are charted, but the decline does not look nearly as sharp.


A common trick is to truncate the Y-axis of a chart. Proponents will tell you this makes the chart more compact and easier to read. The downside is that a small change is made to appear much larger than it really is.
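To see the effect for yourself, here is a minimal sketch in Python with made-up numbers (not the CMS data) that plots the same series twice, once with a truncated Y-axis and once with the axis starting at zero:

```python
# Illustrative data only: a roughly one-point decline in readmission rates.
import matplotlib.pyplot as plt

years = [2007, 2008, 2009, 2010, 2011, 2012]
readmission_rate = [19.0, 19.0, 18.9, 18.8, 18.4, 17.8]  # percent (hypothetical)

fig, (ax_trunc, ax_full) = plt.subplots(1, 2, figsize=(10, 4))

# Truncated axis: the small drop fills the whole frame and looks "sharp."
ax_trunc.plot(years, readmission_rate, marker="o")
ax_trunc.set_ylim(17.5, 19.2)
ax_trunc.set_title("Truncated Y-axis")

# Axis starting at zero: the same data look nearly flat.
ax_full.plot(years, readmission_rate, marker="o")
ax_full.set_ylim(0, 25)
ax_full.set_title("Y-axis starting at zero")

for ax in (ax_trunc, ax_full):
    ax.set_ylabel("Readmission rate (%)")

plt.tight_layout()
plt.show()
```

Identical data, two very different impressions.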

Even though the change was said to be statistically significant in this instance, the casual reader would certainly be far more impressed by the sharp decline depicted in the first chart.

I believe the first chart was produced by the Centers for Medicare and Medicaid Services (CMS) to show that its policy on readmissions was working. I was unable to find the original source for it.

This also works with other graphics as shown in this pair of bar charts from the Visualizing Data blog.


I encourage you to look for this type of manipulation when you read research papers, pharmaceutical ads, or any other depiction of data.




Wednesday, June 25, 2014

1 in 5 elderly U.S. patients injured by medical care (or not)

A recent paper in the BMJ journal Injury Prevention found that almost 19% of Medicare beneficiaries suffered serious adverse medical events (AMEs), 62% of which were identified from outpatient claims. Not surprisingly, poorer health, more comorbidities, and impaired activities of daily living were associated with higher risk.

Over 12,500 patients were surveyed, and their Medicare claims were analyzed. Nearly 80% of patients who did not experience an AME survived to the end of the study, compared to 55% of those who did. The authors did not report confidence intervals or p values, and statistical significance was not mentioned.
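For what it's worth, a crude back-of-the-envelope test suggests a survival gap that size would be overwhelmingly significant at these sample sizes. The group sizes below are my assumptions (19% of roughly 12,500), not figures from the paper:

```python
# Two-proportion z-test on the reported survival rates.
# Group sizes are assumed (19% of ~12,500 patients), not taken from the paper.
from math import sqrt
from statistics import NormalDist

n_total = 12500
n_ame = round(0.19 * n_total)   # ~2,375 patients with an AME (assumed)
n_no_ame = n_total - n_ame      # ~10,125 without (assumed)

p_ame, p_no_ame = 0.55, 0.80    # survival to end of study, as reported

# Pooled standard error under the null hypothesis of equal survival
p_pool = (p_ame * n_ame + p_no_ame * n_no_ame) / n_total
se = sqrt(p_pool * (1 - p_pool) * (1 / n_ame + 1 / n_no_ame))

z = (p_no_ame - p_ame) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.1f}")  # z is about 25; p underflows to essentially zero
```

Of course, statistical significance isn't really the issue here; the question is whether the events were actually caused by medical care.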

The authors concluded that AMEs should be avoided because of the excess mortality and costs.

It is hard to argue with that, but as is true of many papers like this, the terminology changed in the body of the paper. An article about it quoted the lead author, a gerontologist, as saying, "These injuries are caused by the medical care or management rather than any underlying disease." Thus, AMEs became "injuries."

In the methods section, the authors list all of the ICD-9-CM codes included in the study.

Some of the codes are clearly preventable medical errors such as 997.02 Iatrogenic cerebrovascular infarction or hemorrhage; 998.2 Accidental puncture or laceration during a procedure, not elsewhere classified; 998.4 Foreign body accidentally left during a procedure; 998.7 Acute reaction to foreign substance accidentally left during a procedure; and the E870–E876 "misadventure" codes.

However, many may or may not be preventable, like 997.1 Cardiac complications, not elsewhere classified; 997.31 Ventilator-associated pneumonia; 997.41 Retained cholelithiasis following cholecystectomy; 998.00 Postoperative shock, unspecified; 998.30 Disruption of wound, unspecified; 998.5 Postoperative infection, not elsewhere classified; and 998.83 Non-healing surgical wound.

A series of codes, E930–E949, comprises adverse drug events, most of which are not preventable.

The numbers of patients with each specific complication were not provided.

This did not stop medical news media from proclaiming more doom and gloom.

HealthDay: "1 in 5 Elderly U.S. Patients Injured by Medical Care"
WebMD: "1 in 5 elderly patients injured by medical care"
Today Topics: "Medical injuries affect almost one in five older adults in receipt of Medicare"

It is impossible to conclude from the data that all of these AMEs were caused by "medical care or management." You can quibble about whether some complications are preventable or not, but the percentage of preventable AMEs is far less than 19%.

And how many more deaths would have occurred had the patients not been subjected to "medical care or management"?

I wish people would stop writing these kinds of papers and ease off on the sensationalist reporting of them. But I guess if they did, I would have less to write about.

Monday, June 23, 2014

Do operating room checklists improve outcomes?

The other day Atul Gawande tweeted the following:



I am not against checklists. When I was a surgical chairman, I implemented and used one in both the operating room and the ICU. They do not add costs and may be helpful.

However, the randomized trial that Gawande referred to does not necessarily settle the issue about whether checklists really do reduce complications and deaths.

The paper, published online in Annals of Surgery [full text here], looked at 5,295 operations done in two Norwegian hospitals. The intervention was a 20-item checklist built around three critical steps: the sign-in before anesthesia, the timeout before the operation began, and the sign-out before the surgeon left the operating room. Patients were randomized to control or checklist using a stepped wedge cluster design.

Complications occurred in 19.9% of the control patients and 11.5% of those who got the checklist, a significant difference with p < 0.001.

A look at Table 2 finds that of 27 complications or groups of complications, 14 occurred in significantly fewer patients in the checklist group.

Of those 14, a few, such as cardiac or mechanical implant complications, might plausibly have been prevented by the checklist.

For most of the others, the relationship between the use of a checklist and a post-operative complication is tenuous. How could a checklist possibly prevent technical complications like bleeding requiring transfusion, surgical wound dehiscence, and unintended punctures or lacerations?

Here are a few more of the complications that occurred significantly less frequently in the checklist cohort: urinary tract infection, pneumonia, asthma, pleural effusion, dyspnea, and the nebulous categories of "complications after surgical and medical procedures" and "complications to surgery not classified."

What item on a checklist prevents asthma, UTI, or anything else on that list?

Embolism, sepsis, and surgical site infection, three complications one would expect a checklist to impact because of reminders to give prophylactic antibiotics and anticoagulation, did not occur at significantly lower rates in the checklist group.

Even the cardiac complication category is open to question because none of its five subcategories (including cardiac arrest, arrhythmia, congestive heart failure, and acute myocardial infarction) differed significantly between the two groups. Only when the five were combined did statistical significance emerge.
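This is worth demonstrating. Here is a sketch with hypothetical counts (the trial did not break these out) showing how subcategories that individually miss significance can clear the bar once pooled:

```python
# Hypothetical counts, illustrating how pooling subcategories can
# manufacture significance. Each tuple is (control events, checklist
# events) out of an assumed 2,500 patients per arm.
from scipy.stats import fisher_exact

N = 2500
subcategories = [(10, 4)] * 5  # five similar subcategories, all assumed

for control, checklist in subcategories:
    _, p = fisher_exact([[control, N - control], [checklist, N - checklist]])
    print(f"{control} vs {checklist}: p = {p:.2f}")  # each one misses p < 0.05

# Pool the five subcategories (treating events as independent, a simplification)
control_total = sum(c for c, _ in subcategories)     # 50
checklist_total = sum(k for _, k in subcategories)   # 20
_, p = fisher_exact([[control_total, N - control_total],
                     [checklist_total, N - checklist_total]])
print(f"pooled {control_total} vs {checklist_total}: p = {p:.4f}")  # now "significant"
```

Combining categories until significance appears is a form of multiple-comparison mischief, even when done innocently.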

In the 300-bed community hospital, checklist use was associated with a significantly lower mortality rate than non-use, 0.2% vs. 1.9% respectively (p = 0.02), but no mortality difference was seen in the 1100-bed tertiary care hospital.

The tertiary care institution enrolled 3,811 patients, while the 300-bed hospital contributed 1,083. Had more patients been enrolled at the smaller hospital, the difference might have disappeared, a manifestation of regression to the mean.

Despite the heightened vigilance associated with an ongoing research project, compliance with checklist use was only 73.4%.

Before you go off on me, I will remind you that I do not oppose checklists. Most things we do in medicine are not based on Class 1 evidence.

Just don't tell me that checklists have been proven to reduce complication rates or save lives.

Thursday, June 19, 2014

What's in a name? Med schools sell to the highest bidder

The 12th oldest medical school in the United States, Jefferson Medical College, has just changed its name to the Sidney Kimmel Medical College after Mr. Kimmel donated $110 million.

Declining tuition revenues (ha!) and the reluctance to refuse such a large donation were important factors in the renaming.

When I checked, I found that several other allopathic and osteopathic medical schools are named after benefactors. The oldest medical school in the country, at the University of Pennsylvania, changed its name to the Perelman School of Medicine in 2011.

Some of the names are at least related to medicine, like the Frank H. Netter MD School of Medicine at Quinnipiac University in Connecticut; Dr. Netter was a famous medical illustrator and author. There's also the Geisel School of Medicine at Dartmouth, named for Theodor Geisel, better known as Dr. Seuss.

Three years ago, I blogged about the increasingly forgettable names being attached to stadiums such as M & T Bank Stadium [aka Ravens Stadium at Camden Yards, PSINet Stadium, Ravens Stadium], O.com Coliseum [aka overstock.com Coliseum, Network Associates Coliseum, McAfee Coliseum, Oakland-Alameda County Coliseum], and Sun Life Stadium [aka Joe Robbie Stadium, Pro Player Park, Pro Player Stadium, Dolphin Stadium, Dolphins Stadium, and my favorite—Land Shark Stadium.]

How much money would it take to get a school to change its name to The Tostitos School of Medicine or Yahoo Medical College?

Taking a cue from all of this, I am willing to sell the naming rights to the Skeptical Scalpel blog for the right offer. Operators are standing by.

Tuesday, June 17, 2014

A non-US citizen international student's chances of matching in surgery

On my "Ask Skeptical Scalpel blog, an non-US citizen going to medical school in Egypt asks what his chances of obtaining a categorical general surgery residency position are, and I try to answer. here's the link.

Friday, June 13, 2014

Uncertain diagnosis or CT scan radiation? Which would you choose?

It is so nice to be right.

To summarize what I wrote almost 4 years ago, here and here—based on my experience, patients and families will accept the theoretical risk of a future cancer if it means they'll get an accurate diagnosis.

A recent study validates that opinion.

MedPage Today reports that before receiving any recommendation for CT scanning, 742 parents of children who presented with head injuries were surveyed by researchers from Toronto's Hospital for Sick Children.

Parents, almost half of whom already knew that CT scanning might cause a cancer to develop in the future, were told of the radiation risks of CT scanning in detail. The authors found that although parents' willingness to proceed with the CT scan fell from 90% before the explanation of risk to 70% after the briefing, at crunch time only 42 (6%) of them refused to let their child be scanned.

And of the 42 who initially refused, 8 eventually went ahead with the scan after a physician recommended it.

So to put it another way, even after they were fully informed of the potential risk of CT scan radiation to their child (lifetime risk of cancer is about 1 in 10,000, according to the authors), nearly all parents opted for the scan.

Also of note are the following:

The median age of the children was 4.
12% of the children in the study had undergone at least one previous CT scan.
97% of the children were diagnosed with only concussions or mild head injuries.

An article in Scientific American puts some of the radiation risk into perspective. It is long, but worth reading as it explains how risk has been calculated, the best guess as to the true level of risk, and what radiologists are doing to lower the radiation exposure associated with CT scanning.

According to that article, "Any one person in the U.S. has a 20 percent chance of dying from cancer [of any type]. Therefore, a single CT scan increases the average patient's risk of developing a fatal tumor from 20 to 20.05 percent."
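The arithmetic is worth spelling out. Here is a quick calculation using the figures quoted above; note that the Scientific American estimate and the study authors' 1-in-10,000 figure describe different outcomes, so they are computed separately:

```python
# Putting the quoted radiation-risk figures in perspective.
baseline_fatal_cancer = 0.20   # lifetime risk of dying from cancer (SciAm)
risk_after_one_ct = 0.2005     # SciAm: one scan raises it to 20.05%

absolute_increase = risk_after_one_ct - baseline_fatal_cancer
relative_increase = absolute_increase / baseline_fatal_cancer
print(f"Absolute increase: {absolute_increase * 100:.2f} percentage points")  # 0.05
print(f"Relative increase: {relative_increase:.2%}")                          # 0.25%

# The study authors' separate estimate for the children: lifetime cancer
# risk of about 1 in 10,000 per scan.
per_scan_cancer_risk = 1 / 10_000
print(f"Authors' figure: {per_scan_cancer_risk:.2%} per scan")                # 0.01%
```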

No one ever comments on weighing the harms that a timely CT scan diagnosis might prevent against the radiation risk.

CT scans should be ordered judiciously. The area scanned and the amount of radiation should be limited as much as possible.

But if you need a CT scan to help diagnose your problem, go ahead and have it.

Bottom line: when it comes to accuracy in diagnosis vs. radiation-induced cancer risk, parents overwhelmingly chose the former.

Thursday, June 12, 2014

A different take on Medicare's release of doctor payment data



Journalists have had a good time with the Medicare data on payments to doctors. The most recent exposé is headlined "Taxpayers face big Medicare tab for unusual doctor billings" by the Wall Street Journal. Because of a paywall, most people did not have a chance to read the article.

It recounted several anecdotes about physicians who received huge amounts of money for procedures of dubious worth. I will summarize two of them.

In 2012, an internist in Los Angeles was paid close to $2.3 million for a procedure known as "enhanced external counterpulsation," or EECP, which is supposed to ameliorate angina in patients who are not surgical candidates.

Although not a cardiologist, he apparently used EECP on 615 patients. At the Cleveland Clinic, whose chairman of cardiology says the procedure should rarely be used, the procedure was performed on only 6 patients in a year—that's 6 patients total by a staff of 141 cardiologists.

A Florida dermatologist received $2.41 million from Medicare in 2012 for 15,610 radiation treatments for melanoma in 94 patients, an average of 166 treatments per patient. The usual number is 20 to 35 treatments. The doctor said he billed for each lesion separately and treated each one about 40 times.

A radiation oncologist who was interviewed questioned the appropriateness of the machine the dermatologist was using and said, "When a patient has several lesions, they commonly get treated simultaneously and are billed for as a single treatment."

That is the way Medicare handles most multiple procedures. At best you might get away with billing a partial amount for an additional treatment.

Any physician who has spent time in the private practice of any medical specialty that involves the treatment of elderly patients can tell you that Medicare will nickel and dime you to death over a minor dispute about an evaluation and management code.

Medicare is also notorious for holding back money due to physicians who are just trying to make a living. A classic ploy is to request a copy of the dictated operative note for a simple procedure. This will add 4 to 6 weeks to the eventual cutting of a check.

They routinely perform unannounced on-site audits of doctors' offices looking for discrepancies in documentation. I once experienced one myself and luckily was not cited or fined.

Here are some questions that I haven't seen any journalist ask.

Why does the Wall Street Journal have to point out such flagrant outliers?
What does the Wall Street Journal know about detecting these practices that Medicare could not figure out for itself?
How can Medicare continue to pay top dollar for questionable treatments and billing practices?
Why doesn't Medicare do something simple, like automatically reviewing any practice that receives more than, say, $500,000 in a single year?
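On that last question, the screen would not need to be sophisticated. A minimal sketch, with made-up payment data and the threshold taken from the question above:

```python
# A sketch of the simple screen suggested above: flag any provider whose
# annual Medicare payments exceed a fixed threshold or sit far outside
# the distribution. All names and figures are hypothetical.
from statistics import mean, stdev

THRESHOLD = 500_000  # dollars, the figure suggested above

payments = {  # provider -> total annual Medicare payments (made-up data)
    "provider_a": 180_000,
    "provider_b": 220_000,
    "provider_c": 2_300_000,  # an outlier like the EECP internist
    "provider_d": 195_000,
}

values = list(payments.values())
mu, sigma = mean(values), stdev(values)

for provider, total in payments.items():
    z = (total - mu) / sigma
    if total > THRESHOLD or z > 2:
        print(f"flag for review: {provider} (${total:,}, z = {z:.1f})")
```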

Inquiring minds want to know.

What's your opinion?


Saturday, June 7, 2014

More about why live tweeting conferences is bad

Yesterday, I blogged about why live tweeting from conferences is not worth it. (Link here.)

Such tweets are difficult to comprehend, lack context without a detailed explanation, and might be detrimental to the persons who are tweeting because they aren't paying attention to the lecture.

I have never had such a response to a blog post before. Live tweeters were highly indignant that I should question what they are certain is the greatest marvel in medical education since the invention of PowerPoint.

I said it took a minute to compose and type a tweet, and many claimed they can do it much faster. Others also said that they use their tweets as notes for later reference.

A Twitter colleague, Dr. John Mandrola (@drjohnm), unknowingly stepped into the conversation by posting a link to an article [link fixed 6/8/14] in The New Yorker about college teachers proposing to ban laptops in their classrooms.

It referenced a 2003 study from Cornell "wherein half of a class was allowed unfettered access to their computers during a lecture while the other half was asked to keep their laptops closed."

"The experiment showed that, regardless of the kind or duration of the computer use, the disconnected students performed better on a post-lecture quiz. The message of the study aligns pretty well with the evidence that multitasking degrades task performance across the board."

A New York Times piece about handwriting said, "For adults, typing may be a fast and efficient alternative to longhand, but that very efficiency may diminish our ability to process new information."

It cited a study showing "that in both laboratory settings and real-world classrooms, students learn better when they take notes by hand than when they type on a keyboard. Contrary to earlier studies attributing the difference to the distracting effects of computers, the new research suggests that writing by hand allows the student to process a lecture’s contents and reframe it—a process of reflection and manipulation that can lead to better understanding and memory encoding."

The New Yorker article concluded, "Institutions should certainly enable faculty to experiment with new technology, but should also approach all potential classroom intruders with a healthy dose of skepticism, and resist the impulse to always implement the new, trendy thing out of our fear of being left behind." [Emphasis mine.]

Friday, June 6, 2014

The elephant in the room—Live tweeting conferences

Live tweeting from conferences has become very popular, but I'm not sure why. The biggest problem is this: a point that takes a speaker more than 140 characters to make is difficult to capture lucidly in a tweet.

The tweets tend to be filled with obscure abbreviations and references to previous tweets that may seem quite clear to the tweeter but not the tweetee. Some also post out-of-focus photos of the dreaded PowerPoint bullet slides taken from acute angles. Lacking context or explanation, they tend to be useless.

What about the one doing the live tweeting? How can you fire off 15 or 20 tweets in an hour and continue to pay attention to what the speaker is saying?

Please don't tell me what Symplur or some other data-disgorging company says a meeting's impressions were. Here's an example from the recently concluded meeting of the American Society of Clinical Oncology (#ASCO14), which ran from May 30 through June 4.

There were 38,896 tweets generated by 7,284 participants. Let's very conservatively estimate that it took each tweeter 1 minute to compose a tweet, type it into a mobile device, and send it. That is 648 hours' worth of tweets. The leading tweeter at ASCO produced 975 tweets, or 16 hours' worth.
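The arithmetic, for anyone who wants to check it (1 minute per tweet is my own conservative assumption):

```python
# Back-of-the-envelope time cost of the #ASCO14 tweet stream.
total_tweets = 38_896
top_tweeter = 975
minutes_per_tweet = 1  # conservative assumption

print(f"All tweeters: {total_tweets * minutes_per_tweet / 60:.0f} hours")  # ~648
print(f"Top tweeter:  {top_tweeter * minutes_per_tweet / 60:.0f} hours")   # ~16
```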

You might say, "Hey, there were 134,569,479 impressions. That number represents over 40% of the population of the US." But hold on. Impressions are the number of tweets delivered to a follower's Twitter feed and potentially available to be viewed. There is no way to determine if anyone has actually read a specific tweet.

Other than counting retweets or replies, which apparently is not done by Symplur, there is no way to measure engagement. And even a retweet does not guarantee that a tweet was read. [See my previous post on this subject.] Favoriting (yes, that's a Twitter verb) is not a countable Twitter metric, and even if it were, it's not a surrogate for reading.

Most of the time, I solve the problem by temporarily unfollowing someone who is live tweeting a conference.

What do you think about live tweeting of conferences?

6/7/14 ADDENDUM: 


More about why live tweeting conferences is bad. Link here.

Tuesday, June 3, 2014

Is ultrasonography overrated? A radiologist thinks so

In response to an article in the New England Journal of Medicine that discussed whether bedside ultrasonography (US) should be taught to medical students, radiologist Dr. Saurabh Jha recommended that clinicians do a proper history and physical instead of point-of-care ultrasound.

His post appeared on the KevinMD website.

As if a radiologist advising doctors to do an H&P wasn't shocking enough, Dr. Jha then confessed that he thinks "ultrasound images look like a satellite picture of a snow blizzard."

He worried that rather than finding hidden pathology, indiscriminate use of US by inexperienced physicians would simply lead to more and more testing.

Even seasoned radiologists tend to overcall abnormalities on US, said Dr. Jha. This leads to increased use of other imaging studies, most of which turn out to be normal. Using US to avoid the risks of ionizing radiation often results in patients having CT scans anyway.

In the comments section of the post, Dr. Jha emphasized that he was talking about situations where the pretest probability of finding something wrong is very low. Directed US based on clinical indications is obviously of value.
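A little Bayes makes his point concrete. The sensitivity and specificity below are hypothetical, chosen only to show the arithmetic:

```python
# Why low pretest probability breeds false positives: a minimal Bayes
# sketch with hypothetical test characteristics for a bedside scan.
def ppv(pretest, sensitivity, specificity):
    """Probability that a positive finding is a true positive."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Probing everyone (pretest probability 1%): most "findings" are false alarms.
print(f"{ppv(0.01, 0.90, 0.90):.0%}")  # ~8% of positives are real

# Directed use on a clinically suspicious patient (pretest probability 30%).
print(f"{ppv(0.30, 0.90, 0.90):.0%}")  # ~79% of positives are real
```

At a 1% pretest probability, more than 90% of positive findings are false alarms, and each one is an invitation to further imaging.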

Emergency medicine physicians who commented listed several instances in which bedside US can be useful, such as identifying pericardial effusions and fluid or blood in the abdomen of trauma patients.

[Photo via Dr. Ryan Radecki (@emlitofnote)]

Ultrasound is clearly the test of choice for right upper quadrant abdominal pain. There is nothing better for identifying gallstones, but thickening of the gallbladder wall and fluid surrounding the gallbladder are best seen with US done in the radiology department.

Probing all body cavities with a transducer for no specific indications is another matter.

Is there still a role for a good history and physical examination in modern medicine? Yes.

Is US a useful test? Yes, in the proper context, it can be very helpful.

Should every medical student be taught how to do bedside US? I don't think so. A course is just the beginning. Learning how to perform US requires a lot of repetitions. Many medical specialists will never use it.

I agree with Dr. Jha that the time should be used to "Teach them to organize their thoughts coherently."

What's your opinion?

Note: These folks also tweeted the photo: @EM_Educator @MDaware @EBMGoneWild @choo_ek