Wednesday, September 17, 2014

Can Google Glass make you a better surgeon?

Advocates of Google Glass in surgery are apparently desperate to find some use for the device.

An article headlined "Google Glass makes doctors better surgeons, Stanford study shows" concluded that the study offered "compelling preliminary evidence that the head-mounted display can be used in a clinical setting to enhance situational awareness and patient safety."

Using an app capable of displaying vital signs on Google Glass in real time, 7 surgical residents recognized critical desaturation in simulated patients having procedures under conscious sedation 8.8 seconds faster than a control group of 7 residents relying on standard monitors. Glass-wearing residents also became aware of hypotension 10.5 seconds before the control group.

Not mentioned in the article, but present in a linked abstract of the paper (which has not yet been submitted for peer review), was this pearl: neither difference was statistically significant.

This evidence is not that convincing. Even if the differences had been statistically significant, a gain of a few seconds is surely not clinically important.
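For intuition, here is a minimal sketch of why a difference of several seconds between groups of 7 is hard to distinguish from noise. Every number below is hypothetical (the abstract reports only the mean differences, not the variability); the point is simply that with n = 7 per group, ordinary spread swamps an 8.8-second gap.

```python
# Hypothetical illustration only: the study did not report standard
# deviations, so the spread of ~15 s below is invented for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated detection times (seconds) for two groups of 7 residents
glass = rng.normal(loc=30.0, scale=15.0, size=7)     # Glass group
control = rng.normal(loc=38.8, scale=15.0, size=7)   # standard monitors

# Welch's t-test (does not assume equal variances)
t, p = stats.ttest_ind(glass, control, equal_var=False)
print(f"mean difference = {control.mean() - glass.mean():.1f} s, p = {p:.2f}")
# With n = 7 per group and this much spread, p typically exceeds 0.05.
```

With a standard deviation anywhere near this size, a study of 14 residents has very little power to detect an 8.8-second difference.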

How seeing vital signs on Google Glass is better than relying on the simple alarms that are built into every monitor is not clear. Either way, you must stop the operation and look up to see the vital signs.

In a brief video accompanying the article, a surgeon can be seen rather clumsily activating and resetting the app on his Google Glass. The time required to perform these maneuvers apparently was not discussed.

The article, probably written directly from a press release, took a comedic turn with this sentence: "One test demanded that the resident perform a bronchoscopy, in which the surgeon makes an incision in the patient’s throat to access a blocked airway." But bronchoscopy does not involve making an incision in the throat or anywhere else.

If you would like to hear a different side of the Google Glass story, check out this video review from GeekBeatTV titled "Google Glass is the worst product of all time." You can skip ahead to the 3:45 mark to get past the woes of wearing prescription glasses with Google Glass and hear about the poor battery life, the balky commands, the system crashes, and more.

Tuesday, September 16, 2014

Aortic dissection leads to man's death in the ED: His wife's perspective

A woman wrote to me about the day her husband died. I have edited her email for length and clarity and changed some insignificant details to protect her anonymity as she requested.

Joe passed away outside in the parking lot while they were getting on a helicopter for transport to a hospital equipped to do his surgery.

He had presented to the ED in terrible pain with lots of thrashing and writhing. His right hand was very cold. His right arm tingled to the point of hurting bad. The vision in his right eye was cloudy, and his hearing was muffled on the right. This was in addition to being very pale and diaphoretic upon admission. This is when I felt a dissecting aorta should have been suspected.

I don’t recall the vitals in the beginning, but they were changing and his blood pressure was dropping very fast. As soon as they finished the EKG, within the first 5 minutes of the visit, I asked the doctor about John Ritter's death [the actor died of a dissecting thoracic aneurysm in 2003]. First I asked if he could check for the condition that caused John Ritter's death. I called it an abdominal aortic aneurysm. The doc corrected me and said that it wasn’t an AAA; it was a dissected aorta. I said OK, then check for that. This was 1 hour before the CT scan that led to his diagnosis.

Thursday, September 11, 2014

More ratings—this time it's residency programs

Can you really decide which surgical residency program is right for you using Doximity's Residency Navigator?

I don't think so, and here's why.

The rankings of residency programs were obtained by surveying surgeon members of Doximity, who were asked to name the top five programs for clinical surgery training. When the survey was announced in June, I predicted that most respondents would probably overlook the word "clinical" and focus on the usual famous academic institutions.

I also pointed out that anyone not intimately familiar with a program would be unable to judge whether it is good or not and suggested that reputation would be the main driver of results.

In fact, that is exactly what happened. Of the top 40 programs listed, all are based at university hospitals, as are 66 of the top 70. Back in June, I speculated about the top five programs and got the first two correct but in the wrong order.

A 2012 survey of surgical residents with over 4200 respondents (an 80% response rate) found that community hospital trainees were significantly more satisfied with their operative experience and less likely to worry about practicing independently after graduation. Wouldn't you then expect a few community hospital programs to be among the top 40 for clinical surgery training?

Proof that the survey's findings are not reliable: every one of the 253 surgical residency programs in the country was mentioned by at least one respondent. This included a program that has been terminated by the Residency Review Committee for Surgery. At least it appears near the bottom of the list.

The number of voters who cited the lower-ranking programs must have been very small, meaning the difference between the 200th and 240th ranks is probably not statistically significant.
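Doximity has not published the vote counts, so here is a hypothetical sketch of how unstable the tail of such a ranking is. Every number in it is invented: it assumes a Zipf-like popularity distribution across 253 programs and simply shows that rerunning the same survey reshuffles the low-ranked programs.

```python
# Hypothetical sketch: how stable are ranks when low-ranked programs
# get only a handful of votes each? All vote totals here are invented.
import numpy as np

rng = np.random.default_rng(1)
n_programs = 253

# Invented popularity: a few famous programs get most mentions,
# the long tail gets very few (roughly Zipf-like weights).
weights = 1.0 / np.arange(1, n_programs + 1)
probs = weights / weights.sum()

def one_survey(n_ballots=4000, picks_per_ballot=5):
    """Simulate voters each naming 5 programs; return programs ranked by votes."""
    votes = rng.multinomial(n_ballots * picks_per_ballot, probs)
    return np.argsort(-votes)  # indices sorted from most to fewest votes

rank_a = one_survey()
rank_b = one_survey()

# Where does the 200th-ranked program from one run land in a second run?
prog = rank_a[199]
print("rank in survey A: 200, rank in survey B:", np.where(rank_b == prog)[0][0] + 1)
```

Run it a few times and the 200th-place program routinely moves by dozens of spots, while the top of the list barely budges.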

Some programs that were rated are so new that very few or no residents have graduated yet. How could anyone know if they are turning out competent clinical surgeons?

Board passage rates, which are available online, were omitted for some programs, and it was not made clear that they represent the percentage of residents who passed both parts of the boards on the first attempt only.

The percentile rankings of alumni peer-reviewed articles, grants, and clinical trials are displayed prominently. What do those data have to do with the research question—which residency programs "offer the best clinical training"?

So what's the bottom line?

You can put the Doximity Residency Navigator in with the other misleading ratings of hospitals and doctors. Applicants considering surgical residencies should not rely on it for guidance.

It has warmed the hearts of faculty and residents at highly rated programs, but I wonder how the OR lounge discussions are going at places whose programs ranked lower than expected.


Tuesday, September 9, 2014

From the trenches: More about grit

The following was compiled from two comments on my recent post about grit written by a doctor who calls himself "Geronimo." It is reproduced with permission.

Grit cannot be assessed by a survey. I wholly agree. As a military physician, my firmly founded opinion is that grit is essential to the practice of medicine. Grit is the elusive characteristic that carries the clinician through the challenges that exceed ordinary capabilities. You cite a paper that argues for surgical training to borrow aspects of SEAL training. I applaud any measure that would allow senior faculty and program directors to unilaterally shape their residents’ training, whether or not it bears any resemblance to the rigors of BUD/S [Basic Underwater Demolition/SEAL training].

The 2011 loss of 30-hour call for medical students and interns was a fatal blow to residency training, in my estimation. I count myself fortunate for having had a 30-hour call internship before embarking on my operational career. While downrange, it is not at all uncommon to be woken at inconvenient hours of the night to tend to the wounds of war. If you don’t know how you function cognitively, physically, psychologically, and emotionally while sleep deprived, exhausted, hungry, cold, and pissed off, you’re behind the curve. While it isn’t any fun to work in such a state, or to work with people so challenged, it is decidedly less fun to be a patient expiring for want of any medical provider, let alone a tired one. American medicine used to be in such a place in the not-so-recent past, to hear the story told by my forebears.

Monday, September 8, 2014

Chance can turn a surgeon into a killer

Risk-adjusted 30- to 90-day outcome data for selected types of operations done by specific surgeons and hospitals are now being publicly posted online by England's National Health Service.

According to the site, "Any hospital or consultant [attending surgeon in the UK] identified as an outlier will be investigated and action taken to improve data quality and/or patient care."

After cardiac surgery outcomes data were made public in New York, some interesting unexpected consequences were noted.

Surgeons and hospitals resorted to "gaming the system" by declining to operate on high-risk patients and by tinkering with patient charts to make those they did operate on seem sicker. This can be done by scouring the charts for all comorbidities and making sure none are overlooked when they are coded. An article from New York Magazine explains it in more detail.

Interpreting outcomes data can be tricky.

In a post three years ago about a report that nine Maryland hospitals had higher-than-average complication rates, I pointed out that whenever you have averages, some hospitals are going to be worse than average unless all hospitals perform exactly the same way or, like medical students, are all above average.

A much more sophisticated way of looking at this subject appeared in a fascinating 2010 BBC News piece by Michael Blastland, who is the Nate Silver of England [or maybe Nate Silver is the Michael Blastland of the US], called "Can chance make you a killer?"

Blastland set up a statistical chance calculator for a hypothetical set of 100 hospitals or 100 surgeons performing 100 operations each. The model assumes that every patient has the same chance of dying and that every surgeon is equally competent. The standard is that a mortality rate more than 60% worse than the government-set norm is unacceptable for any hospital or surgeon.

You are assigned one hospital. Using a slider, you choose an operative mortality rate anywhere from 1% to 15%. Run the simulation several times at each mortality rate, and you will notice that the number of unacceptably performing hospitals or surgeons changes randomly each time; your hospital may land in the underperforming group strictly by chance.
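Blastland's calculator is easy to approximate. The sketch below is my own reconstruction of the setup he describes, not his code: 100 identical hospitals perform 100 operations each at the same true mortality rate, and a hospital is flagged if its death count exceeds the expected number by more than 60%.

```python
# Rough reconstruction of Blastland's chance calculator (not his actual code):
# 100 identical hospitals, 100 operations each, identical true mortality rate.
import numpy as np

rng = np.random.default_rng(42)

def flagged_by_chance(mortality=0.07, n_hospitals=100, n_ops=100, trials=1000):
    """Average number of hospitals whose death count exceeds the norm by >60%."""
    threshold = 1.6 * mortality * n_ops   # 60% worse than the expected deaths
    deaths = rng.binomial(n_ops, mortality, size=(trials, n_hospitals))
    flagged = (deaths > threshold).sum(axis=1)
    return flagged.mean()

for rate in (0.01, 0.05, 0.10, 0.15):
    print(f"true mortality {rate:.0%}: ~{flagged_by_chance(rate):.1f} "
          f"of 100 hospitals flagged per round, all by chance")
```

Even though every hospital is identical by construction, some are flagged in every round, and at low mortality rates the crude 60% threshold flags even more of them, simply because small expected death counts are noisy.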

The whole concept is explained in more detail on the site. I encourage you to try it for yourself. The link is here.

So it may be difficult for the NHS to separate the true outliers from the unlucky surgeons who happened to fall outside the established norms.

What do you think about this?

Wednesday, September 3, 2014

Health Care and the $20,000 Bruise: A different take

Twitter is buzzing about yet another medical billing horror story. This one appeared in the Wall Street Journal and was written by Eric Michael David, who holds MD, PhD, and JD degrees and is an officer at a biotech company.

Several days after his three-year-old son fell off a scooter, he saw a large, swollen bruise on the boy's head. Other than the bruise, no other abnormalities were mentioned. He took the boy to "one of the top pediatric emergency rooms in the country" to have a CT scan done. It showed "a small, 11-day-old bleed inside his head, which was healing, and insignificant."

Dr. David received a bill for $20,000, of which $17,000 had been paid by his insurance company. He was responsible for the remaining $3,000.

He noted a $10,000 charge for a trauma team activation, which he said never happened. After a lengthy series of exchanges in which Dr. David had to prove to the hospital's billing department that a trauma team activation was unwarranted and not permitted by certain regulations, he was able to have the charge rescinded.

The essay went on for some 1200 words listing the steps that he went through. He correctly described what a mess American healthcare delivery is and why as long as overuse and upcoding are rewarded, the Affordable Care Act will not fix it.

Dr. David was right to contest the $10,000 charge for a trauma team activation that wasn't indicated and didn't even occur.

What he didn't address was this.

Why would a doctor who said that he had "served on trauma teams in two of the busiest hospitals in New York City" feel the need to take his apparently asymptomatic son with an 11-day-old injury to an emergency room for a CT scan?

Doesn't this imply overuse of a different type?

Secondary questions:

Did anyone bring up the issue of radiation from the CT scan?
Did the docs in the ED think a CT scan was necessary?
"Inside his head" is a rather odd phrase. Does it mean intracranial? Intracerebral?
Was "one of the top pediatric emergency rooms in the country" the only option or could this asymptomatic boy have been seen in a doctor's office?
Why is the charge for a trauma team activation $10,000?

Improving the M&M conference

"Surgical pathology works more than 80 hours per week, has no regard for your gender or your life situation, and can be devious and sneaky in its presentation."

The following is a guest post by Dr. Leo Gordon, a surgeon from Los Angeles.

A recent paper in Annals of Surgery found that 24% of graduating surgical residents "were unable to recognize early signs of complications." One possible solution is a redesign of the morbidity and mortality (M&M) conference.

I have spent a significant part of my professional life in an effort—at this point it is a crusade—to change the nature of the M&M conference. For 11 years, I moderated 495 conferences, 1485 presentations, and 30 written examinations based on the error and complication-reducing points raised during the discussions.

If properly implemented, a redesigned M&M conference can satisfy the ACGME core competencies, the suggestions of the Institute of Medicine, and the public's demand for a reduction in medical errors.

What I have dubbed the "M&M Matrix" converts the weekly conference into a vibrant educational effort and creates a constantly updated patient safety curriculum for the resident and attending staff.

If the M&M Matrix is such a valuable idea, why hasn’t it been widely adopted?

Here are the reasons: