Twenty years ago, as medical director of a hospital, I took a call from a woman who wanted to know how many abdominal aortic aneurysms (AAAs) her husband’s surgeon had repaired the prior year at our facility. She asked a good question, because the surgery is difficult and fraught with risk, even when done electively, which in this instance it would be. Too often, it is done after the aorta ruptures. My cousin’s husband died of a ruptured AAA; I have dealt with the issue emergently, and it is difficult to control the bleeding while simultaneously repairing the vessel.
I didn’t know the answer. Therefore, I had no idea of her husband’s chances of survival, how long he would likely be hospitalized, or what his condition would be six months later. We didn’t track that. It took me four years to get the hospital to track outcomes from cardiac surgery, after I exploded one night in the ICU, saying that I had been consulted 26 times in 270 open heart cases in one year. Consulting a neurologist after a heart case usually doesn’t bode well.
I mention this because AAA repair is one of the outcomes Leapfrog uses in determining how well a hospital performs. So is aortic valve replacement. The Tucson hospitals that participated in Leapfrog scored no better than “C”; one scored “D”. Some of these hospitals had marketed themselves as being “one of America’s top 100 hospitals.” It seemed they were not quite as good as they thought they were.
Leapfrog tracked drug errors, too, and no hospital in Tucson scored better than “C”. On 2 May 2002, I met with administrators at University Hospital in Tucson to outline my reporting program to reduce medical errors. A year earlier, I had met with their cardiac surgery program to help track outcomes better. I can’t believe I was so naive as to think that I, who had practiced, been an administrator, earned a master’s in statistics, and two months earlier written an op-ed on the error reporting system we needed in medicine, could take on Big Hospitals. Capitals mine.
Both groups wanted to know, in an unfriendly tone, who I was. Being from the same city was a minus; had I been from outside, I might have had more credibility. It would have helped had I been good-looking, exuded charisma, and shown glossy paper with colorful bar graphs, rather than offering sound ideas and a quiet demeanor.
Needless to say, the cardiac surgery program wasn’t interested, and I was assured, that second of May, that University Hospital had “one of the best safety records in the country.” They gave me no data. They wanted to know what software I would use. I didn’t need software; I needed reports of errors in order to understand them better. Unfortunately, computers and charisma mattered more to them.
Leapfrog was initiated by a group who had the smarts, the looks, the networking ability, and the leadership skills I lacked. My ideas were ahead of theirs. In 1974, as an intern, I was counting outcomes in medicine. In the mid-1980s, I selected my surgeon for carotid surgery based upon his outcomes. I raised concerns about our cardiac surgery program in 1990. I wasn’t surprised that hospitals were graded “C.”
Every member of my immediate, now small, family has suffered from a medical error. People make mistakes. I accept that. People should learn from them, too, which they often don’t. For years, we had lousy data and lousy tracking systems. No, we had no data and no tracking systems. We hadn’t a clue, and we let Big Medicine, called the Joint Commission, dictate what hospital quality was. I met with the Joint Commission, too, on 14 August 2001, in Chicago, at my expense. They were quite interested, so they said, but I never heard from them again. No e-mails, no calls, no response to my written requests, nothing.
It takes 30 seconds to compose and send an e-mail saying one is not interested. They weren’t too busy. They were rude, arrogant, and wrong, as wrong as Condoleezza Rice had been eight days earlier and the Bush administration would be four weeks later. The only difference is that far more people die from medical errors every year than died on 9/11. We just don’t know how many. Our estimates are bad, and the margin of error of those estimates is seldom given. That violates a basic rule of statistics.
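To show what “estimate plus margin of error” means in practice, here is a minimal sketch, mine, not anything from the agencies discussed here: it attaches an approximate 95% margin of error to a rate using the normal approximation to the binomial. The 26-of-270 figures are the consult numbers mentioned earlier in this post.

```python
# Minimal sketch (illustration only): report an estimated rate together
# with its margin of error, as basic statistics requires.
import math

def rate_with_margin(events, n, z=1.96):
    """Estimated proportion and its ~95% margin of error
    (normal approximation to the binomial)."""
    p = events / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, margin

# The 26 neurology consults in 270 open heart cases mentioned above:
p, moe = rate_with_margin(26, 270)
print(f"Consult rate: {p:.1%} +/- {moe:.1%}")  # about 9.6% +/- 3.5%
```

The same arithmetic can be done on a pocket calculator; the point is that any published estimate should carry its uncertainty with it.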
We should be tracking outcomes of common procedures in medicine. When I broke my hip in an accident, the surgeon had no idea I had done well until I wrote him. I fractured my fifth metacarpal, wore a cast for four weeks, and was told that alcoholics often took off their casts with no sequelae. When I was told I needed two additional weeks in the cast (which did not change the angulation of my metacarpal), my father’s comment was “that is what your doctor learned to do where he trained.”
“Why,” I asked, “don’t we know whether somebody with a broken metacarpal even needs a cast? Why don’t we know the optimal time? Do metacarpals heal differently depending upon geography?” This is not a rare injury. If we don’t need a cast, wouldn’t that save money and time? For how many other conditions do we not know the results? Perhaps some physicians shouldn’t do certain procedures, like colonoscopy, lumbar puncture, bronchoscopy, or angioplasty. How many of these have you done, doctor? And what happened to the patients?
We physicians like to say we are scientifically trained and that non-physicians don’t have data to show they make a difference. Where are the numbers? What should they be? And what are we doing to achieve those numbers? Too many ideologues argue using rhetorical questions, which I find annoying. A statistician’s job is to ask questions. Ours are good questions, answered with data, uncertainty, and appropriate inferences.
We don’t need high-speed computers to measure outcomes. Pen and paper work just fine, along with a lot of curiosity and an open mind.
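For anyone who would rather keep the tally in a few lines of code than on paper, the same bookkeeping might look like the sketch below. The procedures and outcomes listed are hypothetical examples, not real cases.

```python
# Minimal sketch of an outcome tally (hypothetical cases, illustration only).
from collections import Counter

cases = [
    ("AAA repair", "survived"),
    ("AAA repair", "died"),
    ("aortic valve replacement", "survived"),
    ("aortic valve replacement", "survived"),
]

totals = Counter(proc for proc, _ in cases)
deaths = Counter(proc for proc, outcome in cases if outcome == "died")

for proc, n in totals.items():
    print(f"{proc}: {n} cases, {deaths[proc]} deaths ({deaths[proc] / n:.0%})")
```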
May 30, 2014 at 19:07
I am amazed at what you have presented, since I worked at some of those hospitals and had no idea we were not tracking such significant info. By the way: I noted slight differences between your blog wording and the WordPress article. Who does that editing – just curious.
May 30, 2014 at 19:15
I always read it after I think it should be posted. I invariably find a few mistakes. “Program” was one word that was changed. I don’t have an editor: there are disadvantages to that, since editors can really make something great (A Wise Owl became special because of David Goldblatt, M.D.), but they can also distort your meaning (a Backpacker Magazine editor in 2007 totally ruined what I wanted to say).