The JC was once a consensus-building organization that followed the lead of its subscribers and accomplished a great deal. When Congress conferred "shall deem" authority for Medicare certification upon the JC, it wrested control of the JC from its subscribers and transformed it into a regulatory limb of Congress and CMS. While the JC is undeniably under the direction of honorable people, their efforts have been distorted by the politics of money and power that surround them. It's no secret around hospitals that practitioners see no connection between the Joint Commission process and quality of care. Indeed, every practitioner understands that some of the worst hospitals attain JC accreditation almost effortlessly, while some of the best struggle to maintain their certification.

Historically, JC inspection centered on the physical plant and on policies and procedures. Dreadful care was fine, as long as the policy and procedure manual was up to date and concordant with the most recent guidance. Joint Commission accreditation was, and is, a high-stakes game, and unfavorable decisions are very likely to be contested in court (or under the threat of litigation). Consequently, JC regulatory activity has progressively focused on inspection activities that can withstand such litigation. This trajectory has relentlessly uncoupled Joint Commission inspection and accreditation from even a remote relationship to quality of clinical care. High scores and necessary accreditation have become contingent upon erecting a temporary facade of strict compliance, which frequently obstructs, rather than enhances, care. For the past two decades, the closest thing to a Potemkin village in American culture has been a hospital preparing for a JC survey.
Since clinicians are primarily focused on the care of individual patients, incentives are significantly misaligned between those clinicians and facility administrators. Administrators have the unenviable task of reconciling the absolute need for accreditation (upon which most insurer reimbursement depends) with the uncooperative ambivalence, or even hostility, of clinicians who see the survey as a fire drill that disrupts routines of care and diverts attention from more relevant clinical concerns. This is exacerbated by the JC's frequent use of tin-eared apparatchiks as surveyors.
Over the past decade, other certifying organizations have arisen to compete with the Joint Commission. In response, the JC has attempted to diversify its portfolio, embracing clinical quality and safety as meaningful additions to its mission. The Joint Commission has had limited success with these endeavors. Why? Because, with the mindset of inspectors, they are constitutionally incapable of this transformation. Just as putting on a white coat does not make you a clinician, a declaration of intent does not transform the JC from an inspecting organization into a quality/safety organization. There is an enormous amount to know about both fields, and the Joint Commission has struggled (along with all of health care) even to understand where the state of the art currently resides. There is perhaps no better example of this struggle than the Joint Commission's Sentinel Event policy, which has been in force for more than a decade, has been through multiple revisions, and has generated almost no meaningful reporting. Why? Because, even with its Sentinel Event policy, institutions feel threatened by the JC. Thus, the only events reported are those unlikely to generate any regulatory interest. Almost always, the first hospital discussion of a sentinel event is the one that justifies classifying the particular event as not reportable to the Joint Commission. As a result, in a world filled with sentinel events, the Joint Commission's database has failed to capture most of them. Of all the entities in health care, none is currently better positioned than the Joint Commission to study, analyze, and learn from such events and to distribute the lessons learned widely. This lost opportunity is staggering. Example? Wrong-side surgery.
Wrong-side surgery happens very rarely, but in a country of 300 million people it happens regularly. The JC is determined to change this, and has developed its "Final Verification" protocol to extinguish the problem. The outcome? Absolutely no measurable change. None. Zero. Why? Because the JC didn't go out and study how such failures occur. That would have required a different sort of approach: not a regulatory focus but an investigative one, an approach that requires intellectual resources, specialists, outside expertise, and the insight that mere proscription is insufficient. It could be done, given the will and vision, but it would require a major transformation of JC culture. Wrong-side/site surgery happens because it is very tricky to prevent 100% of the time. Preventing it requires more than a mandate to fill out a form (indeed, only people disconnected from bedside care could imagine that this would be effective).
The irony is this: the Joint Commission has worked hard to develop a comprehensive database that catalogs such sentinel events, but it has not developed an appropriate infrastructure to understand how such events happen; it is not process savvy. This lack of understanding is the root cause of the failure of final verification. Thank goodness that the National Transportation Safety Board (NTSB) does not take a similar approach to aviation accidents. This is important. For thirty years, healthcare quality efforts have been modeled primarily on the manufacturing industry: Deming, Six Sigma, Total Quality Management. That's what the consultants have been selling, and that's what the healthcare businesspeople have been buying. Wrong model. That's a production-oriented measurement philosophy. It's not what the NTSB does; the NTSB's model is based on an intimate understanding of process, and of how it fails in specific instances. The NTSB is deep with human factors engineers, and the first object of attention in any flight mishap is the recorder: the detailed process record stored in a virtually impregnable, beacon-alerting box. The healthcare environment needs something besides "widgets-off-the-line" thinking to help it improve the very difficult business of providing care to patients, and at present that necessary something is not to be found within the Joint Commission, nor does the JC appear to be heading in a promising direction. But, as the saying goes, "you can't beat something with nothing."
Fortunately, the University HealthSystem Consortium is just such a something. As the name implies, UHC is a group of academic health systems collaborating to advance systems of care that make clinical and economic sense, guided by data and ongoing experience. They are attempting to elevate care through careful understanding of the processes involved in the provision of bedside care, and by helping institutions manage the daunting logistical effort required to support that care. In this effort, they are enlisting the help and input of participants at all levels of the care chain. All of this is hard; much harder than it appears to outsiders, who imagine that caregivers should instinctively know what to do. Ask any clinician how they define quality, for instance, and you are likely to get the Potter Stewart answer: "I know it when I see it." (Justice Stewart was referring to pornography, but never mind that...) Although such recognition is valuable, it is not sufficient to drive improvement. The truth is that the state of the art is elusive, variable, and continually evolving in ways that are difficult to perceive or explain. Everyone participating in UHC, a self-selected group, sees the floor and is trying to get further away from it. Among academic practitioners, UHC participation carries far more weight than Joint Commission accreditation.
Quality, like the proverbial elephant, has a radically different feel depending upon which blind man you are and which part of the elephant you are touching. If quality were easy to understand or measure, very little would have been published about it, and no one would have been able to build a healthcare career upon it. For a quick introduction, here are three resources. The original framework for discussions of medical quality was exhaustively laid out by Donabedian in 1966 in the Milbank Quarterly. From a modern population perspective, Berwick et al. have distilled two decades of original work, their own and others', into a nice summary branded as "The Triple Aim." From the individual patient perspective, a particularly pragmatic definition can be found in a document published by the AHRQ. Since what constitutes quality in healthcare remains a matter of discussion and dispute, is it any wonder that quality improvement is a difficult issue?
Quality improvement is at its core a translational activity. It imports ideas from other domains, maps them to the terrain of clinical experience, and tries to find a better, safer, cheaper path through the jungle of clinical medicine. The current state of the science of quality improvement in healthcare is woefully incomplete. Our understanding is more akin to Aristotle's primitive notions of the earth elements than to a systematic science; we have attained neither the insightful scope of the theory of evolution nor the immense power and detail of molecular biology. Most good ideas for quality are doomed to fail, largely for reasons that are obvious only in hindsight (a few prescient practitioners can see these failures prospectively). Progress on quality is arduous, and it does indeed require the kind of deep understanding (or serendipity) required for progress elsewhere in medicine. Perhaps the biggest obstacle to progress in quality is that very few people, inside or outside medicine, truly understand and believe this. Until the actual process of investigating quality and its improvement undergoes fundamental intellectual advancement, our efforts will be inefficient and disappointing.
In the meantime, the inspections will continue until quality improves.
This post was co-authored by Mike O'Connor and Mitch Keamy.