Journal of the New Zealand Medical Association, 13-December-2002, Vol 115 No 1167
How safe are our hospitals?
Mary Seddon and Alan Merry
In this edition of the Journal, Davis et al publish the results of their study into adverse events in New Zealand hospitals.1 The results – 12.9% of admissions were associated with an adverse event – are again guaranteed to produce headlines. When the same group published details of their regional study, the New Zealand Herald ran the story under the headline, “Blunders kill hundreds in hospitals”.2 Is this a valid response? The answer would appear to be yes, and no.
Studies in several healthcare systems have shown that sick patients – admitted to hospitals, receiving multifaceted medical management in complex systems of care – are vulnerable to adverse events and harm. It would have been surprising if the New Zealand study had not also found this. The problem was demonstrated more than ten years ago in a landmark study of 30 000 medical records in New York State (3.7% of hospitalisations were associated with an adverse event),3 and has been confirmed subsequently in Australia (16.6%)4 and Britain (10.8%).5
But do these bald figures represent the whole story? The number of adverse events associated with death in the NZ study (~1 in 320 hospital admissions) is a case in point. Before we become too alarmed, the clinical context and the underlying prognosis of the patients should be taken into account. In an American study, 6% of 111 acute care patient deaths were found to be “definitely or probably preventable.”6 However, after considering the prognosis, and adjusting for the variability and skewness of the observers’ ratings, it was estimated that only 0.5% of the patients whose deaths were even possibly preventable would have lived three months or more in good health if care had been optimal. This represented 1 patient per 10 000 admissions. The preventable proportion of deaths in the New Zealand study is not clear.
It is only the preventable adverse events that a better health/hospital system can improve upon.
Buried at the end of the discussion section of Davis et al’s paper is the statement that “6.3% of admissions in New Zealand public hospitals were associated with adverse events that were both preventable and occurred in hospital.” This figure is roughly half the 12.9% figure quoted in the abstract, and yet in terms of accuracy and quality improvement, this is the more important one. (Davis et al’s second paper, entitled “Adverse events in New Zealand public hospitals II: preventability and clinical context,” was not available at the time of writing this editorial.)
Direct comparisons of adverse event occurrences between countries are unreliable because of differences in study methodologies and the healthcare systems under review. In addition, the subjectivity of chart review and retrospective judgements about the quality of care creates considerable scope for variability in results. Inter-observer agreement (or lack of it) is a key issue. Indeed Hayward’s study reported that if one reviewer rated death as “definitely or probably preventable,” the probability that the next reviewer would hold the opposite view (18%), was actually higher than the probability that he would agree (16%).6 The “kappa” statistic compares the actual agreement between observers with the agreement that might be expected by chance alone. In the Davis study, the authors quite properly draw attention to the fact that the kappa value in their study (0.47) indicates only moderate agreement between the medical officer reviewers and the expert reviewers (the level of agreement between the medical reviewers and the nursing reviewers is not reported). However, the interpretation of kappa depends on the underlying frequency of the events being studied, and on the way in which it is calculated.7 This is not explained in the Davis paper, in which the kappa is first mentioned in the discussion, and not in the methods or results section. This is not the only aspect of the paper that is difficult to understand from the material presented – more detail on a number of methodological issues would have been helpful.
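The kappa calculation described above can be made concrete with a small sketch. The confusion matrix below is entirely hypothetical (it does not come from the Davis study); its counts were simply chosen so that the result lands near the 0.47 the authors report, to show how moderate agreement can coexist with a superficially high raw agreement rate:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a k x k confusion matrix of paired ratings:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance alone."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: product of the two raters' marginal
    # proportions for each category, summed over categories.
    p_e = sum(
        (sum(confusion[i]) / total) *
        (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical review of 100 records by two reviewers
# (rows = reviewer A, columns = reviewer B; categories are
# "adverse event" and "no adverse event").
matrix = [[22, 12],
          [12, 54]]
print(round(cohens_kappa(matrix), 2))  # → 0.47
```

Note that the two reviewers agree on 76 of the 100 records, yet kappa is only 0.47, because much of that raw agreement would be expected by chance given the marginal frequencies; this is why the underlying event frequency matters when interpreting a kappa value.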
A further point is that ‘safety’ is only one (albeit important) dimension of healthcare quality. Others, such as ‘effectiveness’, may in fact improve patient outcomes more than an excessive focus on reducing adverse events. Effective or appropriate care means avoiding overuse (providing ineffective care, in which the benefits are outweighed by the risks) and underuse (not providing effective care).8 Underuse is a particular problem in New Zealand, as evidenced by the rationing of the provision of effective treatments such as coronary artery bypass surgery.9 ‘Access’, encompassing acceptability, timeliness, affordability, and equity, is another important dimension of quality; it is not much use having safe care if patients are unable to access it. ‘Patient-focused’ care is increasingly seen as a defining dimension of quality,10 and involving patients in decisions is likely to improve their safety. The Davis paper did not appear to measure patient safety in relation to appropriateness of, or access to, healthcare.
The magnitude of the problem of adverse events may be open to debate, but the fact that it is substantial is not. Where do we go from here? There is a balance that must be struck between putting resources into improving the capture of adverse events with better reporting systems (reactive approach), and putting them into strategies to address known risk areas (proactive approach). Improving patient safety in our hospitals requires the direction of much more effort into the latter option.
Drug prescribing and administration is known to be a high-risk area. Several solutions have been tested and shown to be effective. These include the provision of clinical pharmacists on ward rounds; computerised physician order entry (CPOE) linked to patient data; Palm Pilots loaded with drug reference guides; and bar coding on patient bracelets and drug administration sheets. The reductions (up to 90%) in adverse drug events through initiatives such as these have been dramatic.11–13 And yet in New Zealand, very few hospitals have enough clinical pharmacists to attend ward rounds, no hospital has invested in a CPOE prescribing system, and only a few have trialled a limited system of electronic drug administration. Instead, we rely on exhorting hospital staff to try harder not to make mistakes.
The work on human error by James Reason demonstrates the fallacy of this approach.14 It is part of the human condition to make errors; therefore systems must be designed to reduce the likelihood of errors, and to limit the impact on patients of those errors that (inevitably) still occur. A culture of safety, in which errors can be admitted without fear of blame or retribution, and in which everyone is responsible for highlighting clinical quality risk situations, is fundamental to the improvement of safety.15 It is disappointing, therefore, that the Health Practitioners Competence Assurance Bill (currently before the Select Committee), seems to go against current thinking on how to improve the quality of patient care, with its emphasis on individuals rather than on teams and systems. Indeed, Clause 51 specifically excludes protection for participants in sentinel event investigations. It is hard to see how we can learn from and improve our system failures if individuals are scared of coming forward.
The Davis study was expensive,16 and difficult to carry out. Did we need it? It should not have been necessary to provide local data to convince funding authorities of the intuitively obvious fact that the problem of iatrogenic harm in New Zealand was similar to that found elsewhere, and was worth addressing. Regrettably, in reality it probably was. It is disappointing, however, that the opportunity to go beyond the previous studies – to extend the scope in terms of putting the adverse events into context and looking at solutions – does not seem to have been taken.
The title of this editorial asks the question, how safe are our hospitals? Notwithstanding its limitations, this study (like previous overseas studies) provides an answer – not safe enough. It is futile to carry out research of this type in the absence of a commitment to respond to its findings. These local data should provide a catalyst for greater investment in patient safety and quality improvement in our hospitals. The onus is now firmly on the Government, its Ministry of Health, its DHBs, and indeed all those who work in healthcare, to respond to the challenge, and to make our healthcare system one which, in the words of our Minister of Health’s New Zealand Health Strategy,17 “all New Zealanders can trust”.
Author information: Mary Seddon, Head of Quality, Medicine and Acute Care, Middlemore Hospital, Auckland, and Member of the Effective Practice Informatics and Quality (EPIQ) Group, Department of Community Health, School of Medicine, Auckland; Alan Merry, Department of Anaesthesiology, School of Medicine, Auckland
Conflict of interest: Professor Alan Merry has financial interests in improving safety in healthcare.
Correspondence: Dr Mary Seddon, Middlemore Hospital, Private Bag 93311, Otahuhu, Auckland. Fax: (09) 276 0282; email: MZSeddon@middlemore.co.nz