Journal of the New Zealand Medical Association, 18-August-2006, Vol 119 No 1240
Quality improvement in New Zealand healthcare. Part 4: achieving effective care through clinical indicators
John Buchanan, Allan Pelkowitz, Mary Seddon, on behalf of EPIQ*
[*Effective Practice Informatics and Quality (EPIQ) based at the School of Population Health, Faculty of Medicine & Health Sciences, Auckland University]
Performance indicators, key performance indicators (KPIs), and clinical indicators—what are they, how do they differ, and how are they used? In this article we will attempt to answer these questions and equip clinicians with the tools to spot useful clinical indicators.
Performance indicators were placed firmly on the healthcare agenda in 1986 when the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in the United States launched an “Agenda for Change” to modernise the accreditation process. The attempt to collect and report “performance” data was a centrepiece of the JCAHO’s new direction.1
Performance data incorporated into accreditation was to be used to satisfy the demand by the payers of healthcare for objective evidence on the quality of that care. At the same time, healthcare organisations were progressively embracing the concept of “continuous quality improvement” (CQI) and exploring the role of performance indicators in the quest to improve the effectiveness of care. Data generated through the use of reliable and valid performance measures were recognised as central to the CQI process.
Thus from the outset there have been two principal uses of performance indicators:

- Summative—for external accountability and assurance (e.g. accreditation and performance management); and
- Formative—for continuous quality improvement within the clinical setting.
The distinction is very important because, as Freeman3 has pointed out, the use of performance indicators in assurance and performance management systems—summative indicators—has the potential to undermine the conditions required for continuous quality improvement in the clinical setting. Summative performance indicators (e.g. accreditation, Pay for Performance) may increase compliance costs, meaning that there is less money available for CQI. If they are used to ‘punish’ behaviour they may also drive down innovation and trust, leading to gaming of data.
Clinical indicators are a subset of performance indicators. They have been variously defined but they are essentially “an objective measure of either the process or outcome of patient care in quantitative terms.”4 They are usually rate based with a numerator and denominator, both of which must be clearly defined. They do not measure quality directly, but flag potential problems and possible opportunity to improve care.5
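As a hypothetical illustration of the rate-based structure described above (the indicator, numerator, and denominator here are invented for the example, not drawn from the ACHS sets), consider the proportion of eligible myocardial infarction patients prescribed aspirin at discharge:

```python
# Hypothetical rate-based clinical indicator (illustrative only).
# Indicator: proportion of eligible AMI patients prescribed aspirin at discharge.
# Numerator:   eligible patients who received aspirin.
# Denominator: all eligible AMI patients (eligibility criteria must be
#              clearly defined in advance, e.g. exclusions for contraindications).

def indicator_rate(numerator: int, denominator: int) -> float:
    """Return the indicator rate as a percentage of the denominator."""
    if denominator == 0:
        raise ValueError("Denominator must be non-zero")
    return 100.0 * numerator / denominator

# Example: of 120 eligible patients in the reporting period, 102 received aspirin.
print(f"{indicator_rate(102, 120):.1f}%")  # prints 85.0%
```

The point of the explicit numerator and denominator definitions is that two hospitals (or two reporting periods) can only be compared if both counts are assembled the same way.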
It is important to appreciate that “The benefits to be gained from the use of clinical indicators do not lie in the collection of the data, but in how those data are used; that is, in the data analysis and the actions taken to achieve sustained improvements in clinical practice. Clinical indicators do not ‘work’ unless used effectively by clinicians and managers to bring about improvements.”6
There are different objectives for clinical indicators, depending on who is using the indicators and whether the assessment is intended to be summative or formative. They can be used by the “manager” to control clinical behaviour, usually with the aim of decreasing costs. They can also be used by the Ministry of Health as international benchmarks, and as a means to direct funding.
For clinicians, the prime objective is to use clinical indicators to improve patient care. They do this by measuring an aspect of care over time, using indicators as flags to possible problem areas and/or potential areas for improvement. Clinical indicators can also be used to provide evidence that any changes introduced have in fact resulted in improvements in the care provided. Clinical involvement from the “bottom-up” helps to ensure that indicators are used as a formative mechanism for quality improvement in patient care, rather than as summative mechanisms for “top-down” external accountability with a focus on “assurance” rather than “improvement”.
Most of the clinical indicators in use in New Zealand hospitals are derived from the Australian Council on Healthcare Standards (ACHS) indicator sets that have been developed in conjunction with Australian and New Zealand Medical Colleges, Associations, and Societies since 1989.5
The aims of the ACHS indicator program are laudable (to increase the involvement of clinicians in evaluation and quality improvement activities, and to facilitate the collection of national data on the processes and outcomes of patient care)—but there are several problems with the current reliance on the ACHS indicator set.
Firstly, the ACHS clinical indicators are mostly not evidence-based, and they do not adequately represent the subspecialties within the many disciplines. Secondly, forced use of externally derived clinical indicators removes clinical ownership and makes their use for quality improvement less likely. Thirdly, benchmarking against a standard can encourage complacency once the benchmark is reached, which is at variance with the continuous quality improvement ethos.
There are recognised criteria for selecting clinical indicators. A brief understanding of Donabedian’s model for quality improvement will help guide decisions (see Box 1).7
Box 1. Model for quality improvement
Donabedian’s model distinguishes three aspects of care that can be measured: structure (the setting, staffing, and resources within which care is delivered), process (what is actually done to and for the patient), and outcome (the resulting change in the patient’s health status).
Outcomes are of prime interest, but there are problems with measuring these directly: it may take too long to observe outcomes (therefore need high volumes and/or early endpoints); they can be confounded by problems outside the healthcare sphere of influence (e.g. poor housing, poor incomes); and they are expensive to collect. There is a consensus that measuring process indicators is preferable if there is good evidence that the process being measured is related to outcomes of interest.8 For example, there is good evidence showing that giving aspirin and beta-blockers (process measures) to patients suffering an acute myocardial infarction improves their survival (the outcome of interest).9,10
Table 1. Key attributes of clinical indicators11
For clinical indicators to be useful in improving patient care (and therefore outcomes) it is important for each clinical group in an organisation to identify which clinical indicators are likely to be useful for their improvement efforts. In this way, only those indicators that the clinical team identifies with are chosen, and there is likely to be better ownership and association with improvement efforts.
There are several key attributes to consider when choosing an indicator (see Table 1).11 There are nine key attributes in this Table—for example, an indicator should be clearly defined, have a clear intent, be practical to collect, and relevant. An indicator might not satisfy all the attributes—but if it does not, then the risks associated with this must be explicitly discussed and monitored.
Once an indicator is selected, it should be critically evaluated. Box 2 outlines a series of questions that may be applied to any proposed indicator in order to understand its usefulness and potential impact on clinical work. These questions attempt to extract information about the indicator in the managerial, clinical, and economic spheres.
Box 2. Critical appraisal of clinical indicators
Box 2 also provides examples of the questions one might ask to better understand the indicator and how good (or bad) it will be. It is unlikely that many indicators will satisfy all levels; however, the information gained from this exercise allows everyone to understand and explicitly state the limitations of the indicator. All can then understand why the indicator was picked, what it can and cannot achieve, and what else needs to be done to limit that indicator’s weaknesses. Box 2 also includes examples of issues that have arisen from previously used indicators, to show that these could have been foreseen with pertinent evaluation.
The analysis and interpretation of indicator data by clinicians (who are familiar with the clinical process) is important for quality improvement. Clinical indicators generate data—but data need to be analysed and presented as usable information if they are going to be used to improve care. Furthermore, clinicians need to understand the basic principles and limitations of data analysis and presentation to be able to use the information appropriately. The usefulness of the data is primarily limited by the adequacy of data collection (“garbage in, garbage out”).
Clinical indicator data are collected over time, and the most effective way to present them as useful information is either a run chart or a control chart. Both use a set of statistical rules to determine whether the pattern revealed by the data represents the normal fluctuation about a median seen in any process (common-cause variation) or something that needs further investigation (special-cause variation). It is important to use these rules to avoid the common problem of seeing trends where none exist, or of over-reacting to common-cause variation (and thereby making the system of care more unstable). Several texts deal with this subject for those who want to know more about run charts or control charts.13–15
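One widely used run-chart rule can be sketched in a few lines of code. This is an illustrative example only, not the exact rule set of the texts cited above: the rule flags a “shift” (possible special-cause variation) when six or more consecutive points fall on the same side of the median, with points exactly on the median skipped; the threshold of six is one common convention and other sources use slightly different values.

```python
# Illustrative run-chart "shift" rule (assumed convention: a run of 6+
# consecutive points on one side of the median signals special-cause
# variation; the cited texts may state the rules differently).
from statistics import median

def detect_shift(data: list[float], run_length: int = 6) -> bool:
    """Return True if the data contain a run of `run_length` or more
    consecutive points on the same side of the median."""
    m = median(data)
    run = 0
    last_side = 0
    for x in data:
        if x == m:
            continue  # points on the median neither extend nor break a run
        side = 1 if x > m else -1
        run = run + 1 if side == last_side else 1
        last_side = side
        if run >= run_length:
            return True  # possible special-cause variation: investigate
    return False  # only common-cause variation evident

# Monthly indicator rates (%), showing a sustained rise after a change in practice:
rates = [78, 80, 79, 81, 78, 80, 85, 86, 88, 87, 89, 90]
print(detect_shift(rates))  # prints True
```

Applying an explicit rule like this, rather than eyeballing the chart, is what protects clinicians from reacting to ordinary month-to-month noise as though it were a real change in the process.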
Clinical indicators can be a powerful means of effecting change if used correctly. It is important to understand who has defined the indicators and for what purpose. It is also vital that the indicators are adequately assessed, in terms of the changes they will make to the whole system, before they are adopted. Even with this approach, clinicians and managers will still be surprised when something unexpected occurs, and should be in a position to reinforce or mitigate such effects as they become apparent. This is made a lot easier with access to accurate and timely data visible to all.
Conflict of interest: No conflict.
Author information: John Buchanan, Allan Pelkowitz, and Mary Seddon—on behalf of EPIQ.
EPIQ is a School of Population Health Group (at Auckland University) with an interest in improving quality in healthcare in New Zealand.
EPIQ members involved in this Series are:
Correspondence: Mary Seddon, Senior Lecturer in Quality Improvement, Epidemiology & Biostatistics, School of Population Health, University of Auckland, Private Bag 92019, Auckland. Fax: (09) 373 7503; email MZSeddon@middlemore.co.nz