Journal of the New Zealand Medical Association, 08-September-2006, Vol 119 No 1241
Quality improvement in New Zealand healthcare. Part 5: measurement for monitoring and controlling performance—the quest for external accountability
Rod Perkins, Mary Seddon; on behalf of EPIQ*
[*Effective Practice Informatics and Quality (EPIQ) based at the School of Population Health, Faculty of Medicine & Health Sciences, The University of Auckland]
In an attempt to arrive at the truth, I have applied everywhere for information but in scarcely any instance have I been able to obtain hospital records fit for any purpose of comparison. If they could be obtained they would enable us to answer many questions. They would show subscribers how their money was being spent, what amount of good was really being done with it or whether the money was not doing mischief rather than good.1
This quote encapsulates what those measuring organisational performance are trying to achieve. The fact that Florence Nightingale said it in 1863 suggests that she was a woman ahead of her time as the issues are essentially unchanged.
The Ministry of Health (MOH), Treasury, and district health boards (DHBs) are still looking for ways to get useful information and have decided that investing in performance indicators is the way forward. This interest in performance management has been reinforced by the appointment of an experienced health manager—Stephen McKernan—as the new CEO of the Ministry of Health. We suggest that with this appointment the Government is signalling its intention to focus more on value for money than it has in the past.
As noted in the previous article in this Series, Freeman2 emphasises a useful distinction between indicators that are designed and used to improve healthcare quality from within (formative clinical indicators) and those used in external performance monitoring (summative performance indicators).
In this article we will be focussing on summative measures as we examine the role of performance indicators and the two main ways they are used in attempts to improve quality of care: report cards (or league tables) comparing performance across organisations, and financial incentives attached to performance on quality indicators (pay-for-performance).
Performance indicators are intended to enable outsiders to gauge how an organisation is performing, usually in comparison with like organisations. Each indicator measures numerically one aspect of an organisation’s performance.
The ‘performance’ that is measured varies—it may be the financial performance of an organisation, the market share, or the productivity. Information from performance indicators can be used in a number of ways—to verify activity,2 impose a policy agenda,3 or to stimulate interest in quality improvement or the quality of care.
Another reason for performance indicators, particularly in publicly-funded institutions, is a quest to ensure that money is well spent. Treasury would argue that it has been putting extra money into healthcare and ask what we have to show for it. Performance indicators can also be used to satisfy the desire to hold someone accountable.4
The history of performance indicators in New Zealand is somewhat chequered. In 1989, Helen Clark as Minister of Health led the MOH and area health boards into a contractual arrangement whereby the boards had to meet certain output targets in order to receive funding. This was the beginning of performance management of productivity in the New Zealand health system.
When Crown Health Enterprises (CHEs) were formed in 1993, performance monitoring began to expand as CHEs were encouraged to compete with each other. Early work focused on searching for efficiency indicators that could be used to ‘improve management performance of individual crown health enterprises (CHEs)’.5
With hindsight, some of the early performance indicators were plainly ridiculous—for example, CHEs’ performance in public relations was gauged using an indicator with newspaper column inches of positive publicity as the numerator and total column inches of publicity as the denominator. Much of this indicator-development work took place behind closed doors because of commercial sensitivity; neither clinicians nor patients were involved, and the performance indicators chosen largely escaped scrutiny.
Both the United States and the United Kingdom have invested heavily in public reporting of comparative performance indicators. The evidence suggests that patients, and the public at large, are strongly in favour of publicly reported performance in principle. However, most studies conclude that they make little use of such reports.6,7
Purchasers and funders of healthcare also seem to be in favour in principle but also make little direct use of report cards. The key audience for public reporting appears to be the provider organisations themselves.8
Public reporting of performance indicators, or their use in league tables, demands very good data. If an organisation’s or an individual clinician’s reputation is at stake, then it must be established that the indicators compare ‘like with like.’ The enthusiasm for public reporting is well ahead of the science,8 and even the best ‘risk adjustment’ may not be able to accurately disentangle the key quality differences between organisations from those due to case-mix differences.
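The arithmetic behind risk adjustment can be sketched with indirect standardisation: each organisation’s observed event count is compared with the count expected given its own case-mix and a set of reference rates. The sketch below uses invented figures (hospital names, strata, and rates are all hypothetical) to show how a crude league table and a case-mix-adjusted ratio can tell opposite stories.

```python
# Indirect standardisation sketch (all figures invented): two hospitals
# with identical risk-specific mortality but very different case-mix.
# Each stratum maps to (number of patients, deaths) for that hospital.

reference_rates = {"low_risk": 0.01, "high_risk": 0.10}

hospitals = {
    # Hospital A treats mostly low-risk patients.
    "A": {"low_risk": (900, 9), "high_risk": (100, 10)},
    # Hospital B treats mostly high-risk patients.
    "B": {"low_risk": (100, 1), "high_risk": (900, 90)},
}

for name, strata in hospitals.items():
    patients = sum(n for n, _ in strata.values())
    observed = sum(d for _, d in strata.values())
    # Expected deaths if this hospital had the reference mortality rates.
    expected = sum(n * reference_rates[s] for s, (n, _) in strata.items())
    crude = observed / patients
    smr = observed / expected  # standardised mortality ratio (O/E)
    print(f"Hospital {name}: crude rate {crude:.1%}, O/E ratio {smr:.2f}")
```

In this invented example the crude mortality rates are 1.9% versus 9.1%—a league table ranking on crude rates would place Hospital A far ahead of Hospital B—yet both hospitals have an O/E ratio of exactly 1.00, i.e. identical performance once case-mix is taken into account. The caveat in the text remains: real risk adjustment depends on the reference rates and strata capturing the case-mix well.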
Organisational performance indicators, like clinical indicators, should be technically sound—derived from data that can be reliably obtained, be valid measures of what they are intended to measure, and focus on an area of importance.
The utility of an indicator is only as good as its ability to be measured accurately. As those in management and governance do not have knowledge about what goes on at the sharp end of service delivery, they can (and do) assume that data is being correctly obtained when in fact it often isn’t. Indicator data entry may be delegated to ward clerks who, when faced with ‘compulsory’ fields, may enter nonsense data so that they can complete the process.
The validity of a performance indicator may be called into question if there is not a close relationship between what is being measured and the performance of interest. For example, the MOH requires DHBs to report rates of readmission to hospital within 28 days. It is unclear to clinicians why 28 days was chosen. If this indicator were measuring the performance of a hospital’s discharge planning, then readmission within 7 or 14 days would be a more useful measure. By 28 days, many patients with chronic illnesses have developed a further exacerbation, and ‘perfect’ discharge planning would not prevent their readmission.
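How much the window choice matters can be shown with a small calculation over synthetic discharge data (all figures below are invented for illustration): the same set of readmissions yields quite different rates at 7, 14, and 28 days, and only the early readmissions plausibly reflect discharge planning.

```python
# Synthetic example (invented data): days from discharge to readmission
# for each readmitted patient, out of 1000 discharges in total.
days_to_readmission = [3, 5, 6, 10, 12, 13, 20, 22, 25, 26, 27, 28]
total_discharges = 1000

for window in (7, 14, 28):
    readmitted = sum(1 for d in days_to_readmission if d <= window)
    rate = readmitted / total_discharges
    print(f"Readmission within {window} days: {readmitted} patients, {rate:.1%}")
```

With these invented numbers the 28-day rate (1.2%) is four times the 7-day rate (0.3%); the extra readmissions counted at 28 days are the later ones most likely to reflect disease exacerbation rather than the quality of discharge planning.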
Those in charge of funding allocation sometimes aggregate performance indicators for report cards into simple metrics such as ticks/crosses or even smiley/frowning faces.
There are two major problems with this:
Everyone using performance indicators should be knowledgeable about the difference between common-cause and special-cause variation and be comfortable with the limits that data can show.
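A minimal sketch of that distinction: a Shewhart p-chart places 3-sigma control limits around a proportion indicator; points inside the limits reflect common-cause variation and do not warrant a reaction, while points outside signal special causes worth investigating. The monthly figures below are invented for illustration.

```python
import math

# Monthly readmission counts for one hospital (invented data):
# (readmissions, discharges) per month. Month 7 is unusually high.
monthly = [(18, 400), (22, 410), (19, 395), (25, 405), (21, 400),
           (20, 398), (41, 402), (19, 401)]

# Centre line: overall proportion across all months.
p_bar = sum(r for r, _ in monthly) / sum(n for _, n in monthly)

for month, (r, n) in enumerate(monthly, start=1):
    p = r / n
    # Binomial standard error of a proportion at the centre line.
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lower, upper = p_bar - 3 * sigma, p_bar + 3 * sigma
    flag = "special cause?" if not (lower <= p <= upper) else "common cause"
    print(f"month {month}: p={p:.3f} limits=({lower:.3f}, {upper:.3f}) {flag}")
```

In this invented series only month 7 falls outside the limits; reacting to the month-to-month wobble of the other points—common-cause variation—would be exactly the kind of over-interpretation that aggregated ticks and crosses invite.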
The conceptual and technical problems with performance indicators are compounded when they are linked to financial rewards or sanctions—a situation that has been described as ‘an ever-expanding collection of carrots and sticks [in] the hope of influencing quality and cost control.’9
The use of financial incentives linked to performance is most evident in the UK, where hospitals with a 3-star rating have until very recently been eligible for a £1 million bonus (rescinded as too many hospitals achieved 3-star status). The performance indicators used included key targets; patient focus; clinical focus; and capacity & capability focus performance indicators.10
Examples of the key targets are:
Breaches in performance indicators were aggressively managed by many National Health Service (NHS) trusts to ensure that their 3-star rating was protected, leading to some perverse behaviour. For example, in response to the arbitrary key emergency care target that ‘no patient should stay more than four hours in the ED’ (with 98% compliance required), some trusts aggressively managed any ‘breaches.’11
Some of the dysfunctional behaviour included:
Such unintended behaviour avoided the breach but did not necessarily address the patient’s needs or improve the quality of care.
The NHS has also introduced an ambitious scheme in primary care, with the introduction of 146 performance indicators which, if satisfied, provide approximately 30% of a general practitioner’s salary (~£27,000 per GP). The first evaluation of this policy12 has shown that targets were met for 83% of patients and primary care practices earned nearly 97% of the possible performance points.
As the policy-makers had estimated that practitioners would earn only 75% of available points, the initiative has contributed to the burgeoning NHS deficit.13 Furthermore, early indications are that at least some of the bonuses were achieved by excluding large numbers of patients through ‘exception reporting.’
Exceptions were defined at the outset and included factors such as: the patient had just joined the practice, had refused treatment, or despite three attempts had not attended for care.
Since 2001, the New Zealand Government has had a national primary care strategy14 and is now primed to bring in performance measures and financial incentives. The exact level of this funding is still to be determined, but the lessons from the UK experience are important.
Our primary care performance measurement programme has identified a number of performance indicators including:
In addition, there are performance indicators which are intended to gauge the potential of Primary Healthcare Organisations (PHOs) to operate effectively and improve performance—for example:
All performance indicators give rise to perverse incentives and unintended consequences,16,17 and these are likely to be exaggerated when there are financial rewards or losses at stake (see Box 1). Both report cards and financial incentives are blunt instruments designed to change provider behaviour—in the hope that the change will be positive for the quality of care.
Box 1. Possible dysfunctional consequences of public reporting of performance indicators.21
Performance indicators by definition focus on one aspect of care—this may encourage organisations to concentrate on just those areas being measured and like clinical indicators, this means that those things which are not easily measured may be ignored.
Some aspects of healthcare quality lend themselves to measurement—e.g. waiting times in the emergency department and delays in surgery. Other important activities (e.g. accurate diagnosis and proficiency in discussing end-of-life issues with patients) are much harder to measure, and risk being ignored in the rush to report performance. Furthermore, unlike clinical indicators, externally imposed performance indicators cannot ‘drill down’ to provide information on what actions are needed to improve performance.
Organisations that gear themselves to ‘do well’ in report cards, or to increase their financial gains may be focusing on short-term goals, and in doing so, neglect the long term strategic vision and investment.
Performance indicators attempt to impose ‘quality assurance’ externally; however, if clinicians do not have confidence in their validity, or if indicators do not align with professional values and assess an important clinical area, they may disengage from the process or, worse still, ‘game’ the results. Gaming—‘the alteration of behaviour to look good rather than the implementation of substantive improvements’18—can be damaging not only to the quality improvement effort but also to the involvement of clinicians in quality improvement work.
For example, problems exist with accreditation in New Zealand, where clinicians know that there are deep and serious problems with care provision, yet their organisation gets accredited without these problems being exposed. This leads to cynicism about the value of being involved in the process and to the questioning of the managerial drivers.
An even more important perverse behaviour is the avoidance of potentially ‘high-risk’ patients—if such patients are going to make the ‘figures’ look bad in a league table, there is a danger of pressure to operate only on low-risk cases, or to concentrate on easy-to-reach patients.19
Even when performance indicators are good measures, they may in fact threaten the trust necessary for clinicians to engage in quality improvement work. By elevating the status of external inspection they decrease that of the internal informal work that most organisations have. Indeed, it has been said that the ‘indicator industry has begun to suffer from the regulators’ delusion that central systems of oversight are the sole guarantors of quality’.20
Despite the enthusiasm for performance indicators and the millions of dollars spent on their development, relatively little is known about the actual impact on improving quality of care.
Performance indicators cannot capture the range and complexity of health service activity and are blunt and dangerous tools when used in pursuit of quality—that is, if they have any impact at all.22
Healthcare is complex and less deterministic than traditional industries. The link between actions and outcomes is not necessarily strong or direct, and it is modified by non-healthcare factors (such as employment and deprivation) and patient-mix.20 There is little evidence of a positive impact from performance indicators on health service delivery or health outcomes.23 Even the oft-quoted example—the New York State league table, published in a New York newspaper, where individual cardiac surgeons were ranked by the mortality rates of their coronary artery bypass patients—has been questioned. Yes, the mortality rates improved, but they also improved in other states that did not have public reporting. New England achieved similar reductions in mortality through confidential reporting and the sharing of best practice.24 Furthermore, in New York there was evidence that low-volume surgeons stopped operating, but also that high-risk cases went elsewhere.25,26
As discussed above, the blunt instrument that is public reporting resulted in unintended consequences.
Firstly, we need to be cautious about the implementation of performance management systems (Box 2). It may be useful to de-politicise public reporting.8 Both the US and the UK have agencies with some political independence (the National Committee for Quality Assurance and the Commission for Health Improvement, respectively).
To gain clinical buy-in and command credibility, performance indicators need to address areas that are clinically important—for both clinicians and patients. We need to devote as much attention as possible to the technical issues of performance indicators—validity, reliability, and case-mix—and to anticipate and monitor unintended consequences. It is important to ensure that performance indicators relate to processes of care over which clinicians have control,27 and that they are not unduly affected by the non-healthcare factors discussed above.
Box 2. Lessons for New Zealand—ways of reducing the dysfunctional responses to performance indicators
Finally, there is a substantial opportunity cost in developing performance indicators, collecting and analysing the data, and risk-adjusting so that comparisons are useful. This is money that is not being spent on actually improving the quality of care delivered—a point made by the Waitemata DHB CEO on his resignation: he cited stifling compliance issues as interfering with his ability to introduce patient safety initiatives, and urged the sector to refocus on concrete improvements to patient care, such as better drug dispensing and infection control.28
When using financial incentives, they need to be large enough to influence performance, but not so large that they encourage distortions of the clinical processes and tenets of professionalism.
Performance indicators, used in report cards or linked to financial incentives, seem to be firmly on the political agenda in New Zealand. If the NHS in the UK is any guide, we should expect to see increasingly vigilant surveillance by the MOH and DHBs of performance and quality in the New Zealand system.
While we endorse the MOH’s interest in healthcare quality, there are serious pitfalls in how performance indicators are used. To be successful, they require funders and managers educated in the subtleties of performance management and indicator use (i.e. not to be content with aggregated measures of little validity) as well as clinicians who can challenge the indicator set.
It is clear that indicators of health care quality are not axiomatically good.2
Two things can be guaranteed: performance indicator use by those funding health systems is on the increase, and indicator use is far from an exact science. What we have tried to do in this article is to balance the sometimes zealous claims of the proponents of external performance monitoring against the realities of the problems such monitoring can cause.
Conflict of interest: No conflict.
Author information. Mary Seddon, John Buchanan on behalf of EPIQ.
EPIQ is a School of Population Health Group (at Auckland University) with an interest in improving quality in healthcare in New Zealand.
EPIQ members involved in this Series are:
Correspondence: Mary Seddon, Senior Lecturer in Quality Improvement Epidemiology & Biostatistics, School of Population Health, University of Auckland, Private Bag 92019, Auckland. Fax: (09) 373 7503; email MZSeddon@middlemore.co.nz