The New Zealand Medical Journal

 Journal of the New Zealand Medical Association, 08-September-2006, Vol 119 No 1241

Quality improvement in New Zealand healthcare. Part 5: measurement for monitoring and controlling performance—the quest for external accountability
Rod Perkins, Mary Seddon; on behalf of EPIQ*
[*Effective Practice Informatics and Quality (EPIQ) based at the School of Population Health, Faculty of Medicine & Health Sciences, The University of Auckland]
Abstract
In this fifth article in the Series on quality improvement, we examine organisational performance indicators, the consequences of their use, and what policy-makers and clinicians need to do to minimise their potential adverse effects. The Ministry of Health (MOH) and district health boards (DHBs) are increasingly using performance indicators to measure activity and are looking at how they can be used to incentivise provider performance. Two different approaches are used: report cards (comparing individual and organisational performance) and pay-for-performance (which provides financial payments to those organisations or individual providers who do what the funder wants). Given the experience of the United Kingdom and the United States with both approaches, it would seem prudent for New Zealand to proceed cautiously with using performance indicators to modify clinician behaviour.
We argue that the integrity of the system overall depends on clinicians taking an interest in the indicators used by the MOH and DHBs. Furthermore, those using the indicators must invest in learning about system variation and how this affects the external monitoring of performance.

In an attempt to arrive at the truth, I have applied everywhere for information but in scarcely any instance have I been able to obtain hospital records fit for any purpose of comparison. If they could be obtained they would enable us to answer many questions. They would show subscribers how their money was being spent, what amount of good was really being done with it or whether the money was not doing mischief rather than good.1
This quote encapsulates what those measuring organisational performance are trying to achieve. The fact that Florence Nightingale said it in 1863 suggests that she was a woman ahead of her time as the issues are essentially unchanged.
The MOH, Treasury, and DHBs are still looking at ways to get useful information and have decided that investing in performance indicators is the way forward. This interest in performance management has been reinforced with the appointment of an experienced health manager—Stephen McKernan—as the new CEO of the Ministry of Health. We suggest that by this appointment the Government is signalling its intention to focus more on value for money considerations than it has in the past.
As noted in the previous article in this Series, Freeman2 emphasises a useful distinction between indicators that are designed and used to improve healthcare quality from within (formative clinical indicators) and those used in external performance monitoring (summative performance indicators).
In this article we will be focussing on summative measures as we examine the role of performance indicators and the two main ways that they are used in an attempt to improve quality of care: report cards (or league tables) comparing performance across organisations, and the use of financial incentives attached to improved performance on quality indicators (pay-for-performance).

What are performance indicators and why use them?

Performance indicators are intended to enable outsiders to gauge how an organisation is performing, usually in comparison with other like organisations. Performance indicators measure numerically one aspect of an organisation’s performance.
The ‘performance’ that is measured varies—it may be the financial performance of an organisation, the market share, or the productivity. Information from performance indicators can be used in a number of ways—to verify activity,2 impose a policy agenda,3 or to stimulate interest in quality improvement or the quality of care.
Another reason for performance indicators, particularly in publicly-funded institutions, is the quest to ensure that money is well spent. Treasury would argue that it has been putting extra money into healthcare and would ask what there is to show for it. Performance indicators can also be used to satisfy the desire to hold someone accountable.4
The history of performance indicators in New Zealand is somewhat chequered. In 1989, Helen Clark as Minister of Health led the MOH and area health boards into a contractual arrangement whereby the boards had to meet certain output targets in order to receive funding. This was the beginning of performance management of productivity in the New Zealand health system.
When Crown Health Enterprises (CHEs) were formed in 1993, performance monitoring began to expand as CHEs were encouraged to compete with each other. Early work focused on searching for efficiency indicators that could be used to ‘improve management performance of individual crown health enterprises (CHE)’.5
With hindsight, some of the early performance indicators were plainly ridiculous—for example, CHEs’ performance in the area of public relations was gauged using an indicator that measured newspaper column inches of positive publicity as the numerator and total column inches of publicity as the denominator. Much of this work on developing performance indicators took place behind closed doors because of commercial sensitivity; neither clinicians nor patients were involved, and the performance indicators chosen largely avoided scrutiny.

Strategies to modify provider behaviour: report cards

Both the United States and the United Kingdom have invested heavily in public reporting of comparative performance indicators. The evidence suggests that patients, and the public at large, are strongly in favour of publicly reported performance in principle. However, most studies conclude that they make little use of such reports.6,7
Purchasers and funders of healthcare likewise seem to be in favour in principle, but they too make little direct use of report cards. The key audience for public reporting appears to be the provider organisations themselves.8
Public reporting of performance indicators, or their use in league tables, demands very good data. If an organisation’s or an individual clinician’s reputation is at stake, then it needs to be established that the indicators are comparing ‘like with like’. The enthusiasm for public reporting is well ahead of the science,8 and even the best ‘risk adjustment’ may not be able to accurately disentangle the key quality differences between organisations from those due to case-mix differences.
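To make the case-mix point concrete, the sketch below shows indirect standardisation, one common textbook form of risk adjustment: each hospital’s observed events are compared with the number expected if reference (stratum-specific) rates applied to its own patient mix. The hospitals, strata, and rates are invented for illustration; this is not the adjustment method used by the MOH or DHBs.

```python
# Hypothetical sketch of indirect standardisation (one simple form of risk adjustment).
def standardised_ratio(strata):
    """Each stratum is (observed_events, n_patients, reference_rate)."""
    observed = sum(obs for obs, n, ref in strata)
    expected = sum(n * ref for obs, n, ref in strata)
    return observed / expected  # ratio > 1 suggests worse than the reference population

# Hospital A treats mostly high-risk patients; Hospital B mostly low-risk (invented figures).
hospital_a = [(8, 100, 0.06),   # high-risk stratum: 6% reference event rate
              (2, 50, 0.01)]    # low-risk stratum: 1% reference event rate
hospital_b = [(2, 20, 0.06),
              (3, 200, 0.01)]

print(round(standardised_ratio(hospital_a), 2))  # 1.54 (crude event rate 6.7%)
print(round(standardised_ratio(hospital_b), 2))  # 1.56 (crude event rate 2.3%)
```

In this toy example the crude rates differ threefold, yet the standardised ratios are almost identical; a league table built on crude rates would rank the hospitals very differently from one built on adjusted figures, and the adjustment itself is only as good as the strata and reference rates chosen.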
Organisational performance indicators, like clinical indicators, should be technically sound—derived from data that can be reliably obtained, be valid measures of what they are intended to measure, and focus on an area of importance.
The utility of an indicator is only as good as its ability to be measured accurately. As those in management and governance are often removed from the sharp end of service delivery, they can (and do) assume that data are being obtained correctly when in fact they often are not. Indicator data entry may be delegated to ward clerks who, when faced with ‘compulsory’ fields, may enter nonsense data simply to complete the process.
The validity of a performance indicator may be called into question if there is not a close relationship between what is being measured and the performance of interest. For example, the MOH requires DHBs to report on rates of readmission to hospital within 28 days. It is unclear to clinicians why 28 days was chosen. If this indicator were measuring the performance of a hospital’s discharge planning, then readmission within 7 or 14 days would be a more useful indicator. By 28 days, many patients with chronic illnesses have developed a further exacerbation, and ‘perfect’ discharge planning would not prevent their readmission. The illustration below shows how strongly the choice of window drives this indicator.
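As a minimal sketch (the episodes and dates are invented), the same set of discharges yields very different ‘readmission rates’ depending on whether a 7-, 14-, or 28-day window is used:

```python
# Hypothetical illustration: the readmission-window choice drives the indicator value.
from datetime import date

# (discharge_date, readmission_date or None) -- invented episodes
episodes = [
    (date(2006, 3, 1), date(2006, 3, 5)),    # readmitted after 4 days
    (date(2006, 3, 2), date(2006, 3, 25)),   # readmitted after 23 days
    (date(2006, 3, 4), None),                # not readmitted
    (date(2006, 3, 6), date(2006, 3, 30)),   # readmitted after 24 days
]

def readmission_rate(episodes, window_days):
    readmitted = sum(
        1 for discharged, readmit in episodes
        if readmit is not None and (readmit - discharged).days <= window_days
    )
    return readmitted / len(episodes)

for window in (7, 14, 28):
    print(window, readmission_rate(episodes, window))
# 7 days: 0.25; 14 days: 0.25; 28 days: 0.75 -- the wider the window, the more
# readmissions unrelated to discharge planning are swept into the measure.
```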
Those in charge of funding allocation sometimes aggregate performance indicators for report cards into simple metrics such as ticks/crosses or even smiley/frowning faces.
There are two major problems with this:
  • Reducing performance data to a series of smiley/frowning faces masks the complexity inherent in any data collection, and it can hide problems with the validity and reliability of the measures themselves.
  • Variation in the data may be an intrinsic property of the data (common cause variation) and not a reflection of any variation of the quality of care being measured. Organisations and their managers may be getting rewards (or punishment) for processes that are operating normally—not good, not bad—within the normal variation to be expected about a mean. If they react to this common-cause variation they can damage the system of care by wasting people’s time and by making the system less stable.
Everyone using performance indicators should be knowledgeable about the difference between common-cause and special-cause variation and be comfortable with the limits that data can show.
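The logic behind that distinction can be sketched with approximate control limits. The figures below are invented, and the p-chart-style calculation shown is only one simple way of setting limits, not a prescription for how the MOH or DHBs should monitor indicators.

```python
# Hypothetical sketch: 3-sigma control limits for a monthly proportion (p-chart style).
# Real control-chart work needs more care (small counts, trends, run rules, etc.).
from math import sqrt

monthly = [(18, 250), (22, 240), (15, 260), (25, 255), (20, 245), (41, 250)]  # (events, n)

p_bar = sum(e for e, n in monthly) / sum(n for e, n in monthly)  # overall proportion

for events, n in monthly:
    p = events / n
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    lower, upper = p_bar - 3 * sigma, p_bar + 3 * sigma
    flag = "common-cause" if lower <= p <= upper else "special-cause?"
    print(f"{p:.3f}  limits {lower:.3f}-{upper:.3f}  {flag}")
# Only the last month falls outside the limits in this toy series; the other months
# show the system's normal (common-cause) variation, and rewarding or punishing them
# month by month would simply be reacting to noise.
```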

Strategies to modify provider behaviour: financial incentives—pay-for-performance

The conceptual and technical problems with performance indicators are compounded when they are linked to financial rewards or sanctions—a situation that has been described as ‘an ever expanding collection of carrots and sticks [in] the hope of influencing quality and cost control’.9
The use of financial incentives linked to performance is most evident in the UK, where hospitals with a 3-star rating have until very recently been eligible for a £1 million bonus (rescinded because too many hospitals achieved 3-star status). The performance indicators used covered key targets, patient focus, clinical focus, and capacity and capability.10
Examples of the key targets are:
  • A&E emergency admission waits (<4 hours).
  • All cancers: maximum 2-week wait.
  • Breast cancer: 1 month from diagnosis to treatment.
Breaches of performance indicators were aggressively managed by many National Health Service (NHS) trusts to protect their 3-star rating, leading to some perverse behaviour. For example, in response to the arbitrary key emergency care performance indicator that ‘no patient should stay more than four hours in the ED’, with a target of 98% compliance, some trusts managed any ‘breaches’ aggressively.11
Some of the dysfunctional behaviour included:
  • Starting and stopping the clock at different stages in the patient journey.
  • Opening a ‘short-stay’ ward next to the ED and moving patients into this if they threatened to ‘breach.’
  • Moving patients to an inpatient ward—even if their evaluation was not complete.
  • When the indicator was measured quarterly, it was not unknown for trusts to hire more staff at the end of the audit period to inflate the percentages.
  • Managers cajoling and sometimes bullying staff to meet the target.
Such unintended behaviour avoided the breach but did not necessarily address the patient’s needs or improve the quality of care.
The NHS has also introduced an ambitious scheme in primary care, with 146 performance indicators which, if satisfied, provide approximately 30% of the general practitioner’s salary (~£27,000 per GP). The first evaluation of this policy12 has shown that targets were met for 83% of patients and that primary care practices earned nearly 97% of the possible performance points.
As the policy-makers had estimated that practitioners would earn only 75% of available points, the initiative has contributed to the burgeoning NHS deficit.13 Furthermore, early indications are that at least some of the bonuses were achieved by excluding large numbers of patients through ‘exception reporting.’
Exceptions were defined at the outset and included factors such as: the patient had just joined the practice, had refused treatment, or despite three attempts had not attended for care.
Since 2001, the New Zealand Government has had a national primary care strategy14 and is now primed to bring in performance measures and financial incentives. The exact level of this funding is still to be determined, but the lessons from the UK experience are important.
Our primary care performance measurement programme has identified a number of performance indicators including:
  • Children fully vaccinated by their 2nd birthday.
  • Influenza vaccinations in the elderly (over 65s).
  • Cervical smears recorded in the last 3 years.
  • Breast screening recorded in the last 2 years.
In addition, there are performance indicators which are intended to gauge the potential of Primary Healthcare Organisations (PHOs) to operate effectively and improve performance—for example:
  • Percentage of valid National Health Index (NHI) numbers on PHO patient registers.
  • Access for high needs enrolees.15

Possible dysfunctional consequences of public reporting and pay-for-performance

All performance indicators give rise to perverse incentives and unintended consequences,16,17 and these are likely to be exaggerated when there are financial rewards or losses at stake (see Box 1). Both report cards and financial incentives are blunt instruments designed to change provider behaviour—in the hope that the change will be positive for the quality of care.
Box 1. Possible dysfunctional consequences of public reporting of performance indicators.21
Organisations or individuals may alter their behaviour by:
  • Concentrating on short-term goals (getting their rating higher) and neglecting the long-term strategy.
  • Concentrating on those areas being measured to the detriment of other important areas.
  • Placing great emphasis and energy on not being exposed as an outlier, rather than on a desire to be outstanding.
  • Eschewing innovation for fear of failure.
  • Altering their behaviour to gain strategic advantage (‘gaming’).
  • Entering false or corrupt data.
  • Avoiding the treatment of high-risk patients if this is going to reflect badly in a public report.
  • Disengaging from quality improvement initiatives if the performance indicators do not seem relevant and are externally imposed.
Performance indicators by definition focus on one aspect of care. This may encourage organisations to concentrate on just those areas being measured and, as with clinical indicators, means that those things which are not easily measured may be ignored.
Some aspects of healthcare quality lend themselves to measurement—e.g. waiting times in the emergency department and delays in surgery. Other important activities (e.g. accurate diagnosis and proficiency in discussing end-of-life issues with patients) are much harder to measure, and risk being ignored in the rush to report performance. Furthermore, unlike clinical indicators, externally imposed performance indicators cannot ‘drill down’ to provide information on what actions are needed to improve performance.
Organisations that gear themselves to ‘do well’ in report cards, or to increase their financial gains may be focusing on short-term goals, and in doing so, neglect the long term strategic vision and investment.
Performance indicators attempt to impose ‘quality assurance’ externally; however, if clinicians do not have confidence in their validity, or if the indicators do not align with professional values or assess an important clinical area, clinicians may disengage from the process or, worse still, ‘game’ the results. Gaming—‘the alteration of behaviour to look good rather than the implementation of substantive improvements’18—can be damaging not only to the quality improvement effort but also to the involvement of clinicians in quality improvement work.
For example, problems exist with accreditation in New Zealand, where clinicians know that there are deep and serious problems with care provision, yet their organisation gets accredited without these problems being exposed. This leads to cynicism about the value of being involved in the process and to the questioning of the managerial drivers.
An even more important perverse behaviour is the avoidance of potentially ‘high-risk’ patients: if such patients are going to make the ‘figures’ look bad in a league table, there is a danger of pressure to operate only on low-risk cases, or to concentrate on easy-to-reach patients.19
Even when performance indicators are good measures, they may in fact threaten the trust necessary for clinicians to engage in quality improvement work. By elevating the status of external inspection they diminish that of the internal, informal improvement work that most organisations already do. Indeed, it has been said that the ‘indicator industry has begun to suffer from the regulators’ delusion that central systems of oversight are the sole guarantors of quality’.20

What is the evidence that performance indicators improve quality of care?

Despite the enthusiasm for performance indicators and the millions of dollars spent on their development, relatively little is known about the actual impact on improving quality of care.
Performance indicators cannot capture the range and complexity of health service activity and are blunt and dangerous tools when used in pursuit of quality—that is, if they have any impact at all.22
Healthcare is complex and less deterministic than traditional industries. The link between actions and outcomes is not necessarily strong or direct, and it is modified by non-healthcare factors (such as employment and deprivation) and by patient-mix.20 There is little evidence of a positive impact from performance indicators on health service delivery or health outcomes.23 Even the oft-quoted example—the New York State league table, published in a New York newspaper, in which individual cardiac surgeons were ranked by the mortality rates of their coronary artery bypass patients—has been questioned. Yes, the mortality rates improved, but they also improved in other states that did not have public reporting; New England achieved similar improvements in mortality through confidential reporting and the sharing of best practice.24 Furthermore, in New York there was evidence that low-volume surgeons stopped operating and that high-risk cases went elsewhere.25,26
As discussed above, the blunt instrument that is public reporting resulted in unintended consequences.

What can New Zealand learn from all this?

Firstly, we need to be cautious about the implementation of performance management systems (Box 2). It may be useful to de-politicise public reporting.8 Both the US and the UK have agencies with some political independence (the National Committee for Quality Assurance and the Commission for Health Improvement, respectively).
To gain clinical buy-in and command credibility, performance indicators need to address areas that are clinically important to both clinicians and patients. We need to direct as much attention as possible to the technical issues of performance indicators—validity, reliability, and case-mix. We also need to anticipate and monitor unintended consequences. It is important to ensure that the performance indicators relate to processes of care over which clinicians have control,27 and that they are not unduly affected by the non-healthcare factors discussed above.
Box 2. Lessons for New Zealand—ways of reducing the dysfunctional responses to performance indicators
  • Anticipate and monitor unintended consequences—be prepared to modify indicators and their use accordingly.
  • Consider an agency at ‘arms length’ from the Ministry of Health and the Treasury to develop and monitor a performance programme.
  • Address the technical issues of validity, reliability, and case-mix.
  • Focus performance indicators on clinically important areas over which providers have control.
  • Do not set financial incentives at a level which might distort clinical practice.
  • Undertake formative evaluations of the effectiveness of the performance indicator programme as a whole, and learn from this.
Finally, there is a substantial opportunity cost involved in developing performance indicators, collecting and analysing the data, and risk adjusting so that comparisons are useful. This is money that is not being spent on actually improving the quality of care delivered, and it was a reason given by the Waitemata DHB CEO for his resignation: he cited stifling compliance requirements as interfering with his ability to introduce patient safety initiatives, and urged the sector to refocus on concrete improvements to patient care, such as better drug dispensing and infection control.28
Financial incentives need to be large enough to influence performance, but not so large that they encourage distortions of clinical processes and the tenets of professionalism.

Summary

Performance indicators, used in report cards or linked to financial incentives, seem to be firmly on the political agenda in New Zealand. If the NHS in the UK is any guide, we should expect to see increasingly vigilant surveillance by the MOH and DHBs of performance and quality in the New Zealand system.
While we endorse the MOH’s interest in healthcare quality, there are serious pitfalls in how performance indicators are used. To be successful, they require funders and managers who are educated in the subtleties of performance management and indicator use (i.e. who are not content with aggregated measures of little validity), as well as clinicians who can challenge the indicator set.
It is clear that indicators of health care quality are not axiomatically good.2
Two things can be guaranteed: performance indicator use by those funding health systems is on the increase, and indicator use is far from an exact science. What we have tried to do in this article is to balance the sometimes zealous advocacy of external performance monitoring against the realities of the problems such monitoring can cause.
Conflict of interest: No conflict.
Author information. Mary Seddon, John Buchanan on behalf of EPIQ.
EPIQ is a School of Population Health Group (at Auckland University) with an interest in improving quality in healthcare in New Zealand.
EPIQ members involved in this Series are:
  • Associate Professor John Buchanan
  • Professor Rod Jackson
  • Professor Alan Merry
  • Dr Allan Pelkowitz
  • Dr Rod Perkins
  • Gillian Robb
  • Dr Lynn Sadler
  • Dr Mary Seddon
Correspondence: Mary Seddon, Senior Lecturer in Quality Improvement, Epidemiology & Biostatistics, School of Population Health, University of Auckland, Private Bag 92019, Auckland. Fax: (09) 373 7503; email MZSeddon@middlemore.co.nz
References:
  1. Nightingale F. Notes on hospitals. London: J W Parker; 1863.
  2. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Services Management Research. 2002;15:126–37.
  3. Jacobs K, Manzi T. Performance indicators and social constructivism: conflict and control in housing management. Critical Social Policy. 2000;20:85–103.
  4. Leggat SG, Narine L, Lemieux-Charles L, et al. A review of organizational performance assessment in health care. Health Services Management Research. 1998;11:3–23.
  5. Crown Health Enterprise Board Designate Briefing Pack, CHE Establishment Unit, Department of Prime Minister and Cabinet, Wellington, 1993.
  6. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–74.
  7. Marshall MN, Shekelle PG, Davies HTO, et al. Public reporting on quality in the United States and United Kingdom. Health Aff. 2003;22:134–48.
  8. Marshall MN, Romano PS. Impact of reporting hospital performance. Qual Saf Health Care. 2005;14:77–8.
  9. Davies HTO, Lampel J. Trust in performance indicators. Quality in Health Care. 1998;7:159–62.
  10. NHS in England. Performance Indicators for Acute and Specialist Trusts. London: NHS; Available online. URL: http://www.nhs.uk/england/aboutTheNHS/starRatings/acuteSpecialPI.cmsx Accessed September 2006.
  11. Hughes G. The four hour target; problems ahead. Emerg Med J. 2006;23:2.
  12. Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355:375–84.
  13. Panorama. Pay rises blamed for NHS deficits. London: BBC News; January 26, 2006. http://news.bbc.co.uk/1/programmes/panorama/default.stm# Accessed September 2006.
  14. Ministry of Health. Primary Health Care Strategy. Wellington: MOH. Available online. URL: http://www.moh.govt.nz/primaryhealthcare Accessed September 2006.
  15. District Health Boards New Zealand [website] Wellington: DHBNZ. URL: http://www.dhbnz.org.nz Accessed September 2006.
  16. Goddard M, Mannion R, Smith P. The performance framework: taking account of economic behaviour. In: P Smith (ed). Reforming Markets in Health Care: an Economic Perspective. Buckingham: Open University Press; 2000, p138–61.
  17. Smith P. The unintended consequences of publishing performance data in the public sector. International Journal of Public Administration. 1995;18:277–310.
  18. Davies HTO, Marshall MN. Public disclosure of performance data: does the public get what the public wants? Lancet. 1999;353:1639–40.
  19. Davies H, Washington A, Bindman A. Health care report cards: implications for vulnerable patient groups and the organisations providing them care. J Health Polit Policy Law. 2002;27:379–99.
  20. Sheldon TA. The healthcare quality measurement industry: time to slow the juggernaut? Qual Saf Health Care. 2005;14:3–4.
  21. Marshall MN, Romano PS, Davies HTO. How do we maximise the impact of public reporting of quality of care? Int J Qual Health Care. 2004;16(Suppl 1):57–63.
  22. Davies HTO. Performance management using health outcomes: in search of instrumentality. Journal of Evaluation in Clinical Practice. 1998;4:359–62.
  23. Berwick DM, James B, Coyle MJ. Connections between quality measurement and improvement. Med Care. 2003;41(1 Suppl):I30–8.
  24. Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998;32:993–9.
  25. Chassin MR, Hannan EL, DeBuono BA. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med. 1996;334:394–8.
  26. Omoigui NA, Millar DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996;93:27–33.
  27. Parry GJ, Gould CR, McCabe CJ, Tarnow-Mordi WO. Annual league tables of mortality in neonatal intensive care units: longitudinal study. BMJ. 1998;316:1932–5.
  28. Johnston M. Hospitals tied up in red tape, says chief. Auckland: New Zealand Herald; June 30, 2006. Available online. URL: http://www.nzherald.co.nz/ Accessed September 2006.
     