In the 1980s, many OECD countries reformed the management of their publicly funded services under New Public Management.[[1,2]] New Public Management aimed to introduce best business practices from for-profit organisations into the non-profit sector. Performance and output were expected to improve through reductions in hierarchy, more hands-on and entrepreneurial management, the application of private sector financial instruments, increased customer orientation, the introduction of managerial expertise and competition, and the application of explicit standards and measures of performance.

Many features of New Public Management are incorporated in the organisation of New Zealand’s 20 publicly funded district health boards (DHBs). A central component is the measurement of healthcare performance. The Crown Entities Act 2004 requires each DHB to provide statements of intent and annual reports on the performance of the hospital and its related services. Since July 2011 each DHB has also been required to provide an annual plan[[3]] that includes performance data.

Recently, however, questions have been raised about DHB performance management. In 2016, for example, the New Zealand Ombudsman called for improved reporting of quality-of-care measures,[[4]] yet there remains uncertainty about how performance management contributes to healthcare efficiency and effectiveness.[[5]] Suggestions have been made as to what good measurement requires,[[6]] but there has been little research into how closely healthcare organisations follow these recommendations.

Using publicly reported performance measures, this audit assesses whether data from a single DHB demonstrate continuity, patient-centredness, accuracy, effectiveness and clinical relevance.

Method

Statements of intent and annual plans for the DHB were reviewed for the 11 financial years 2010/11 to 2020/21. In most years, the non-financial measures reported in the statements of intent were identical to those in the annual plans; where there was inconsistency, the measure recorded in the statement of intent was analysed. The four-year period 2014–2018 was covered by a single statement of intent, so all measures for those years were drawn from the annual plans. Measures published in the reports were usually accompanied by a short description, a baseline value and then one or more expected annual targets or ranges.

Continuity was evaluated by the number of times a directly comparable measure was repeated in subsequent years. In some cases, one measure could be calculated from other measures, but this was not considered continuous reporting because such derived values are less accessible to readers.

Patient-centredness was assessed by the balance of outcome to process measures. Process measures with a strong evidence-based connection to beneficial patient outcomes (eg, childhood immunisation percentages or cervical screening rates) and measures of patient experience or quality of life were re-classed as outcome measures. Projected service-load figures were classed as process values. Any reference to patient involvement in the choice of measures was also classed as evidence of patient-centredness.

Accuracy, effectiveness and clinical relevance were assessed using content analysis software (QDA Miner 6.0, Provalis Research, Montreal) at a theme level of analysis. A deductive approach was used, with predefined categories, codes and coding rules; codes and search terms for each category are shown in Table 1. A measure was classed as accurate if it was accompanied by supporting references to validity, reliability or generalisability.[[7]] Effectiveness was indicated by reference to a benchmark or standard; a counterbalancing financial, time or opportunity measure; or alignment with a Ministry of Health (MoH)[[8]] or Health Quality and Safety Commission (HQSC)[[9]] measure.
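QDA Miner is interactive software, but the deductive keyword coding it was used for can be sketched programmatically. The sketch below is illustrative only and assumes a simplified coding scheme; the category terms shown are hypothetical stand-ins, not the actual Table 1 search terms:

```python
# A minimal sketch of deductive keyword coding against predefined categories.
# The terms below are hypothetical examples, not the actual Table 1 entries.
CODING_SCHEME = {
    "accuracy": ["valid", "reliab", "generalis", "data source"],
    "effectiveness": ["benchmark", "standard", "registry", "readmission"],
    "clinical_relevance": ["action", "expectation"],
}

def code_measure(description: str) -> set[str]:
    """Return every category whose search terms appear in the description."""
    text = description.lower()
    return {
        category
        for category, terms in CODING_SCHEME.items()
        if any(term in text for term in terms)
    }

# Example: a measure referencing a published standard is coded to effectiveness
print(code_measure("Newborn hearing screening coverage against the national standard"))
# {'effectiveness'}
```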

The presence of an action statement dependent on the result was deemed to support clinical relevance, as targets had been described by the DHB as “expectations”. Trends in the direction of the target were also classed as clinically relevant. These were assessed by transforming non-denominator measures repeated four or more times into percentages of their maximum achieved values; the averages of these percentages were then graphed. Separate graphs were prepared for measures with expected positive and negative trends, as these may have been susceptible to different biases. Clinical relevance was also assessed by how often actual values agreed with the expected value or range for that year. Only expected values set two or more years in advance were included, as the one-year expected value had often already become an actual value by the time the statement of intent or annual plan was finalised. The most recent actual values used in the 2020/21 annual plan were from the 2018/19 reporting period.
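As an illustration of the trend transformation described above, a minimal sketch (with invented yearly values; the actual measures and figures are not reproduced here) might look like this:

```python
import pandas as pd

# Invented yearly values for two non-denominator measures repeated >= 4 times.
series = {
    "measure_a": {2016: 4100, 2017: 4350, 2018: 4500, 2019: 4800},
    "measure_b": {2016: 620, 2017: 640, 2018: 600, 2019: 580},
}

df = pd.DataFrame(series)                 # rows = financial years, columns = measures
pct_of_max = df / df.max() * 100          # each measure as a % of its own maximum
yearly_average = pct_of_max.mean(axis=1)  # averaged per year, then graphed
# Measures expected to rise and measures expected to fall would be
# averaged and plotted separately, as in Figures 2 and 3.
print(yearly_average.round(1))
```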

Categorical variables were analysed using the chi-squared test, and linear regression was used to assess the significance of trends over time. A p-value of ≤0.05 was considered statistically significant.
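For example, the outcome-versus-process comparison reported in the Results (14 of 42 vs 29 of 60 measures achieving 50% or more of expected values) can be reproduced with standard tools. Whether the original analysis applied a continuity correction is not stated, so `correction=False` below is an assumption chosen to match the reported p-value; the regression inputs are invented for illustration:

```python
from scipy.stats import chi2_contingency, linregress

# 2x2 table from the Results: outcome measures (14 achieved, 28 not)
# vs process measures (29 achieved, 31 not). correction=False is an
# assumption; it yields p ~= 0.13 as reported.
chi2, p, dof, expected = chi2_contingency([[14, 28], [29, 31]], correction=False)
print(f"chi-squared p = {p:.2f}")

# Linear regression of averaged yearly percentages on financial year
# (values invented for illustration only).
fit = linregress([2016, 2017, 2018, 2019], [91.2, 93.5, 95.1, 98.0])
print(f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3f}")
```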

Ethics committee approval was not sought.

Results

Between 2010/11 and 2020/21, the DHB published 731 distinct performance measures:

  • Prevention services: 229
  • Early detection and management: 208
  • Intensive assessment and treatment: 160
  • Rehabilitation and support: 134  

Continuity

Three hundred and forty-nine measures (48%) were not repeated, 122 (17%) were repeated once and 102 (14%) repeated four or more times (Figure 1).

Figure 1: Numbers of measures repeated in subsequent years.

Patient-centredness

Five hundred and thirty-two of the 731 measures (72.7%) were assessed as processes and 199 (27.2%) as outcomes. Of the 160 measures where a target value or range was achieved, there were similar proportions of processes (73.1%) and outcomes (26.9%). Measures repeated four or more times were more likely to be outcomes than measures repeated less frequently (21.1% vs 12.7%, p<0.001). MoH and HQSC measures were more likely to be outcomes than non-MoH or non-HQSC measures (59.5% vs 11.7%, p<0.001).

Accuracy

Forty-three measures had one or more accompanying references supporting accuracy, such as a reference to a data source or a comment on data-capture methodology. There were no references to the application of tests of statistical significance.

Effectiveness

Through content analysis, five measures referenced a standard, benchmark or national registry; examples included a standard for newborn hearing screening and an MoH dataset. A further 172 corresponded with the MoH’s 2020 performance measures (28 of them identically) and four with the HQSC inpatient experience survey. There were no references to specific counterbalancing costs, but aggregated financial costs were presented in the annual financial performance measures. There were 14 measures of counterbalancing non-financial costs, of which 13 were readmission rates.

Clinical relevance

One or more target ranges or goals were achieved in 160 (21.9%) of the 731 measures. There were 1,025 data pairs in which an actual value was available for comparison with an expected value or range; the expected value or range was achieved in 329 data pairs (32.1%). Among measures repeated four or more times, MoH and HQSC measures were no more likely to be achieved than non-MoH or non-HQSC measures (p=0.13). Outcome measures were also no more likely to be achieved than process measures: 14 of 42 outcome measures had 50% or more of expected values achieved, compared with 29 of 60 process measures (p=0.13). Public health measures were less likely to be achieved than non-public health measures (21.7% vs 48.1%, p=0.024). There was a non-significant trend towards increasing values in measures expected to increase (p=0.19) (Figure 2) but a significant decrease in values in measures expected to decline (R²=0.47, p=0.043) (Figure 3). No measure referenced a specific action being dependent on a measurement result. Seventy-five measures referred to a general action, such as a DHB smoking action plan or an equity programme.

Figure 2: Time trend of measures expected to increase (by financial year).

Figure 3: Time trend of measures expected to decrease (by financial year).

Table 1: Codes and search terms used in content analysis.

Discussion

This survey of one healthcare organisation’s publicly available performance measures showed that 48% were not followed up over time and that, where comparison was possible, only 21.9% of measures (32.1% of all data pairs) achieved the expected goal or range. There was little supporting reference to patient-centredness, accuracy, effectiveness or clinical relevance. Data verification methods and counterbalancing measures were infrequently reported, and there was no reporting of tests of significance. Most performance measures were process measures without clear links to outcomes. Few measures had a benchmark, published standard or historical series for setting expected goals, although the 172 measures corresponding to MoH performance measures and the four HQSC patient experience measures allowed comparison with national data. Over the 11 years reviewed, there was no significant increase in values expected to rise but a significant reduction in values expected to decrease.

There do not appear to be other comparable audits of a single healthcare organisation in the literature. Targets and measures used by the UK’s National Health Service (NHS) have been reported as improving some outcomes,[[10]] but this has not been confirmed in a New Zealand setting.[[11,12]] Improved outcomes in other settings might be explained by clinician- and patient-co-designed indicators acting on reputation.[[13]]

Multiple concerns have been raised about the use of performance measures in healthcare. In the NHS there are examples of fixation on the target rather than the underlying issue; devaluation of unmeasured performance; short-term focus; difficulty dealing with rapidly changing environments; preference for quantitative over qualitative evaluation; inequity caused by over- and under-rewarding; oversimplification; acceptance of mediocrity; self-perpetuation of excellence through the attraction of funding and staff; data misrepresentation; gaming; incorrect deductions; loss of trust among patients and staff; and the misuse of data by local and national healthcare governance bodies.[[14–17]]

Others have noted difficulties in comparing data from different sources and in attributing differences.[[18]] Measurement oversight may be distributed, leaving no clear line of responsibility. Time and money may be wasted if measures are not effective.[[19]] Confusion may arise from lack of agreement between measures,[[6]] and too much reliance may be placed on process measures and not enough on outcomes. Furthermore, clinical priorities may not even be suitable subjects for targets.[[20]]

Criteria have been suggested to improve performance measures. Measures should centre on the patient,[[21,22]] include quality-of-life measures and patient-reported outcomes, encompass a variety of care types,[[23]] be clinically relevant and credible,[[4]] balance standardisation with variety[[21]] and address questions contributing to informed consent.[[24]] Measures should also address effective team working, the voice and influence of doctors, compassionate leadership,[[25]] allocative efficiency and aspects of service orientation such as availability, affordability, approachability and acceptability.[[26]] There should be control groups, costs should be included,[[23]] the data should be accurate, appropriate tests of significance should be applied and measures should be publicly reported.[[6]] Results should be timely or even real-time,[[27]] be systematically analysed[[21,28]] and be used to support local learning and continuous improvement.[[11,15]]

A strength of this study is that it has been able to follow goals and repeated measures systematically over several years in a defined, consistent environment. Despite some changes in management over the study period, there were few infrastructure or care-delivery changes to confound the data. The organisation’s planning teams had access to the data and also the means to apply the results to healthcare strategy.

A limitation of this study is that it examined only publicly available documents and so may have missed more robust data in documents available only to the DHB’s strategic planning department or the MoH. References to data accuracy, patient involvement, effectiveness or clinical relevance may have been omitted for reasons of space, although with documents of up to 175 pages, space may not have been an issue. Omission may also have reflected a desire to maintain public readability, despite evidence that the public’s response to performance measurements[[6]] may matter less than the effect on the organisation’s reputation.[[13]] Although a conservative approach was taken to classification, and although the deductive approach might be expected to increase validity, classification remained subjective and would likely be improved by independent coders. The study was of a single DHB, so its findings may not be generalisable to other DHBs.

These results suggest that healthcare organisations may have difficulty applying what has been learnt about performance measurement. The difficulty of measuring data continuously may be due to uncertainty about the value of the measures, the cost of repeated measurement or the scarcity of clinically oriented analytics staff. Healthcare is a complex adaptive system[[21]] that requires whole-of-healthcare measurement[[29]] and is subject to competing interests from multiple stakeholders. The failure to achieve expected targets may be due to the wrong choice of measures (not patient focused, not clinically relevant or not proven to enhance patient outcomes) or to a failure to analyse the findings appropriately and apply them to healthcare improvement processes. Improved performance measurement will be necessary in any strategy to improve healthcare productivity[[30]] and to move healthcare organisations from a product-dominant logic to a service-dominant system.[[31]]

Recommendations

Statements of intent and annual plans are important documents for public accountability. The vast range of services and transactions in a DHB cannot be fully captured, so accountability reporting should focus on headline patient-reported outcome measures prepared in partnership with clinicians and management. Performance measures do not only measure benefits and harms; they may cause harm themselves. They therefore need to be specific, measurable, relevant, time-bound and evidence-based, and also fully evaluated to demonstrate benefit against a rubric that includes trustworthiness, patient-centredness and effectiveness. They should be measured consistently until formal review indicates they are no longer useful. Results should be visibly linked to specific actions in the statements of intent and annual plans. There should be a balance of service, quality and cost measures. Healthcare systems are rich in data but poor in critical analysis; therefore, until further evidence is gained on specific measures, consideration should be given to discontinuing some measures to avoid the harms of information overload or incorrect conclusions. Specific performance measures using the same appropriateness criteria can be used for individual or shared services.

Further research should examine barriers to the involvement of patients and clinicians in the initiation, management and analysis of performance measures. There should also be study of how to improve linkages between measures and actions, and continued research into defining which measures directly improve outcomes.

Summary

Abstract

Aim

Performance measurement is central to healthcare management in many countries. The aim of this study was to determine whether performance measurement in a New Zealand healthcare organisation met a range of criteria supported by healthcare management literature.

Method

Performance expectations published in statements of intent and annual plans from an 11-year period were analysed for evidence of continuity, accuracy, effectiveness, patient centredness and clinical relevance.

Results

A total of 731 distinct performance measures were identified; 48% were measured only once. Of those where comparison was possible, 21.9% met at least one expected target or range. In the published reports there was limited reference to data verification methods, tests of significance, prospective linkage to actions, counterbalancing measures, application of benchmarks or standards, or patient prioritisation of measures.

Conclusion

These findings suggest that healthcare organisations do not find performance measurement easy. This may be due to the wrong choice of measures, inappropriate targets, incomplete analyses or difficulty in linking measurement results to actions.

Author Information

Colin F Thompson: Medical Advisor, Acute and Elective Specialist Services, MidCentral District Health Board, Palmerston North.

Acknowledgements

Correspondence

Diabetes and Endocrinology Service, Palmerston North Hospital, 50 Ruahine Street, Private Bag 11036, Palmerston North 4442, 06 3569169 extension 8823

Correspondence Email

Colin.thompson@midcentraldhb.govt.nz

Competing Interests

Dr Thompson reports having been party to discussions with management on improving DHB information quality.

1) Hood C. The "New Public Management" in the 1980s: Variations on a theme. Accounting, Organizations and Society. 1995;20:93-109.

2) McLaughlin K, Osborne S, Ferlie E. New Public Management. Current trends and future prospects. London and New York: Routledge, 2002.

3) New Zealand Government [Internet]. New Zealand Public Health and Disability Act 2000 [cited 2019 Feb 5].  Available from: http://www.legislation.govt.nz/act/public/2000/0091/latest/DLM80051.html

4) Office of the Ombudsman [Internet]. Request for surgical complications data [cited 2021 Apr 23]. Available from: https://www.ombudsman.parliament.nz/resources/request-surgical-complications-data

5) Andrews R, Beynon MJ, McDermott A. Configurations of new public management reforms and the efficiency, effectiveness and equity of public healthcare systems: a fuzzy-set qualitative analysis. Public Management Review. 2019;21:1236-60.

6) Shuker C, Bohm G, Hamblin R, et al. Progress in public reporting in New Zealand since the Ombudsman's reporting, and an invitation. N Z Med J. 2017;130(1457):11-22.

7) Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4(3):324-7.

8) Ministry of Health [Internet]. DHB Non-financial Monitoring Framework and Performance Measures 2021/22 [cited 2021 Apr 6].  Available from: https://nsfl.health.govt.nz/system/files/documents/pages/dhb_perf_measures_2122_april_update.docx

9) Health Quality and Safety Commission [Internet]. Adult inpatient experience [cited 2021 Apr 9]. Available from: https://www.hqsc.govt.nz/our-programmes/health-quality-evaluation/projects/patient-experience/adult-inpatient-experience/

10) Mays N. Use of Targets to Improve Health System Performance: English NHS Experience and Implications for New Zealand. The Treasury [cited 2021 Apr 23]. Available from: https://www.treasury.govt.nz/sites/default/files/2007-10/twp06-06.pdf

11) Tenbensel T, Jones P, Chalmers LM, Ameratunga S, Carswell P. Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations? Int J Health Policy Manag. 2020;9(4):152-62.

12) Lines LM. Games People Play: Lessons on Performance Measure Gaming from New Zealand Comment on "Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations?". Int J Health Policy Manag. 2021;10(4):224-7.

13) Contandriopoulos D, Champagne F, Denis JL. The multiple causal pathways between performance measures' use and effects. Med Care Res Rev. 2014;71(1):3-20.

14) Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J. 2012;42(5):569-74.

15) Alderwick H, Raleigh V. Yet more performance ratings for the NHS. BMJ. 2017;358:j3836.

16) Hannay E. We don't have more time to wait to measure how well our healthcare system is doing. N Z Med J. 2019;132(1493):77-8.

17) Trivedy M. If I were minister for health, I would … review the four-hour waiting time in the emergency department. J R Soc Med. 2021;114(4):218-21.

18) Appleby J. The NHS in Wales: faring worse than the rest of the UK? BMJ. 2015;350:h1750.

19) Edwards N. Burdensome regulation of the NHS. BMJ. 2016;353:i3414.

20) Nuffield Trust. Rating providers for quality: a policy worth pursuing? London: Nuffield Trust; 2013:16.

21) Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014.

22) Kerr A, Shuker C, Devlin G. Transparency in the year of COVID-19 means tracking and publishing performance in the whole health system: progress on the public reporting of acute coronary syndrome data in New Zealand. N Z Med J. 2020;133(1520):113-9.

23) McShane M, Mitchell E. Person centred coordinated care: where does the QOF point us? BMJ. 2015;350:h2540.

24) Bolsin SN, Colson M. Publishing performance data is an ethical obligation in all specialties. BMJ. 2014;349:g6030.

25) West M, Coia D. Caring for doctors. Caring for patients. How to transform UK healthcare environments to support doctors and medical students to care for patients. [cited 2019 Apr 23]. Available from: https://www.gmc-uk.org/-/media/documents/caring-for-doctors-caring-for-patients_pdf-80706341.pdf

26) Jahangir Y, Neiterman E, Janes CR, Meyer SB. Healthcare access, quality of care and efficiency as healthcare performance measure: A Canadian health service view. J Health Soc Sci. 2020;5(3):309-16.

27) Oliver D. Reducing delays in hospitals. BMJ. 2016;354:i5125.

28) Richards T. Tell us how it was for you. BMJ. 2013;347:f6872.

29) Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-81.

30) Dixon J, Street A, Allwood D. Productivity in the NHS: why it matters and what to do next. BMJ. 2018;363:k4301.

31) Batalden P. Getting more health from healthcare: quality improvement must acknowledge patient coproduction - an essay by Paul Batalden. BMJ. 2018;362:k3617.

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

In the 1980s, many OECD countries reformed management of their publicly funded services with New Public Management.[[1,2]] New Public Management aimed to introduce best business practices from for-profit organisations into the non-profit sector. It was expected there would be improvements in performance and output through reductions in hierarchy, increased hands-on and entrepreneurial management, the application of private sector financial instruments, increased customer orientation, the introduction of managerial expertise, competition and the application of explicit standards and measures of performance.

Many features of New Public Management are incorporated in the organisation of New Zealand’s 20 publicly funded district health boards (DHBs). A central component is the measurement of healthcare performance. The Crown Entities Act 2004 requires each DHB to provide statements of intent and annual reports on the performance of the hospital and its related services. Since July 2011 each DHB has also been required to provide an annual plan[[3]] that includes performance data.

But recently questions have been raised about DHB performance management. In 2016, for example, the New Zealand Ombudsman called for improved reporting of quality-of-care measures,[[4]] and yet there remains uncertainty about how performance management contributes to healthcare efficiency and effectiveness.[[5]] Suggestions have been made as to what is required for good measurement,[[6]] but there has been little research into the degree to which healthcare organisations follow these recommendations.

Using publicly reported performance measures, this audit assesses whether data from a single DHB demonstrate continuity, patient centredness, accuracy, effectiveness and clinical relevance.

Method

Statements of intent and annual plans for the DHB were viewed for the 11 financial years 2010/11 to 2020/21. In most years, the non-financial measures reported in the statements of intent were identical to those in the annual plans, but where there was inconsistency, the measure recorded in the statement of intent was analysed. The four-year period 2014–2018 was represented by a single statement of intent. Therefore, all measures for those years were drawn from the annual plans. Measures published in the reports were usually accompanied by a short description, a baseline value then one or more expected annual targets or ranges.

Continuity was evaluated by the number of times a directly comparable measure was repeated in subsequent years. In some cases, one measure could be calculated from other measures, but this was not considered to be continuous reporting due to the reduced accessibility.

Patient-centredness was assessed by the balance of outcome to process measures. Process measures with a strong evidence-based connection to beneficial patient outcomes (eg, childhood immunisation percentages or cervical screening rates) and measures of patient experience or quality of life were re-classed as outcome measures. Projected service load figures were classed as process values. Any reference to patient involvement in measurement choice was classified as patient-centredness.

Accuracy, effectiveness and clinical relevance were assessed using content analysis software (QDA Miner 6.0, Provalis Research, Montreal) at a theme level of analysis. A deductive approach was used with predefined categories, codes and coding rules. Codes and search terms for each category are shown in Table 1. Accuracy of a measure was determined if there were any supporting references to validity, reliability or generalisability.[[7]] Effectiveness was determined by reference to a benchmark, standard, counterbalancing financial, time or opportunity measure, or alignment with a Ministry of Health (MoH)[[8]] or the Health Quality and Safety Commission (HQSC)[[9]] measure.

The presence of an action statement dependant on the result was deemed to support clinical relevance as targets had been described by the DHB as “expectations.” Trends in the direction of the target were also classed as clinically relevant. These were assessed by transforming non-denominator measures repeated four or more times into percentages of the maximum achieved values. The averages of these percentages were then graphed. Separate graphs were prepared for measures with expected positive and negative trends, as these may have been susceptible to separate biases. Clinical relevance was also assessed by how often actual values agreed with the expected value or range for that year. Only expected values of two or more years in advance were included as often the one-year expected value had already become an actual value by the time of finalisation of the statement of intent or annual plan. The most recent actual values used in the 2020–2021 annual plan were from the 2018–19 reporting period.

Categorical variables were analysed using the Chi-squared test, and linear regression was used to assess significance of trends over time. A p-value of ≤0.05 was considered statistically significant.

Ethics committee approval was not sought.

Results

Between 2010/11 and 2020/21, the DHB published 731 distinct performance measures:

  • Prevention services: 229
  • Early detection and management: 208
  • Intensive assessment and treatment: 160
  • Rehabilitation and support: 134  

Continuity

Three hundred and forty-nine measures (48%) were not repeated, 122 (17%) were repeated once and 102 (14%) repeated four or more times (Figure 1).

Figure 1: Numbers of measures repeated in subsequent years.

Patient-centredness

Five hundred and thirty-two of the 731 measures (72.7%) were assessed as processes and 199 (27.2%) as outcomes. Of the 160 measures where a target value or range was achieved, there were similar proportions of processes (73.1%) and outcomes (26.9%). Measures repeated four or more times were more likely to be outcomes than measures repeated less frequently (21.1% vs 12.7%, p<0.001). MoH and HQSC measures were more likely to be outcomes than non-MoH or non-HQSC measures (59.5% vs 11.7%, p<0.001).

Accuracy

Forty-three measures had one or more accompanying references supporting accuracy. Examples were a reference to a data source or a comment on capture methodology. There were no references to the application of statistical significance methods.

Effectiveness

Through content analysis, five measures referenced a standard, benchmark or national registry. Examples included a standard for screening the hearing of newborns and a MoH dataset. A further 172 corresponded (28 identically) with the MoH’s 2020 performance measures and four with the HQSC inpatient experience survey. There were no references to specific counterbalancing costs, but aggregated financial costs were presented in the annual financial performance measures. There were 14 measures of counterbalancing non-financial costs of which 13 were readmission rates.

Clinical relevance

One or more target ranges or goals were achieved in 160 (21.9%) of the 731 measures. There were 1,025 data pairs where an actual value was available for comparison with an expected value or range. The expected value or range was achieved in 329 data pairs (32.1%). Among measures repeated four or more times, MoH and HQSC measures were not more likely to be achieved than non-MoH or non-HQSC measures (p=0.13). Outcome measures were also no more likely to be achieved than process measures. Fourteen of 42 outcome measures had 50% or more of expected values achieved compared with 29 of 60 process measures (p=0.13). Public health measures were less likely to be achieved than non-public health measures (21.7% vs 48.1%, p=0.024). There was a non-significant trend towards increasing values in measures that were expected to increase (p=0.19) (Figure 2) but a significant decrease in values in measures expected to decline (R[[2]]=0.47, p=0.043) (Figure 3). No measures referenced a specific action being dependent on measurement result. Seventy-five measures referred to a general action such as a DHB smoking action plan or an equity programme.

Figure 2: Time trend of measures expected to increase (by financial year).

Figure 3: Time trend of measures expected to decrease (by financial year).

Table 1: Codes and search terms used in content analysis.

Discussion

This survey of one healthcare organisation’s publicly available performance measures showed that 48% were not followed-up over time and that, where comparison was possible, only 21.9% (32.1% of all data pairs) achieved the expected goal or range. There was little supporting reference to patient-centredness, accuracy, effectiveness or clinical relevance. Data verification methods and counterbalancing measures were infrequently reported; there was no reporting of tests of significance. Most performance measures were process measures without clear links to outcomes. Few measures had a benchmark, published standard or historical series for setting expected goals. However, the 172 measures corresponding to MoH performance measures and the four HQSC patient experience measures allowed comparison with national data. During the 11 years that were reviewed, there was no significant increase in values expected to rise but a significant reduction in values expected to decrease.

There do not appear to be other similar audits of a single healthcare organisation in the literature. Targets and measures used by the UK’s National Health Service (NHS) have been reported as improving some outcomes[[10]] but were not confirmed in a New Zealand setting.[[11,12]] Improved outcomes in other settings might be explained by clinician and patient co-designed indicators acting upon reputation.[[13]]

Multiple concerns have been raised about the use of performance measures in healthcare. In the NHS there are examples of fixation on the target rather than the underlying issue, as well as devaluation of unmeasured performance, short-term focus, difficulty dealing with rapidly changing environments, preference for quantitative evaluation over qualitative, inequity caused by over-rewarding and under-rewarding, oversimplification, acceptance of mediocrity, self-perpetuation of excellence by attraction of funding and staff, data misrepresentation, gaming, incorrect deductions, undermining of trust by patients and staff and the misuse of data by local and national healthcare governance bodies.[[14–17]]

Others have noted difficulties comparing data from different sources and in attributing differences.[[18]] Measurement oversight may be distributed, resulting in no clear line of responsibility. Time and money may be wasted if measures are not effective.[[19]] Confusion may occur from lack of agreement between measures,[[6]] and too much reliance may be put on process measures without enough on outcomes. Furthermore, clinical priorities may not even be suitable subjects for targets.[[20]]

Criteria have been suggested to improve performance measures. Measures should centre on the patient,[[21,22]] include quality-of-life measures and patient-reported outcomes, encompass a variety of care types,[[23]] be clinically relevant and clinically credible,[[4]] balance standardisation with variety[[21]] and address questions contributing to informed consent.[[24]] Measures should also address effective team working, the voice and influence of doctors, compassionate leadership,[[25]] allocative efficiency and aspects of service orientation such availability, affordability, approachability and acceptability.[[26]] There should be control groups, costs should be included,[[23]] the data accurate, appropriate tests of significance applied and measures publicly reported.[[6]] Results should be timely or even real-time,[[27]] be systematically analysed[[21,28]] and used to support local learning and continuous improvement.[[11,15]]

A strength of this study is that it has been able to follow goals and repeated measures systematically over several years in a defined, consistent environment. Despite some changes in management over the study period, there were few infrastructure or care-delivery changes to confound the data. The organisation’s planning teams had access to the data and also the means to apply the results to healthcare strategy.

A limitation to this study is that it only studied publicly available documents and so may have missed more robust data in documents only available to the DHB’s strategic planning department or MoH. References to data accuracy, patient involvement, effectiveness or clinical relevance may have been omitted due to space constraints, although with documents being up to 175 pages in length, size may not have been an issue. Omission may have also been due to a desire to maintain public readability despite evidence that the public’s response to performance measurements[[6]] may be less important than the effect on the organisation’s reputation.[[13]] Although a conservative approach was taken in classification, and since validity might be expected to increase with the deductive approach, classification was still subjective and would be expected to be improved with independent coders. The study was of a single DHB so may not be generalisable to other DHBs.

These results suggest that healthcare organisations may have difficulty in applying what has been learnt about performance measurement. The difficulties in continually measuring data may be due to uncertainty about the value of the measures, the cost of repeated measurement or the scarcity of clinically oriented analytics staff. Healthcare is a complex adaptive system[[21]] requiring whole-of-healthcare measurement[[29]] and subject to competing interests from multiple stakeholders. The failure to achieve expected targets may be due to the wrong choice of measures (not patient focused, clinically relevant or proven to enhance patient outcomes) or due to failure to appropriately analyse and apply the findings to healthcare improvement processes. Improved performance measurement will be necessary in any strategy to improve healthcare productivity[[30]] and to move healthcare organisations from a product-dominant logic to a service-dominant system.[[31]]

Recommendations

Statements of intent and annual plans are important documents for public accountability. The vast range of services and transactions in a DHB cannot be fully captured, so accountability reporting should focus on headline patient-reported outcome measures prepared in partnership with clinicians and management. Because performance measures not only measure benefits and harms, they may cause harm themselves. They therefore need to be specific, measurable, relevant, time-bound and evidence-based, but also fully evaluated to demonstrate benefit against a rubric that includes trustworthiness, patient-centredness and effectiveness. They should be measured consistently until formal review indicates they are no longer useful. Results should be visibly linked to specific actions in the statements of intent and annual plans. There should be a balance of service, quality and cost measures. Healthcare systems are rich in data but poor in critical analysis. Therefore, until further evidence is gained on specific measures, consideration should be made of discontinuation of some measures to avoid harms related to information overload or incorrect conclusions. Specific performance measures using the same appropriateness criteria can be used for individual services or shared services.

Further research should look at barriers to involvement of patients and clinicians in the initiation, management and analysis of performance measures. There should be study on how to improve linkages between measures and actions and continued research into defining which measures directly improve outcomes.

Summary

Abstract

Aim

Performance measurement is central to healthcare management in many countries. The aim of this study was to determine whether performance measurement in a New Zealand healthcare organisation met a range of criteria supported by healthcare management literature.

Method

Performance expectations published in statements of intent and annual plans from an 11-year period were analysed for evidence of continuity, accuracy, effectiveness, patient centredness and clinical relevance.

Results

731 distinct performance measurements were identified. 48% were measured only once. Of those where comparison was possible, 21.9% met at least one expected target or range. In published reports there was limited reference to data verification methods, tests of significance, prospective linkage to actions, counterbalancing measures, application of benchmarks or standards, or patient measure prioritisation.

Conclusion

These findings suggest that healthcare organisations do not find performance measurement easy. This may be due to the wrong choice of measures, inappropriate targets, incomplete analyses or difficulty in linking measurement results to actions.

Author Information

Colin F Thompson: Medical Advisor, Acute and Elective Specialist Services, MidCentral District Health Board, Palmerston North.

Acknowledgements

Correspondence

Diabetes and Endocrinology Service, Palmerston North Hospital, 50 Ruahine Street, Private Bag 11036, Palmerston North 4442, 06 3569169 extension 8823

Correspondence Email

Colin.thompson@midcentraldhb.govt.nz

Competing Interests

Dr Thompson reports they have been a party to discussions with management on improving DHB information quality.

1) Hood, C. The "New Public Management" in the 1980s: Variations on a theme. Accounting Organizations and Society. 1995;20:93-109.

2) McLaughlin K, Osborne S, Ferlie E. New Public Management. Current trends and future prospects. London and New York: Routledge, 2002.

3) New Zealand Government [Internet]. New Zealand Public Health and Disability Act 2000 [cited 2019 Feb 5].  Available from: http://www.legislation.govt.nz/act/public/2000/0091/latest/DLM80051.html

4) Office of the Ombudsman [Internet]. Request for surgical complications data [cited 2021 Apr 23]. Available from: https://www.ombudsman.parliament.nz/resources/request-surgical-complications-data

5) Andrews R, Beynon MJ, McDermott A. Configurations of new public management reforms and the efficiency, effectiveness and equity of public healthcare systems: a fuzzy-set qualitative analysis. Public Management Review. 2019;21:1236-60.

6) Shuker C, Bohm G, Hamblin R, et al. Progress in public reporting in New Zealand since the Ombudsman's reporting, and an invitation. NZ Med J. 2017;130(1457):11-22.

7) Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4(3):324-7.

8) Ministry of Health [Internet]. DHB Non-financial Monitoring Framework and Performance Measures 2021/22 [cited 2021 Apr 6].  Available from: https://nsfl.health.govt.nz/system/files/documents/pages/dhb_perf_measures_2122_april_update.docx

9) Health Quality and Safety Commission [Internet]. Adult inpatient experience [cited 2021 Apr 9]. Available from: https://www.hqsc.govt.nz/our-programmes/health-quality-evaluation/projects/patient-experience/adult-inpatient-experience/

10) Mays N. Use of Targets to Improve Health System Performance: English NHS Experience and Implications for New Zealand. The Treasury [cited 2021 Apr 23]. Available from: https://www.treasury.govt.nz/sites/default/files/2007-10/twp06-06.pdf

11) Tenbensel T, Jones P, Chalmers LM, Ameratunga S, Carswell P. Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations? Int J Health Policy Manag. 2020;9(4):152-62.

12) Lines LM. Games People Play: Lessons on Performance Measure Gaming from New Zealand Comment on "Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations?". Int J Health Policy Manag. 2021;10(4):224-7.

13) Contandriopoulos D, Champagne F, Denis JL. The multiple causal pathways between performance measures' use and effects. Med Care Res Rev. 2014;71(1):3-20.

14) Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J. 2012;42(5):569-74.

15) Alderwick H, Raleigh V. Yet more performance ratings for the NHS. BMJ. 2017; 358:j3836.

16) Hannay E. We don't have more time to wait to measure how well our healthcare system is doing. N Z Med J. 2019;132(1493):77-8.

17) Trivedy M. If I were minister for health, I would … review the four-hour waiting time in the emergency department. J R Soc Med. 2021;114(4):218-21.

18) Appleby J. The NHS in Wales: faring worse than the rest of the UK? BMJ. 2015;350:h1750.

19) Edwards N. Burdensome regulation of the NHS. BMJ. 2016;353:i3414

20) Nuffield Trust. Rating providers for quality: a policy work pursuing? Nuffield Trust. 2013:16.

21) Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014

22) Kerr A, Shuker C, Devlin G. Transparency in the year of COVID-19 means tracking and publishing performance in the whole health system: progress on the public reporting of acute coronary syndrome data in New Zealand. N Z Med J. 2020;133(1520):113-9.

23) McShane M, Mitchell E. Person centred coordinated care: where does the QOF point us? BMJ. 2015;350:h2540.

24) Bolsin SN, Colson M. Publishing performance data is an ethical obligation in all specialties. BMJ. 2014;349:g6030.

25) West M, Coia D. Caring for doctors. Caring for patients. How to transform UK healthcare environments to support doctors and medical students to care for patients. [cited 2019 Apr 23]. Available from: https://www.gmc-uk.org/-/media/documents/caring-for-doctors-caring-for-patients_pdf-80706341.pdf

26) Jahangir Y, Neiterman E, Janes CR, Meyer SB. Healthcare access, quality of care and efficiency as healthcare performance measure: A Canadian health service view. J Health Soc Sci. 2020;5(3):309-16.

27) Oliver D. Reducing delays in hospitals. BMJ. 2016;354:i5125.

28) Richards T. Tell us how it was for you. BMJ. 2013;347:f6872.

29) Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-81.

30) Dixon J, Street A, Allwood D. Productivity in the NHS: why it matters and what to do next. BMJ. 2018;363:k4301.

31) Batalden P. Getting more health from healthcare: quality improvement must acknowledge patient coproduction - an essay by Paul Batalden. BMJ. 2018;362:k3617

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

In the 1980s, many OECD countries reformed management of their publicly funded services with New Public Management.[[1,2]] New Public Management aimed to introduce best business practices from for-profit organisations into the non-profit sector. It was expected there would be improvements in performance and output through reductions in hierarchy, increased hands-on and entrepreneurial management, the application of private sector financial instruments, increased customer orientation, the introduction of managerial expertise, competition and the application of explicit standards and measures of performance.

Many features of New Public Management are incorporated in the organisation of New Zealand’s 20 publicly funded district health boards (DHBs). A central component is the measurement of healthcare performance. The Crown Entities Act 2004 requires each DHB to provide statements of intent and annual reports on the performance of the hospital and its related services. Since July 2011 each DHB has also been required to provide an annual plan[[3]] that includes performance data.

But recently questions have been raised about DHB performance management. In 2016, for example, the New Zealand Ombudsman called for improved reporting of quality-of-care measures,[[4]] and yet there remains uncertainty about how performance management contributes to healthcare efficiency and effectiveness.[[5]] Suggestions have been made as to what is required for good measurement,[[6]] but there has been little research into the degree to which healthcare organisations follow these recommendations.

Using publicly reported performance measures, this audit assesses whether data from a single DHB demonstrate continuity, patient centredness, accuracy, effectiveness and clinical relevance.

Method

Statements of intent and annual plans for the DHB were viewed for the 11 financial years 2010/11 to 2020/21. In most years, the non-financial measures reported in the statements of intent were identical to those in the annual plans, but where there was inconsistency, the measure recorded in the statement of intent was analysed. The four-year period 2014–2018 was represented by a single statement of intent. Therefore, all measures for those years were drawn from the annual plans. Measures published in the reports were usually accompanied by a short description, a baseline value then one or more expected annual targets or ranges.

Continuity was evaluated by the number of times a directly comparable measure was repeated in subsequent years. In some cases, one measure could be calculated from other measures, but this was not considered to be continuous reporting due to the reduced accessibility.

Patient-centredness was assessed by the balance of outcome to process measures. Process measures with a strong evidence-based connection to beneficial patient outcomes (eg, childhood immunisation percentages or cervical screening rates) and measures of patient experience or quality of life were re-classed as outcome measures. Projected service load figures were classed as process values. Any reference to patient involvement in measurement choice was classified as patient-centredness.

Accuracy, effectiveness and clinical relevance were assessed using content analysis software (QDA Miner 6.0, Provalis Research, Montreal) at a theme level of analysis. A deductive approach was used with predefined categories, codes and coding rules. Codes and search terms for each category are shown in Table 1. Accuracy of a measure was determined if there were any supporting references to validity, reliability or generalisability.[[7]] Effectiveness was determined by reference to a benchmark, standard, counterbalancing financial, time or opportunity measure, or alignment with a Ministry of Health (MoH)[[8]] or the Health Quality and Safety Commission (HQSC)[[9]] measure.

The presence of an action statement dependant on the result was deemed to support clinical relevance as targets had been described by the DHB as “expectations.” Trends in the direction of the target were also classed as clinically relevant. These were assessed by transforming non-denominator measures repeated four or more times into percentages of the maximum achieved values. The averages of these percentages were then graphed. Separate graphs were prepared for measures with expected positive and negative trends, as these may have been susceptible to separate biases. Clinical relevance was also assessed by how often actual values agreed with the expected value or range for that year. Only expected values of two or more years in advance were included as often the one-year expected value had already become an actual value by the time of finalisation of the statement of intent or annual plan. The most recent actual values used in the 2020–2021 annual plan were from the 2018–19 reporting period.

Categorical variables were analysed using the Chi-squared test, and linear regression was used to assess significance of trends over time. A p-value of ≤0.05 was considered statistically significant.

Ethics committee approval was not sought.

Results

Between 2010/11 and 2020/21, the DHB published 731 distinct performance measures:

  • Prevention services: 229
  • Early detection and management: 208
  • Intensive assessment and treatment: 160
  • Rehabilitation and support: 134  

Continuity

Three hundred and forty-nine measures (48%) were not repeated, 122 (17%) were repeated once and 102 (14%) repeated four or more times (Figure 1).

Figure 1: Numbers of measures repeated in subsequent years.

Patient-centredness

Five hundred and thirty-two of the 731 measures (72.7%) were assessed as processes and 199 (27.2%) as outcomes. Of the 160 measures where a target value or range was achieved, there were similar proportions of processes (73.1%) and outcomes (26.9%). Measures repeated four or more times were more likely to be outcomes than measures repeated less frequently (21.1% vs 12.7%, p<0.001). MoH and HQSC measures were more likely to be outcomes than non-MoH or non-HQSC measures (59.5% vs 11.7%, p<0.001).

Accuracy

Forty-three measures had one or more accompanying references supporting accuracy. Examples were a reference to a data source or a comment on capture methodology. There were no references to the application of statistical significance methods.

Effectiveness

Through content analysis, five measures referenced a standard, benchmark or national registry. Examples included a standard for screening the hearing of newborns and a MoH dataset. A further 172 corresponded (28 identically) with the MoH’s 2020 performance measures and four with the HQSC inpatient experience survey. There were no references to specific counterbalancing costs, but aggregated financial costs were presented in the annual financial performance measures. There were 14 measures of counterbalancing non-financial costs of which 13 were readmission rates.

Clinical relevance

One or more target ranges or goals were achieved in 160 (21.9%) of the 731 measures. There were 1,025 data pairs where an actual value was available for comparison with an expected value or range. The expected value or range was achieved in 329 data pairs (32.1%). Among measures repeated four or more times, MoH and HQSC measures were not more likely to be achieved than non-MoH or non-HQSC measures (p=0.13). Outcome measures were also no more likely to be achieved than process measures. Fourteen of 42 outcome measures had 50% or more of expected values achieved compared with 29 of 60 process measures (p=0.13). Public health measures were less likely to be achieved than non-public health measures (21.7% vs 48.1%, p=0.024). There was a non-significant trend towards increasing values in measures that were expected to increase (p=0.19) (Figure 2) but a significant decrease in values in measures expected to decline (R[[2]]=0.47, p=0.043) (Figure 3). No measures referenced a specific action being dependent on measurement result. Seventy-five measures referred to a general action such as a DHB smoking action plan or an equity programme.

Figure 2: Time trend of measures expected to increase (by financial year).

Figure 3: Time trend of measures expected to decrease (by financial year).

Table 1: Codes and search terms used in content analysis.

Discussion

This survey of one healthcare organisation’s publicly available performance measures showed that 48% were not followed-up over time and that, where comparison was possible, only 21.9% (32.1% of all data pairs) achieved the expected goal or range. There was little supporting reference to patient-centredness, accuracy, effectiveness or clinical relevance. Data verification methods and counterbalancing measures were infrequently reported; there was no reporting of tests of significance. Most performance measures were process measures without clear links to outcomes. Few measures had a benchmark, published standard or historical series for setting expected goals. However, the 172 measures corresponding to MoH performance measures and the four HQSC patient experience measures allowed comparison with national data. During the 11 years that were reviewed, there was no significant increase in values expected to rise but a significant reduction in values expected to decrease.

There do not appear to be other similar audits of a single healthcare organisation in the literature. Targets and measures used by the UK’s National Health Service (NHS) have been reported as improving some outcomes[[10]] but were not confirmed in a New Zealand setting.[[11,12]] Improved outcomes in other settings might be explained by clinician and patient co-designed indicators acting upon reputation.[[13]]

Multiple concerns have been raised about the use of performance measures in healthcare. In the NHS there are examples of fixation on the target rather than the underlying issue, as well as devaluation of unmeasured performance, short-term focus, difficulty dealing with rapidly changing environments, preference for quantitative evaluation over qualitative, inequity caused by over-rewarding and under-rewarding, oversimplification, acceptance of mediocrity, self-perpetuation of excellence by attraction of funding and staff, data misrepresentation, gaming, incorrect deductions, undermining of trust by patients and staff and the misuse of data by local and national healthcare governance bodies.[[14–17]]

Others have noted difficulties comparing data from different sources and in attributing differences.[[18]] Measurement oversight may be distributed, resulting in no clear line of responsibility. Time and money may be wasted if measures are not effective.[[19]] Confusion may occur from lack of agreement between measures,[[6]] and too much reliance may be put on process measures without enough on outcomes. Furthermore, clinical priorities may not even be suitable subjects for targets.[[20]]

Criteria have been suggested to improve performance measures. Measures should centre on the patient,[[21,22]] include quality-of-life measures and patient-reported outcomes, encompass a variety of care types,[[23]] be clinically relevant and clinically credible,[[4]] balance standardisation with variety[[21]] and address questions contributing to informed consent.[[24]] Measures should also address effective team working, the voice and influence of doctors, compassionate leadership,[[25]] allocative efficiency and aspects of service orientation such availability, affordability, approachability and acceptability.[[26]] There should be control groups, costs should be included,[[23]] the data accurate, appropriate tests of significance applied and measures publicly reported.[[6]] Results should be timely or even real-time,[[27]] be systematically analysed[[21,28]] and used to support local learning and continuous improvement.[[11,15]]

A strength of this study is that it has been able to follow goals and repeated measures systematically over several years in a defined, consistent environment. Despite some changes in management over the study period, there were few infrastructure or care-delivery changes to confound the data. The organisation’s planning teams had access to the data and also the means to apply the results to healthcare strategy.

A limitation of this study is that it examined only publicly available documents, so it may have missed more robust data in documents available only to the DHB’s strategic planning department or the MoH. References to data accuracy, patient involvement, effectiveness or clinical relevance may have been omitted for reasons of space, although with documents running to 175 pages, space was unlikely to have been a constraint. Omission may also have reflected a desire to maintain public readability, despite evidence that the public’s response to performance measurements[[6]] may be less important than the effect on the organisation’s reputation.[[13]] Although a conservative approach was taken to classification, and validity might be expected to increase with a deductive approach, classification remained subjective and would be improved by the use of independent coders. Finally, the study covered a single DHB, so its findings may not be generalisable to other DHBs.

These results suggest that healthcare organisations may have difficulty applying what has been learnt about performance measurement. The difficulty of measuring data consistently over time may be due to uncertainty about the value of the measures, the cost of repeated measurement or the scarcity of clinically oriented analytics staff. Healthcare is a complex adaptive system[[21]] requiring whole-of-healthcare measurement[[29]] and subject to competing interests from multiple stakeholders. The failure to achieve expected targets may be due to the wrong choice of measures (not patient focused, clinically relevant or proven to enhance patient outcomes) or to failure to appropriately analyse and apply the findings to healthcare improvement processes. Improved performance measurement will be necessary in any strategy to improve healthcare productivity[[30]] and to move healthcare organisations from a product-dominant logic to a service-dominant system.[[31]]

Recommendations

Statements of intent and annual plans are important documents for public accountability. The vast range of services and transactions in a DHB cannot be fully captured, so accountability reporting should focus on headline patient-reported outcome measures prepared in partnership with clinicians and management. Performance measures not only measure benefits and harms; they may cause harm themselves. They therefore need to be specific, measurable, relevant, time-bound and evidence-based, and also fully evaluated to demonstrate benefit against a rubric that includes trustworthiness, patient-centredness and effectiveness. They should be measured consistently until formal review indicates they are no longer useful. Results should be visibly linked to specific actions in the statements of intent and annual plans. There should be a balance of service, quality and cost measures. Healthcare systems are rich in data but poor in critical analysis; therefore, until further evidence is gained on specific measures, consideration should be given to discontinuing some measures to avoid harms related to information overload or incorrect conclusions. Specific performance measures using the same appropriateness criteria can be used for individual services or shared services.
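As an illustration only, and not a validated instrument, the evaluation rubric recommended above could be operationalised as a pre-adoption checklist; the criterion names follow this section, while the all-or-nothing scoring rule is an assumption.

    # Illustrative sketch only: a pre-adoption checklist for a proposed
    # performance measure. Criteria mirror the recommendations above;
    # the pass/fail scoring rule is an assumption.
    RUBRIC = (
        "specific", "measurable", "relevant", "time-bound",
        "evidence-based", "trustworthy", "patient-centred",
        "effective", "linked to a planned action",
    )

    def appraise(name: str, assessment: dict) -> bool:
        """Recommend adoption only when every criterion is satisfied."""
        unmet = [c for c in RUBRIC if not assessment.get(c, False)]
        if unmet:
            print(f"Defer {name!r}; unmet criteria: {', '.join(unmet)}")
        return not unmet

    # Hypothetical appraisal: one criterion unmet, so the measure is deferred.
    answers = {c: True for c in RUBRIC}
    answers["linked to a planned action"] = False
    appraise("Cervical screening coverage (%)", answers)

In practice, several criteria (trustworthiness, for example) would themselves require supporting evidence rather than a simple yes/no judgement.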

Further research should examine barriers to the involvement of patients and clinicians in the initiation, management and analysis of performance measures. Study is also needed on how to improve the linkage between measures and actions, together with continued research into defining which measures directly improve outcomes.

Summary

Abstract

Aim

Performance measurement is central to healthcare management in many countries. The aim of this study was to determine whether performance measurement in a New Zealand healthcare organisation met a range of criteria supported by healthcare management literature.

Method

Performance expectations published in statements of intent and annual plans from an 11-year period were analysed for evidence of continuity, accuracy, effectiveness, patient centredness and clinical relevance.

Results

A total of 731 distinct performance measures were identified, of which 48% were measured only once. Of those where comparison was possible, 21.9% met at least one expected target or range. In the published reports there was limited reference to data verification methods, tests of significance, prospective linkage to actions, counterbalancing measures, application of benchmarks or standards, or patient involvement in prioritising measures.

Conclusion

These findings suggest that healthcare organisations do not find performance measurement easy. This may be due to the wrong choice of measures, inappropriate targets, incomplete analyses or difficulty in linking measurement results to actions.

Author Information

Colin F Thompson: Medical Advisor, Acute and Elective Specialist Services, MidCentral District Health Board, Palmerston North.

Acknowledgements

Correspondence

Diabetes and Endocrinology Service, Palmerston North Hospital, 50 Ruahine Street, Private Bag 11036, Palmerston North 4442, 06 3569169 extension 8823

Correspondence Email

Colin.thompson@midcentraldhb.govt.nz

Competing Interests

Dr Thompson reports they have been a party to discussions with management on improving DHB information quality.

1) Hood C. The "New Public Management" in the 1980s: variations on a theme. Accounting, Organizations and Society. 1995;20:93-109.

2) McLaughlin K, Osborne S, Ferlie E. New Public Management: current trends and future prospects. London and New York: Routledge; 2002.

3) New Zealand Government [Internet]. New Zealand Public Health and Disability Act 2000 [cited 2019 Feb 5]. Available from: http://www.legislation.govt.nz/act/public/2000/0091/latest/DLM80051.html

4) Office of the Ombudsman [Internet]. Request for surgical complications data [cited 2021 Apr 23]. Available from: https://www.ombudsman.parliament.nz/resources/request-surgical-complications-data

5) Andrews R, Beynon MJ, McDermott A. Configurations of new public management reforms and the efficiency, effectiveness and equity of public healthcare systems: a fuzzy-set qualitative analysis. Public Management Review. 2019;21:1236-60.

6) Shuker C, Bohm G, Hamblin R, et al. Progress in public reporting in New Zealand since the Ombudsman's reporting, and an invitation. N Z Med J. 2017;130(1457):11-22.

7) Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4(3):324-7.

8) Ministry of Health [Internet]. DHB Non-financial Monitoring Framework and Performance Measures 2021/22 [cited 2021 Apr 6]. Available from: https://nsfl.health.govt.nz/system/files/documents/pages/dhb_perf_measures_2122_april_update.docx

9) Health Quality and Safety Commission [Internet]. Adult inpatient experience [cited 2021 Apr 9]. Available from: https://www.hqsc.govt.nz/our-programmes/health-quality-evaluation/projects/patient-experience/adult-inpatient-experience/

10) Mays N. Use of Targets to Improve Health System Performance: English NHS Experience and Implications for New Zealand. The Treasury [cited 2021 Apr 23]. Available from: https://www.treasury.govt.nz/sites/default/files/2007-10/twp06-06.pdf

11) Tenbensel T, Jones P, Chalmers LM, Ameratunga S, Carswell P. Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations? Int J Health Policy Manag. 2020;9(4):152-62.

12) Lines LM. Games People Play: Lessons on Performance Measure Gaming from New Zealand Comment on "Gaming New Zealand's Emergency Department Target: How and Why Did It Vary Over Time and Between Organisations?". Int J Health Policy Manag. 2021;10(4):224-7.

13) Contandriopoulos D, Champagne F, Denis JL. The multiple causal pathways between performance measures' use and effects. Med Care Res Rev. 2014;71(1):3-20.

14) Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J. 2012;42(5):569-74.

15) Alderwick H, Raleigh V. Yet more performance ratings for the NHS. BMJ. 2017;358:j3836.

16) Hannay E. We don't have more time to wait to measure how well our healthcare system is doing. N Z Med J. 2019;132(1493):77-8.

17) Trivedy M. If I were minister for health, I would … review the four-hour waiting time in the emergency department. J R Soc Med. 2021;114(4):218-21.

18) Appleby J. The NHS in Wales: faring worse than the rest of the UK? BMJ. 2015;350:h1750.

19) Edwards N. Burdensome regulation of the NHS. BMJ. 2016;353:i3414.

20) Nuffield Trust. Rating providers for quality: a policy worth pursuing? Nuffield Trust. 2013:16.

21) Braithwaite J. Changing how we think about healthcare improvement. BMJ. 2018;361:k2014.

22) Kerr A, Shuker C, Devlin G. Transparency in the year of COVID-19 means tracking and publishing performance in the whole health system: progress on the public reporting of acute coronary syndrome data in New Zealand. N Z Med J. 2020;133(1520):113-9.

23) McShane M, Mitchell E. Person centred coordinated care: where does the QOF point us? BMJ. 2015;350:h2540.

24) Bolsin SN, Colson M. Publishing performance data is an ethical obligation in all specialties. BMJ. 2014;349:g6030.

25) West M, Coia D. Caring for doctors. Caring for patients. How to transform UK healthcare environments to support doctors and medical students to care for patients. [cited 2019 Apr 23]. Available from: https://www.gmc-uk.org/-/media/documents/caring-for-doctors-caring-for-patients_pdf-80706341.pdf

26) Jahangir Y, Neiterman E, Janes CR, Meyer SB. Healthcare access, quality of care and efficiency as healthcare performance measure: A Canadian health service view. J Health Soc Sci. 2020;5(3):309-16.

27) Oliver D. Reducing delays in hospitals. BMJ. 2016;354:i5125.

28) Richards T. Tell us how it was for you. BMJ. 2013;347:f6872.

29) Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-81.

30) Dixon J, Street A, Allwood D. Productivity in the NHS: why it matters and what to do next. BMJ. 2018;363:k4301.

31) Batalden P. Getting more health from healthcare: quality improvement must acknowledge patient coproduction - an essay by Paul Batalden. BMJ. 2018;362:k3617.
