Journal of the New Zealand Medical Association, 08-September-2006, Vol 119 No 1241
What a performance
In their article Quality improvement in New Zealand healthcare. Part 4: achieving effective care through clinical indicators (http://www.nzma.org.nz/journal/119-1240/2131) Buchanan et al promise to tell us about performance indicators, key performance indicators (KPIs), and clinical indicators, defining what they are, how they differ, and how they are used. The authors also mention “continuous quality improvement,” or CQI, a term so fraught that it would be better consigned to oblivion. So how far do we get with the others? KPIs get no further mention in the article, so we can let that one go.
The authors define performance indicators not in terms of what they are, but of what they do. We learn that they can be either summative mechanisms for external accountability or formative mechanisms for internal quality improvement. These mechanisms are mutually antagonistic, since they lead to fights over money, with CQI the helpless bystander.
Clinical indicators, we learn, are a “subset of performance indicators...variously defined...an objective measure of either the process or the outcome of patients’ care in quantitative terms...a powerful means of improving the effectiveness of patient care.”
Sounds nice, but it is not as easy as all that. “There are different objectives for clinical indicators depending on who is using [them] and whether the assessment is intended to be summative or formative.” That is to say, CQI falls off when the money dries up, and the doctors have a clear obligation not only to keep spending but also to show that the money does some good.
As I see it, there is no way that clinical indicators are going to be of any help at all if they mean different things to different people. How can they be “objective” when they are “variously defined...do not measure quality directly”, and are used by different people with different objectives? We have a semantic problem here.
What is significant is that it was an initiative launched 20 years ago in the United States that got the summative/formative debate going. The health funds wanted to know what their money was being spent on, a question that has never been asked in this country.
The resolution of the matter lies not in a lot of useless data about performance indicators, KPIs, and clinical indicators. A full investigation of all third-party funding is long overdue. No country has either fully socialised or fully privatised its health services. We have chosen to lean well towards the socialised end of the spectrum, and the collapse of the waiting lists now tells its own story.
Roger M Ridley-Smith
By way of explanation for the lack of detail on performance indicators in our article (Part 4 in the Series): we had initially intended to discuss performance indicators and clinical indicators in the same article, but the topics proved too complex for that, so measurement for performance monitoring and control is described in more detail in the article in this edition (Part 5 in the Series).
Performance indicators are numerical measures of different aspects of organisational performance, and a key performance indicator (KPI) simply reflects the concept that some aspects of performance that need to be maintained or improved are more important than others. When a range of these important aspects is selected and suitable measures chosen, a set of KPIs exists. The selection of KPIs is subjective and depends on the setting and circumstances. There are no “automatic” KPIs.1
In our article we state that clinical indicators “are essentially an objective measure of either the process or outcome of patient care in quantitative terms.” Provided that the numerator and denominator of the indicator are clearly defined, and the data are collected and analysed properly, the measurement is “objective.” Like any clinical measurement, the result must then be interpreted by someone with the requisite knowledge and skill who understands the clinical context.
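As a purely illustrative sketch (the clinical scenario and the figures here are hypothetical, not drawn from the article or the Series), a process indicator of this kind might take the form:

\[
\text{Indicator} \;=\; \frac{\text{number of eligible patients who received the specified care}}{\text{total number of eligible patients}} \times 100\%
\]

So, for instance, if 180 of 200 eligible patients received the specified care, the indicator value is 90%. The arithmetic is objective once the numerator and denominator are defined; whether 90% represents acceptable care remains a matter of informed clinical interpretation.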
Dr Ridley-Smith has identified a semantic problem and asks how clinical indicators can be “objective” when they “have been variously defined,” “do not measure quality directly,” and “are used by different people with different objectives.” As indicated above, the numerical result for a clinical indicator is obtained by objective measurement, but there is a subjective element in the interpretation. To maximise the objectivity of measurement and the consistency of interpretation of clinical indicators, each indicator must be carefully selected for the purpose, be very clearly defined, and be accepted, understood, and owned by the clinicians and managers who use it. One of the purposes of our article was to explain the attributes of clinical indicators and to raise points for consideration in the selection of suitable indicators.
With regard to the tension that may arise when there are different objectives for clinical indicators, depending on who is using them and for what purpose, Dr Ridley-Smith is perceptive in his comment that “...CQI falls off when the money dries up...” The problem is, however, often one of management rather than funding, and forsaking efforts to improve clinical indicators and the usefulness of the data derived from them is likely to hinder rather than help the resolution of such difficulties.
We have no particular quarrel with Dr Ridley-Smith’s plea for an investigation of third-party funding, but believe that it is a separate issue from the assessment of quality and effectiveness of care by means of clinical indicators. Such assessment is important regardless of the arrangements for funding.
Effective Practice Informatics and Quality (EPIQ)
School of Population Health, Faculty of Medicine & Health Sciences