Whilst in the UK the government has acknowledged that the regular implementation of patient surveys is one of the most important sources for quality improvement in the NHS,1 in New Zealand we lack a uniform, standardised and validated primary care patient survey.

The General Practice Assessment Questionnaire (GPAQ) was originally developed in the USA but was later modified and further developed by the National Primary Care Research and Development Centre at the University of Manchester, UK. After the UK government implemented the incentivised "Quality and Outcomes Framework" in the 2004 GP contract, the GPAQ was introduced as one of two questionnaires that practices and PHOs could use to obtain patient feedback. Two versions of the GPAQ now exist: (a) the "consultation" questionnaire, administered at the practice, and (b) a "postal" version. The survey has recently been endorsed and updated by Cambridge University.2

In New Zealand, many practices and PHOs make use of the "Better Practice Patient Questionnaire" (BPPQ) made available by the Royal New Zealand College of General Practitioners (RNZCGP). The questionnaire is available in Māori, Samoan, Chinese and Korean. Completion of the survey process enables New Zealand GPs to claim 10 MOPS credits.3 The BPPQ asks questions about (a) the practice, (b) the doctor and (c) the staff. It has no specific questions about the practice nurse or about an After Hours service. While it asks patients how they felt about the waiting time at the practice, it does not ask how long they usually have to wait, making it impossible to ascertain what they think is an acceptable time to wait. It also has an unbalanced response set which is skewed in favour of a positive response: the only negative response available to the patient is "poor". Moreover, although it asks GPs to reflect on the feedback received from patients, there is no information on how the GP's performance compares to other GPs in the practice or how the practice compares to other practices within the larger PHO.

Method

Like the GPAQ, the New Zealand General Practice Assessment Questionnaire (NZGPAQ), developed by Health Services Consumer Research Ltd with assistance from ProCare Health, focuses mainly on questions about access, interpersonal aspects of care and continuity of care. The basic layout, sequence and formulation of the questions and response categories in the GPAQ are maintained. Three questions concern After Hours services.4,5 An overall satisfaction question is also included, and three open-ended questions allow the patient to put their opinions in their own words. All questionnaires are individually numbered with a code incorporating the PHO, the practice, the doctor and a sequential number for the patient. This generates a performance measure that tracks ratings over time, by GP, by practice and by PHO. The NZGPAQ is available in Māori, Samoan, Tongan, Chinese and Thai. It has been approved by the RNZCGP for MOPS CQI purposes and can be used in the accreditation process.

The present overview is divided into two sections: an assessment of the participation rate, the representativeness of the sample and the reliability and validity of the questionnaire; and an analysis of the results of the survey data. The overview concludes with a recommendation regarding future requirements.
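The exact format of the identifying code mentioned above is not described in the paper. Purely as an illustration, the sketch below assumes a hypothetical hyphen-separated identifier (PHO, practice, GP, patient sequence number) and shows how ratings carrying such a code could be aggregated by GP, practice or PHO.

```python
# Illustrative only: the real NZGPAQ code format is not published.
# Assume a hypothetical identifier such as "PHO02-PRAC113-GP07-0412"
# (PHO, practice, doctor, sequential patient number).
from collections import defaultdict
from statistics import mean

def parse_code(code: str) -> dict:
    pho, practice, gp, patient = code.split("-")
    return {"pho": pho, "practice": practice, "gp": gp, "patient": patient}

def mean_rating_by(responses: list, level: str) -> dict:
    """Average an overall-satisfaction rating at the 'pho', 'practice' or 'gp' level."""
    groups = defaultdict(list)
    for r in responses:
        groups[parse_code(r["code"])[level]].append(r["overall_satisfaction"])
    return {key: mean(vals) for key, vals in groups.items()}

# Example usage with made-up responses
responses = [
    {"code": "PHO02-PRAC113-GP07-0001", "overall_satisfaction": 5},
    {"code": "PHO02-PRAC113-GP07-0002", "overall_satisfaction": 4},
    {"code": "PHO02-PRAC114-GP09-0001", "overall_satisfaction": 6},
]
print(mean_rating_by(responses, "gp"))        # per-GP mean
print(mean_rating_by(responses, "practice"))  # per-practice mean
```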
Results

Participation rate—Since 2005 the NZGPAQ has been used by 184 medical practices (812 GP participations representing 549 individual GPs) and responses have been recorded from close to 50,000 patients enrolled in five PHOs (see Table 1).

Table 1. Number of practices surveyed over the last 7 years

Year                       2005    2006    2007    2008     2009    2010    2011    Total
Participating practices      63      24      20      55       53      59      55      184
Participating GPs           132      34      56     166      184     124     116      549
Patients surveyed         6,894     411   2,764   8,180   11,445   9,820   9,719   49,233

Note: the Total column counts each practice and each GP once, although many participated in more than one year.

Because of the trailblazing role of ProCare Health, 87% of the data in this database comes from the three ProCare PHOs. Other PHOs presently included in the dataset are South Seas Healthcare, Langimalie Tongan Society, Nelson Bay Primary Health and Waihopai PHO in Invercargill. Although each practice had been instructed to give every patient during a specific time period the opportunity to participate in the survey, there is substantial variance in the number of questionnaires returned. Originally, it was stipulated that 50 questionnaires was the minimum number required for an accurate and useful analysis. More recently, however, Cambridge University has amended this to 35 per GP if "fairly reliable" results are needed and 60 per GP if "very reliable" results are required.2 Of the 812 GP participations (549 individual GPs) over the last seven years (2005-2011), 472 or 58% returned the required minimum number of questionnaires. However, if we accept the lower criterion of 35 per GP, 611 or 75% returned a sufficient number of questionnaires to generate "fairly reliable" results.

Representativeness—Although detailed demographic data were not available for all PHOs included in the survey, the demographics of the ProCare patient population (n=671,000) were compared with those of the sample of patients submitted by their practices (n=41,000). Results showed that older, female and European patients are over-represented (p<0.0001).

Across the board, patient satisfaction ratings ranged from 3.62 on a scale which asked about affordability (where 1=very expensive and 5=very affordable) to 5.52 on a scale (where 1=very poor and 6=excellent) which asked about the doctor's concern and caring for his/her patient. The scores were well distributed and had relatively large standard deviations ranging from 14% to 31%. The relatively smaller standard deviations on items measuring patients' rating of how they were treated by their GP (14%-16%) and by the practice nurse (16%-17%) suggest these scores are endorsed consistently across practices, whereas the large standard deviations on items measuring satisfaction with the patients' ability to speak to their GP (31%), the time they had to wait to be seen (28%) and affordability (27%) demonstrate considerable variability across the practices.

Reliability of the data—Following the review of the reliability of the nationwide patient questionnaire used by New Zealand District Health Boards,6 the reliability of the NZGPAQ was assessed by means of the Cronbach alpha statistic. The value of alpha can range between zero and one; a set of items with an alpha above 0.60 is usually considered internally consistent, an alpha above 0.80 signifies very high reliability, and an alpha above 0.90 suggests "excellent" internal consistency.
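As a minimal sketch of the reliability statistic described above (not the study's actual analysis code, and with hypothetical item names), Cronbach's alpha for a block of Likert items can be computed as follows:

```python
# Minimal sketch of Cronbach's alpha for a block of Likert items.
# Item names and responses are hypothetical; this is not the study's analysis code.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with made-up "rating of doctor" items on a 1-6 scale
doctor_items = pd.DataFrame({
    "listens": [6, 5, 6, 4, 5],
    "explains": [6, 5, 6, 4, 6],
    "concern_and_caring": [6, 5, 6, 5, 5],
})
print(round(cronbach_alpha(doctor_items), 3))
```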
Results show that on the measures gauging patient satisfaction with access, a high alpha level of 0.85 was achieved. The items measuring satisfaction with the manner in which patients were treated by the doctor and by the nurse achieved very high alpha levels of 0.969 and 0.952 respectively (see Table 2).

Table 2. Cronbach alpha statistics on three dimensions

Variable            Cronbach's alpha
Rating of access    0.858
Rating of doctor    0.969
Rating of nurse     0.952

Another way to assess the reliability of a survey instrument is a test-retest analysis. However, as patients completing the questionnaire are anonymous, we could not ask the same patient to complete the same questionnaire three years later. We therefore have to content ourselves with examining the results of replicating the exercise three years later and calculating correlations between the overall satisfaction scores. Of course, the longer the period in between, the less likely it is that the same score will be obtained. If we compare the scores at practice level, the correlation is +0.84. At GP level the correlation between the two sets of data is still quite high at +0.72. This suggests that patients score their overall satisfaction with practices fairly consistently, and only a little less consistently when they score their overall satisfaction with GPs. This is an important finding, as it demonstrates that the survey can be relied upon to give consistent results.

Validity—Convergent validity is demonstrated when variables (constructs) that are expected to be related are, in fact, related. An example is the very strong positive relationship between the item that asks the patient about the ease with which they can get to an After Hours service and their satisfaction with that After Hours service (r=0.724). Similarly, the very strong negative correlation between the item asking patients how long they usually have to wait at the practice before the consultation and the item asking them to rate this waiting time (r=-0.624) also demonstrates very good construct validity. Discriminant (or divergent) validity tests whether constructs that should be unrelated are, in fact, unrelated. For example, one would not expect the patients' response to a question about access, i.e. "How quickly do you usually get seen by any doctor?", to correlate with questions that ask specifically about the current appointment, i.e. "Thinking about your consultation today, how do you rate the doctor on the following aspects of care?" Results show that these correlations range from r=0.004 to r=0.01. As the survey clearly distinguishes between items that ought to correlate with one another and items between which one would not expect to find a strong association, these findings confirm excellent construct validity.
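As a minimal sketch (not the authors' actual analysis), the practice-level and GP-level test-retest correlations reported above could be computed along the following lines; the column names and data are hypothetical, and the same Pearson-correlation machinery would serve for the item-level convergent and discriminant checks.

```python
# Minimal sketch of the test-retest correlation analysis described in the Results.
# Column names and values are hypothetical; the study's actual code is not published.
import pandas as pd

# One row per completed questionnaire: survey wave, practice, GP, overall satisfaction (1-6)
responses = pd.DataFrame({
    "wave":     ["2008", "2008", "2008", "2011", "2011", "2011"],
    "practice": ["P1",   "P1",   "P2",   "P1",   "P1",   "P2"],
    "gp":       ["G1",   "G2",   "G3",   "G1",   "G2",   "G3"],
    "overall":  [5,      4,      6,      5,      4,      5],
})

def test_retest_correlation(df: pd.DataFrame, level: str) -> float:
    """Correlate mean overall-satisfaction scores between two survey waves
    at the chosen level ('practice' or 'gp'); assumes exactly two waves."""
    means = df.groupby([level, "wave"])["overall"].mean().unstack("wave")
    means = means.dropna()                        # keep only units surveyed in both waves
    first, second = means.columns.tolist()
    return means[first].corr(means[second])       # Pearson correlation

print(test_retest_correlation(responses, "practice"))
print(test_retest_correlation(responses, "gp"))
```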
Frequency distributions—As has been shown in a number of articles,7 age is strongly correlated with satisfaction: older patients are more likely to express greater satisfaction than younger patients. Similarly, patient satisfaction correlates with the patient's sex, which can partly be explained by sociodemographic differences: female patients tend to be younger and less likely to be in full-time employment, and are thus less likely to find the consultation affordable. Results also showed that female patients are more likely than male patients to have to wait more than half an hour before being seen. Consequently, they are also less likely to rate the waiting time in the practice as acceptable.

Satisfaction is also known to vary with the ethnicity of the patient.8,9 Here too, European patients are more likely than non-European patients to reply "completely satisfied" when asked how satisfied they are. Māori, Pacific and Asian patients are more likely to be female and younger than European patients. They are also more likely to be looking after home and family, and they visit the practice much more frequently. Asked to indicate how long they usually have to wait at the practice before the consultation begins, the percentage of Pacific patients who report waiting more than half an hour is 3.5 times greater than the percentage of European patients who do so (13.5% vs 3.9%). Yet almost a quarter of this Samoan and Tongan group is not unhappy waiting more than half an hour, suggesting that these patients have different expectations of the required wait.4

Naturally enough, affordability also played a role in overall satisfaction with the practice: 77% of patients who responded "very affordable" to the question "How affordable was the consultation for you personally?" also said that they were "completely satisfied" with the practice, versus only 26% of patients who felt the consultation was "very expensive". When asked to rate the practice's opening hours, 92% of the sample stated that the opening hours were "good", "very good" or "excellent". Asked what additional hours they would like to see the practice open for consultations, close to 60% said there was no need for any; however, one in five patients (20%) suggested the practice could open on the weekend, while a further 16% recommended evening hours. Finally, in response to the question "What After Hours service do you use when your general practice is closed?", close to 60% reported using Accident and Medical Clinics, while another 15% stated they used the Hospital Emergency Department. The St John ambulance service received the most praise: one in three patients (34%) said they were "very satisfied".

International comparison—The item scales used by Ramsay et al10 and Bower et al11 were calculated and compared with those reported by Potiriadis et al,12 whose Australian GPAQ survey also reported the scores obtained in the UK (see Table 3).

Table 3. Inter-country comparison (n; mean item score)

Satisfaction with:      New Zealand        Australia         UK
Receptionist            49,033; 85.7%      7,122; 81.8%      19,803; 69.1%
Access to practice      43,689; 76.6%      7,111; 68.6%      19,302; 58.3%
Continuity of care      45,272; 80.4%      7,080; 76.5%      18,586; 66.1%
Communication           45,599; 89.1%      7,104; 84.0%      18,528; 75.9%
Nursing care            18,703; 83.4%      2,142; 80.0%      13,740; 76.3%
Practice overall        47,724; 86.3%      7,097; 81.6%      19,039; 76.5%

Results showed that the mean NZGPAQ item scores ranged from 76.6% for satisfaction with access to the practice to 89.1% for satisfaction with communication between the GP and the patient. The New Zealand results were markedly higher than the UK results and, across the board, about 5 percentage points higher than the Australian results.

Discussion

This is not the first time that the GPAQ has been adapted for use in a different country. In 2006 the questionnaire was translated into Thai and tested on

Abstract

Aim

To determine whether the New Zealand adaptation of the UK-developed General Practice Assessment Questionnaire (GPAQ) is a valid and reliable indicator of the quality of care in general practice in New Zealand, and to examine what the survey can tell us about patient satisfaction with general practice.

Method

The Health Services Consumer Research Ltd Primary Care Patient Survey database, which presently contains data from 184 medical practices (549 GPs) and responses from close to 50,000 enrolled patients, was examined to determine the validity and reliability of the survey instrument. Data was briefly analysed to ascertain how survey results can best be employed to improve the quality of primary care.

Results

A check on representativeness showed that older, female and European patients are over-represented. To assess reliability, the Cronbach alpha statistic was calculated and shown to range between 0.85 and 0.96. Convergent validity was demonstrated by high correlations between items that measured closely related aspects of patient care; discriminant validity was shown by very low correlations between variables that measured unrelated items. Further analyses show how patients' age, sex and ethnic group influence the level of satisfaction experienced.

Conclusion

The NZGPAQ survey can be employed nationwide to improve the quality of primary care because the patient survey results highlight where service delivery is good or excellent and identify where change is needed to improve patient satisfaction.

Author Information

Gerard Zwier, Managing Director, Health Services Consumer Research Limited, Auckland

Acknowledgements

We are grateful to the 549 GPs, their patients and practice staff for participating in the surveys. We particularly thank Dr Tom Marshall, who as Foundation Chairman of ProCare Health took an early interest in the survey, and Mr Ron Hooton, CEO of ProCare Health, for permission to publish the data from the survey.

Correspondence

Gerard Zwier PhD, Managing Director, Health Services Consumer Research Limited, PO Box 440, Shortland Street, Auckland 1140, New Zealand.

Correspondence Email

inbox@hscr.co.nz

Competing Interests

Gerard Zwier is Managing Director of Health Services Consumer Research Limited. The NZGPAQ is free for New Zealand general practitioners to use and can be downloaded from http://www.hscr.co.nz/wp-content/uploads/2011/02/. Practices and PHOs wishing to implement the NZGPAQ with help from HSCR should contact the author. Commercial companies selling patient evaluation services using the NZGPAQ are required to pay a royalty fee to Cambridge University.

References

1. Bower P, Roland M, Campbell J, Mead N. Setting standards based on patients' views on access and continuity: secondary analysis of data from the general practice assessment survey. BMJ. 2003;326:258.
2. http://www.gpaq.info/ (accessed 20 June 2012).
3. The Royal New Zealand College of General Practitioners. Cornerstone General Practice Accreditation Programme, MOPS triennium 2011-2013, CQI Activity (Continuous Quality Improvement).
4. Campbell J, Roland M, Richards S, et al. Users' reports and evaluations of out-of-hours health care and the UK national quality requirements: a cross sectional study. Br J Gen Pract. 2009;59(558):e8-15.
5. Garratt AM, Danielsen K, Hunskaar S. Patient satisfaction questionnaires for primary care out-of-hours services: a systematic review. Br J Gen Pract. 2007;57:741-7.
6. Zwier G. Patient satisfaction in New Zealand. N Z Med J. 2009;122:38-49.
7. Campbell JL, Ramsay J, Green J. Age, gender, socioeconomic, and ethnic differences in patients' assessments of primary health care. Qual Health Care. 2001;10:90-5.
8. Roland M. Understanding why some ethnic minority patients evaluate medical care more negatively than white patients: a cross sectional analysis of a routine patient survey in English general practices. BMJ. 2009;339:b3450.
9. Lyratzopoulos G, Elliott M, Barbiere JM, et al. Understanding ethnic and other socio-demographic differences in patient experience of primary care: evidence from the English General Practice Patient Survey. BMJ Qual Saf. 2012;21:21-9.
10. Ramsay J, Campbell JL, Schroter S, et al. The General Practice Assessment Survey (GPAS): tests of data quality and measurement properties. Fam Pract. 2001;17:372-9.
11. Bower P, Mead N, Roland M. What dimensions underlie patient responses to the General Practice Assessment Survey? A factor analytic study. Fam Pract. 2002;19:489-95.
12. Potiriadis M, Chondros P, Gilchrist G, et al. How do Australian patients rate their general practitioner? A descriptive study using the General Practice Assessment Questionnaire. Med J Aust. 2008;189:215-9.
13. Jaturapatporn D, Hathirat S, Manataweewat B, et al. Reliability and validity of a Thai version of the General Practice Assessment Questionnaire (GPAQ). J Med Assoc Thai. 2006;89:1491-6.
14. Zwier G, Clarke D. How well do we monitor patient satisfaction? Problems with the nation-wide patient survey. N Z Med J. 1999;112:371-5.
15. Greco M. Raising the bar on consumer feedback - improving health services. Australian Health Consumer. 2005;3:11-12.
16. New Zealand Patient Satisfaction Index March 2002 - Dec 2011. Health Services Consumer Research Limited, Auckland.
17. Roland M, Elliott M, Lyratzopoulos G, et al. Reliability of patient responses in pay for performance schemes: analysis of national General Practitioner Patient Survey data in England. BMJ. 2009;339:b3851.
18. Campbell SM, Kontopantelis E, Reeves D, et al. Changes in patient experiences of primary care during health service reforms in England between 2003 and 2007. Ann Fam Med. 2010;8:499-506.
19. Roland M, Rosen R. England's NHS embarks on controversial and risky market-style reforms in health care. N Engl J Med. 2011;364:1360-6.

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

Whilst in the UK the government has acknowledged that the regular implementation of patient surveys is one of the most important sources for quality improvement in the NHS1, in New Zealand we lack a uniform, standardised and validated primary care patient survey.The General Practice Assessment Questionnaire (GPAQ) was originally developed in the USA but was later modified and further developed by the National Primary Care Research and Development Centre at the University of Manchester, U.K. After the UK government implemented the incentivized "Quality and Outcomes Framework" in the 2004 GP contract, the GPAQ was introduced as one of two questionnaires that practices and PHOs could use to obtain patient feedback.Two versions of the GPAQ now exist: (a) the "consultation" questionnaire, administered at the practice and (b) a "postal" version. The survey has recently been endorsed and updated by Cambridge University.2In New Zealand, many practices and PHOs make use of the "Better Practice Patient Questionnaire" (BPPQ) made available by the Royal New Zealand College of General Practitioners' (RNZGPC).The questionnaire is available in Mãori, Samoan, Chinese and Korean. Completion of the survey process enables New Zealand GPs to claim 10 MOPS credits3. The BPPQ asks questions about (a) the practice, (b) the doctor and (c) the staff. It has no specific questions about the practice nurse or about an After Hours service. While it asks patients how they felt about the waiting time at the practice, it does not ask how long they usually have to wait thus making it impossible to ascertain what they think is an acceptable time to wait.It also has an unbalanced response rate set which is skewed in favour of a positive response: the only negative response available to the patient is "poor". Moreover, although it asks GPs to reflect on the feedback received from patients, there is no information on how the GP's performance compares to other GPs in the practice or how the practice compares to other practices within the larger PHO.Method Like the GPAQ, the New Zealand General Practice Assessment Questionnaire (NZGPAQ) developed by Health Services Consumer Research (Ltd) with assistance from ProCare Health, focuses mainly on questions about access, inter-personal aspects of care and continuity of care. The basic layout, sequence and formulation of the questions and response categories in the GPAQ are maintained. Three questions concern After Hours services.4,5 An overall satisfaction question is also included. Three open-ended questions allow the patient to put their opinions in their own words. All questionnaires are individually numbered with a code incorporating the PHO, the practice, the doctor and a sequential number for the patient. This generates a performance measure that tracks ratings over time, by GP, by practice and by PHO. The NZGPAQ questionnaire is available in Mãori, Samoan, Tongan, Chinese and Thai languages. It has been approved by the RNZCGP for MOPS CQI purposes and can be used in the accreditation process. The present overview is divided into two separate sections: an assessment of the participation rate, the resulting representativeness of the sample and the reliability and validity of the questionnaire and an analysis of the results of the survey data This overview is concluded with a recommendation regarding future requirements. 
Results Participation rate—Since 2005 the NZGPAQ has been used by 184 medical practices (812 participants representing 549 individual GPs) and responses have been recorded from close to 50,000 enrolled patients enrolled in five PHOs (see Table 1). Table 1 Number of practices surveyed over last 7 years Year 2005 2006 2007 2008 2009 2010 2011 Total Participating practices 63 24 20 55 53 59 55 184 Participating GPs 132 34 56 166 184 124 116 549 Patients surveyed 6,894 411 2,764 8,180 11,445 9,820 9,719 49,233 Because of the trailblazing role of ProCare Health, 87% of the data in this database comes from the three ProCare PHOs. Other PHOs presently included in the dataset are South Seas Healthcare, Langimalie Tongan Society, Nelson Bay Primary Health and Waihopai PHO in Invercargill. Although each practice had been instructed to give every patient during a specific time period the opportunity to participate in the survey, there is substantial variance in the number of questionnaires returned. Originally, it was stipulated that 50 questionnaires was the minimum number of questionnaires required for an accurate and useful analysis. However, more recently, Cambridge University has amended this number to 35 per GP if "fairly reliable" results are needed and 60 per GP if "very reliable" results are required.2 Of the 812 survey participants (549 individual GPs) who participated during the last seven years (2005-2011) 472 or 58% returned the required minimum number of questionnaires. However, if we accept the lower criterion of 35 per GP, 611 or 75% returned a sufficient number of questionnaires to generate "fairly reliable" results Representativeness—Although detailed data on demographics was not available on all PHOs included in the survey, an analysis of the demographics pertaining to the ProCare patient population (n=671,000) was compared to the sample of patients submitted by their practices (n=41,000). Results showed that older, female and European patients are over-represented (p<.0001). Across the board, patient satisfaction ratings ranged from 3.62 on a scale which asked about affordability (where 1=very expensive and 5=very affordable) to 5.52 on a scale (where 1=very poor and 6=excellent) which asked about the doctor's concern and caring for his/her patient The scores were well distributed and had relatively large standard deviations ranging from 14% to 31%. The relatively smaller standard deviations on items measuring patients' rating of how they were treated by their GP (14% - 16%) and by the practice nurse (16%-17%) suggest the scores are unanimously endorsed whereas, conversely, large standard deviations on items measuring satisfaction with the patients' ability to speak to their GP (31%), the time they had to wait to be seen (28%) and affordability (27%) demonstrate that there was considerable variability across the practices. Reliability of the data—Following the review of the reliability of the nationwide patient questionnaire used by New Zealand District Health Boards6, the reliability of the NZGPAQ questionnaire was also assessed by means of the "Cronbach alpha" statistic. The value of alpha can range between zero and one and it is generally accepted that if a set of items has an alpha above .60, it is usually considered to be internally consistent. If it goes above .80, it signifies a very high reliability. Above .90 suggests an "excellent" internal consistency. 
Results show that on measures that gauge satisfaction among patients regarding access, a high alpha level of 0.85 was achieved. Satisfaction with the manner in which they were treated by the doctor and nurse achieved very high alpha levels of 0.969 and 0.952 (see Table 2). Table 2 Cronbach alpha statistics on three dimensions Variables Cronbach's Alpha Rating of access 0.858 Rating of doctor 0.969 Rating of nurse 0.952 Another method by which one can assess the reliability of a survey instrument is to perform a test-retest reliability analysis. However, as patients completing the questionnaire are anonymous, we could not ask the same patient to complete the same questionnaire three years later. Therefore we have to contend ourselves with examining the results of replicating the exercise three years later and calculate correlations between the overall satisfaction scores. Of course, the longer the period in between, the less likely it is to obtain the same score. If we compare the scores at practice level, the correlation is +0.84. If we go down to GP level the correlation between the two sets of data is still quite high at +0.72. So this suggests that patients score their overall satisfaction with practices fairly consistently and only a little less so when they score their overall satisfaction with GPs. This is a very important finding as it demonstrates that the survey can be relied upon to give consistent results. Validity—Convergent validity is demonstrated when variables (constructs) that are expected to be related are, in fact, related. An example of this is the very strong positiverelationship that exists between the item that asks the patient about the ease with which they can get to an After Hours service and their satisfaction with the After Hours service (0.724). Similarly, the very strong negative correlation between the item which asks patients how long they usually have to wait at the practice before the consultation and the item which asks them to rate this waiting time (-0.624) also demonstrates very good construct validity. Discriminant validity (or divergent validity) tests that constructs which should have no relationship do, in fact, not have any relationship. For example, one would not expect the patients' response to a question about access, i.e. "How quickly do you usually get seen by any doctor?" to be correlated to questions which ask specifically about the current appointment; i.e. "Thinking about your consultation today, how do you rate the doctor on the following aspects of care?" Results show that the correlations range from r=0.004 to r=0.01. As the survey clearly distinguishes between items that ought to correlate with one another and items between which one would not expect to find a strong association, these findings confirm excellent construct validity. Frequency distributions—As shown in a number of articles7, results show that age is strongly correlated with satisfaction: older patients are more likely to express greater satisfaction than are younger patients. Similarly, patient satisfaction correlates with the patient's sex, which can partly be explained by sociodemographic differences: female patients tend to be younger, less likely to be in full time employment and thus are less likely to find the consultation to be affordable. Results also showed that female patients are more likely than male patients to have to wait more than half an hour before being seen. 
Consequently, they are also less likely to rate the waiting time in the practice to be acceptable. Satisfaction is also known to vary with respect to the ethnicity of the patient.8,9 Here too, European patients are more likely than non-European patients to reply with "completely satisfied" when asked to say how satisfied they are. Maori, Pacific and Asian patients are more likely to be female and younger than European patients. They are also more likely to be looking after home and family and be much more frequent visitors to the practice. Asked to indicate how long they usually have to wait at the practice before the consultation begins, the percentage of Pacific patients who report waiting more than half an hour is 3.5 times greater than the percentage of European patients waiting more than half an hour (13.5% vs 3.9%). Yet almost a quarter of this Samoan and Tongan group is not unhappy waiting more than half an hour, suggesting that these patients have different expectations of the required wait4. Naturally enough, affordability also played a role in overall satisfaction with the practice: 77% of patients who responded with "very affordable" to the question "How affordable was the consultation for you personally?" also said that they were "completely satisfied" with the practice vs only 26% of patients who felt the consultation was "very expensive". When asked to rate the practice's opening hours 92% of the sample stated that the opening hours were "good", "very good" or "excellent". Pressed to say what additional hours they would like to see the practice open for consultations, close to 60% said there was no need for that. However, one in five patients (20%) suggested the practice could open on the weekend while an additional 16% recommended evening hours. Finally, in response to the question "What After Hours service do you use when your general practice is closed?" close to 60% reported using the Accident and Medical Clinics, while another 15% stated they used the Hospital Emergency Department. The St John ambulance service received the most praise: one in three patients (34%) said they were "very satisfied". International comparison—The item scales used by Ramsay et al,10 and Bower at al11were calculated and compared with those reported by Potiriadis et al12 who reported the results obtained in her Australian GPAQ survey along with those obtained in the UK (see Table 3). Table 3. Inter-country comparison Satisfaction with: n Mean item score New Zealand n Mean Item score Australia n Mean item score UK Receptionist 49,033 85.7% 7,122 81.8% 19,803 69.1% Access to practice 43,689 76.6% 7,111 68.6% 19,302 58.3% Continuity of care 45,272 80.4% 7,080 76.5% 18,586 66.1% Communication 45,599 89.1% 7,104 84.0% 18,528 75.9% Nursing Care 18,703 83.4% 2,142 80.0% 13,740 76.3% Practice overall 47,724 86.3% 7,097 81.6% 19,039 76.5% Results showed that the mean scores for the seven NZGPAQ items ranged from 76.6% for satisfaction with access to the practice to 89.1% for satisfaction with communication between the GP and the patient. The New Zealand results were very much higher than the UK results and, across the board, about 5% higher than the Australian results. Discussion This is not the first time that the GPAQ has been adapted for use in a different country. In 2006 the questionnaire was translated in Thai and tested on

Summary

Abstract

Aim

To determine whether the New Zealand adaptation of the UK developed General Practice Assessment Questionnaire (GPAQ) is a valid and reliable indicator of the quality of care in general practice in New Zealand and what the survey can tell us about patient satisfaction with general practice.

Method

The Health Services Consumer Research Ltd Primary Care Patient Survey database which presently contains data from 184 medical practices (549 GPs) and responses from 50,000 enrolled patients was examined to determine the validity and reliability of the survey instrument. Data was briefly analysed to ascertain how survey results can best be employed to improve the quality of primary care.

Results

A check on representativeness showed that older, female and European patients are over-represented. To determine validity and reliability, the Cronbach alpha statistic was calculated and shown to range between 0.85 - 0.96. Convergent validity was demonstrated by high correlations between items that measured closely related aspects of patient care. Discriminant validity was shown by very low correlations between variables that measured unrelated items. Further analyses show how patients age, sex and ethnic group influence the level of satisfaction experienced.

Conclusion

The NZGPAQ survey can be employed nationwide to improve the quality of primary care because these patient survey results emphasize where service delivery is good/excellent and identify where change is needed to improve patient satisfaction.

Author Information

Gerard Zwier, Managing Director, Health Services Consumer Research Limited, Auckland

Acknowledgements

We are grateful to the 549 GPs, their patients and practice staff for participating in the surveys. We particularly thank Dr Tom Marshall (as the Foundation Chairman of ProCare Health taking an early interest in the survey) and Mr Ron Hooton, CEO ProCare Health, for his permission to publish the data from the survey.

Correspondence

Gerard Zwier PhD, Managing Director, Health Services Consumer Research Limited, PO Box 440, Shortland Street, Auckland 1140. New Zealand.

Correspondence Email

inbox@hscr.co.nz

Competing Interests

Gerard Zwier is Managing Director of Health Services Consumer Research Limited. The NZGPAQ is free to use for New Zealand General Practitioners and can be downloaded from http://www.hscr.co.nz/wp-content/uploads/2011/02/, Practices and PHOs wishing to implement the NZGPAQ with help from HSCR, please contact the author. Commercial companies selling patient evaluation services using the NZGPAQ are required to pay a royalty fee to Cambridge University.

Bower P, Roland M, Campbell J, Mead N. Setting standards based on patients views on access and continuity: secondary analysis of data from the general practice assessment survey. BMJ. 2003;326:258.http://www.gpaq.info/ (accessed 20 June 2012).The Royal New Zealand College of General Practitioners Cornerstone General Practice Accreditation Programme MOPS triennium 2011-2013 CQI Activity (Continuous Quality Improvement).Campbell J, Roland M, Richards S, et al. Users' reports and evaluations of out-of-hours health care and the UK national quality requirements: a cross sectional study. Br J Gen Pract. 2009;59 (558):e8-15.Garratt AM, Danielsen K, Hunskaar S. Patient satisfaction questionnaires for primary care out-of-hours services: a systematic review. Br J Gen Pract. 2007;57:741-7.Zwier G. Patient Satisfaction in New Zealand. N Z Med J.2009;122:38-49.Campbell JL, Ramsay J, Green J. Age, gender, socioeconomic, and ethnic differences in patients assessments of primary health are. Qual Health Care. 2001;10:90-5.Roland M. Understanding why some ethnic minority patients evaluate medical care more negatively than white patients: a cross sectional analysis of a routine patient survey in English general practices. BMJ. 2009; 339:b3450.Lyratzopoulos G, Elliott M, Barbiere JM et al. Understanding ethnic and other socio-demographic differences in patient experience of primary care: evidence from the English General Practice Patient Survey. BMJ Qual Saf. 2012;21:21-9.Ramsay J, Campbell JL, Schroter S, et al. The General Practices Assessment Survey (GPAS): tests of data quality and measurement properties. Fam Pract. 2001; 17:372-79.Bower P, Mead N, Roland M. What dimensions underlie patient responses to the General Assessment Survey? A factor analytic study. Fam Pract. 2002;19:489-95.Potiriadis M, Chondros P, Gilchrist G et al. How do Australian patients rate their general practitioner? A descriptive study using the General Practice Assessment Questionnaire. Med J Aust. 2008; 189:215-9.Jaturapatporn D, Hathirat S, Manataweewat B et al. Reliability and validity of a Thai version of the General Practice Assessment Questionnaire (GPAQ) J Med Assoc Thai. 2006;89:1491-6.Zwier G, Clarke D. How well do we monitor patient satisfaction? Problems with the nation-wide patient survey. N Z Med J.1999;112: 371-5.Greco M., Raising the bar on consumer feedback - improving health services. Australian Health Consumer. 2005;3:11-12.New Zealand Patient Satisfaction Index March 2002 - Dec 2011. Health Services Consumer Research Limited. Auckland.Roland M, Elliott M, Lyratzopoulos G et al. Reliability of patient responses in pay for performance schemes: analysis of national General practitioner Patient Survey data in England. BMJ. 2009; 339-b3851.Campbell SM, Kontopantelis E, Reeves D et al. Changes in patient experiences of primary care during Health Service Reforms in England between 2003 and 2007. Ann Fam Med. 2010;8:499-506.Roland M, Rosen R. Englands NHS embarks on controversial and risky market-style reforms in health care. New Engl J Med. 2011; 364:1360-6.

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

Whilst in the UK the government has acknowledged that the regular implementation of patient surveys is one of the most important sources for quality improvement in the NHS1, in New Zealand we lack a uniform, standardised and validated primary care patient survey.The General Practice Assessment Questionnaire (GPAQ) was originally developed in the USA but was later modified and further developed by the National Primary Care Research and Development Centre at the University of Manchester, U.K. After the UK government implemented the incentivized "Quality and Outcomes Framework" in the 2004 GP contract, the GPAQ was introduced as one of two questionnaires that practices and PHOs could use to obtain patient feedback.Two versions of the GPAQ now exist: (a) the "consultation" questionnaire, administered at the practice and (b) a "postal" version. The survey has recently been endorsed and updated by Cambridge University.2In New Zealand, many practices and PHOs make use of the "Better Practice Patient Questionnaire" (BPPQ) made available by the Royal New Zealand College of General Practitioners' (RNZGPC).The questionnaire is available in Mãori, Samoan, Chinese and Korean. Completion of the survey process enables New Zealand GPs to claim 10 MOPS credits3. The BPPQ asks questions about (a) the practice, (b) the doctor and (c) the staff. It has no specific questions about the practice nurse or about an After Hours service. While it asks patients how they felt about the waiting time at the practice, it does not ask how long they usually have to wait thus making it impossible to ascertain what they think is an acceptable time to wait.It also has an unbalanced response rate set which is skewed in favour of a positive response: the only negative response available to the patient is "poor". Moreover, although it asks GPs to reflect on the feedback received from patients, there is no information on how the GP's performance compares to other GPs in the practice or how the practice compares to other practices within the larger PHO.Method Like the GPAQ, the New Zealand General Practice Assessment Questionnaire (NZGPAQ) developed by Health Services Consumer Research (Ltd) with assistance from ProCare Health, focuses mainly on questions about access, inter-personal aspects of care and continuity of care. The basic layout, sequence and formulation of the questions and response categories in the GPAQ are maintained. Three questions concern After Hours services.4,5 An overall satisfaction question is also included. Three open-ended questions allow the patient to put their opinions in their own words. All questionnaires are individually numbered with a code incorporating the PHO, the practice, the doctor and a sequential number for the patient. This generates a performance measure that tracks ratings over time, by GP, by practice and by PHO. The NZGPAQ questionnaire is available in Mãori, Samoan, Tongan, Chinese and Thai languages. It has been approved by the RNZCGP for MOPS CQI purposes and can be used in the accreditation process. The present overview is divided into two separate sections: an assessment of the participation rate, the resulting representativeness of the sample and the reliability and validity of the questionnaire and an analysis of the results of the survey data This overview is concluded with a recommendation regarding future requirements. 
Results Participation rate—Since 2005 the NZGPAQ has been used by 184 medical practices (812 participants representing 549 individual GPs) and responses have been recorded from close to 50,000 enrolled patients enrolled in five PHOs (see Table 1). Table 1 Number of practices surveyed over last 7 years Year 2005 2006 2007 2008 2009 2010 2011 Total Participating practices 63 24 20 55 53 59 55 184 Participating GPs 132 34 56 166 184 124 116 549 Patients surveyed 6,894 411 2,764 8,180 11,445 9,820 9,719 49,233 Because of the trailblazing role of ProCare Health, 87% of the data in this database comes from the three ProCare PHOs. Other PHOs presently included in the dataset are South Seas Healthcare, Langimalie Tongan Society, Nelson Bay Primary Health and Waihopai PHO in Invercargill. Although each practice had been instructed to give every patient during a specific time period the opportunity to participate in the survey, there is substantial variance in the number of questionnaires returned. Originally, it was stipulated that 50 questionnaires was the minimum number of questionnaires required for an accurate and useful analysis. However, more recently, Cambridge University has amended this number to 35 per GP if "fairly reliable" results are needed and 60 per GP if "very reliable" results are required.2 Of the 812 survey participants (549 individual GPs) who participated during the last seven years (2005-2011) 472 or 58% returned the required minimum number of questionnaires. However, if we accept the lower criterion of 35 per GP, 611 or 75% returned a sufficient number of questionnaires to generate "fairly reliable" results Representativeness—Although detailed data on demographics was not available on all PHOs included in the survey, an analysis of the demographics pertaining to the ProCare patient population (n=671,000) was compared to the sample of patients submitted by their practices (n=41,000). Results showed that older, female and European patients are over-represented (p<.0001). Across the board, patient satisfaction ratings ranged from 3.62 on a scale which asked about affordability (where 1=very expensive and 5=very affordable) to 5.52 on a scale (where 1=very poor and 6=excellent) which asked about the doctor's concern and caring for his/her patient The scores were well distributed and had relatively large standard deviations ranging from 14% to 31%. The relatively smaller standard deviations on items measuring patients' rating of how they were treated by their GP (14% - 16%) and by the practice nurse (16%-17%) suggest the scores are unanimously endorsed whereas, conversely, large standard deviations on items measuring satisfaction with the patients' ability to speak to their GP (31%), the time they had to wait to be seen (28%) and affordability (27%) demonstrate that there was considerable variability across the practices. Reliability of the data—Following the review of the reliability of the nationwide patient questionnaire used by New Zealand District Health Boards6, the reliability of the NZGPAQ questionnaire was also assessed by means of the "Cronbach alpha" statistic. The value of alpha can range between zero and one and it is generally accepted that if a set of items has an alpha above .60, it is usually considered to be internally consistent. If it goes above .80, it signifies a very high reliability. Above .90 suggests an "excellent" internal consistency. 
Results show that on measures that gauge satisfaction among patients regarding access, a high alpha level of 0.85 was achieved. Satisfaction with the manner in which they were treated by the doctor and nurse achieved very high alpha levels of 0.969 and 0.952 (see Table 2). Table 2 Cronbach alpha statistics on three dimensions Variables Cronbach's Alpha Rating of access 0.858 Rating of doctor 0.969 Rating of nurse 0.952 Another method by which one can assess the reliability of a survey instrument is to perform a test-retest reliability analysis. However, as patients completing the questionnaire are anonymous, we could not ask the same patient to complete the same questionnaire three years later. Therefore we have to contend ourselves with examining the results of replicating the exercise three years later and calculate correlations between the overall satisfaction scores. Of course, the longer the period in between, the less likely it is to obtain the same score. If we compare the scores at practice level, the correlation is +0.84. If we go down to GP level the correlation between the two sets of data is still quite high at +0.72. So this suggests that patients score their overall satisfaction with practices fairly consistently and only a little less so when they score their overall satisfaction with GPs. This is a very important finding as it demonstrates that the survey can be relied upon to give consistent results. Validity—Convergent validity is demonstrated when variables (constructs) that are expected to be related are, in fact, related. An example of this is the very strong positiverelationship that exists between the item that asks the patient about the ease with which they can get to an After Hours service and their satisfaction with the After Hours service (0.724). Similarly, the very strong negative correlation between the item which asks patients how long they usually have to wait at the practice before the consultation and the item which asks them to rate this waiting time (-0.624) also demonstrates very good construct validity. Discriminant validity (or divergent validity) tests that constructs which should have no relationship do, in fact, not have any relationship. For example, one would not expect the patients' response to a question about access, i.e. "How quickly do you usually get seen by any doctor?" to be correlated to questions which ask specifically about the current appointment; i.e. "Thinking about your consultation today, how do you rate the doctor on the following aspects of care?" Results show that the correlations range from r=0.004 to r=0.01. As the survey clearly distinguishes between items that ought to correlate with one another and items between which one would not expect to find a strong association, these findings confirm excellent construct validity. Frequency distributions—As shown in a number of articles7, results show that age is strongly correlated with satisfaction: older patients are more likely to express greater satisfaction than are younger patients. Similarly, patient satisfaction correlates with the patient's sex, which can partly be explained by sociodemographic differences: female patients tend to be younger, less likely to be in full time employment and thus are less likely to find the consultation to be affordable. Results also showed that female patients are more likely than male patients to have to wait more than half an hour before being seen. 
Consequently, they are also less likely to rate the waiting time in the practice to be acceptable. Satisfaction is also known to vary with respect to the ethnicity of the patient.8,9 Here too, European patients are more likely than non-European patients to reply with "completely satisfied" when asked to say how satisfied they are. Maori, Pacific and Asian patients are more likely to be female and younger than European patients. They are also more likely to be looking after home and family and be much more frequent visitors to the practice. Asked to indicate how long they usually have to wait at the practice before the consultation begins, the percentage of Pacific patients who report waiting more than half an hour is 3.5 times greater than the percentage of European patients waiting more than half an hour (13.5% vs 3.9%). Yet almost a quarter of this Samoan and Tongan group is not unhappy waiting more than half an hour, suggesting that these patients have different expectations of the required wait4. Naturally enough, affordability also played a role in overall satisfaction with the practice: 77% of patients who responded with "very affordable" to the question "How affordable was the consultation for you personally?" also said that they were "completely satisfied" with the practice vs only 26% of patients who felt the consultation was "very expensive". When asked to rate the practice's opening hours 92% of the sample stated that the opening hours were "good", "very good" or "excellent". Pressed to say what additional hours they would like to see the practice open for consultations, close to 60% said there was no need for that. However, one in five patients (20%) suggested the practice could open on the weekend while an additional 16% recommended evening hours. Finally, in response to the question "What After Hours service do you use when your general practice is closed?" close to 60% reported using the Accident and Medical Clinics, while another 15% stated they used the Hospital Emergency Department. The St John ambulance service received the most praise: one in three patients (34%) said they were "very satisfied". International comparison—The item scales used by Ramsay et al,10 and Bower at al11were calculated and compared with those reported by Potiriadis et al12 who reported the results obtained in her Australian GPAQ survey along with those obtained in the UK (see Table 3). Table 3. Inter-country comparison Satisfaction with: n Mean item score New Zealand n Mean Item score Australia n Mean item score UK Receptionist 49,033 85.7% 7,122 81.8% 19,803 69.1% Access to practice 43,689 76.6% 7,111 68.6% 19,302 58.3% Continuity of care 45,272 80.4% 7,080 76.5% 18,586 66.1% Communication 45,599 89.1% 7,104 84.0% 18,528 75.9% Nursing Care 18,703 83.4% 2,142 80.0% 13,740 76.3% Practice overall 47,724 86.3% 7,097 81.6% 19,039 76.5% Results showed that the mean scores for the seven NZGPAQ items ranged from 76.6% for satisfaction with access to the practice to 89.1% for satisfaction with communication between the GP and the patient. The New Zealand results were very much higher than the UK results and, across the board, about 5% higher than the Australian results. Discussion This is not the first time that the GPAQ has been adapted for use in a different country. In 2006 the questionnaire was translated in Thai and tested on

Summary

Abstract

Aim

To determine whether the New Zealand adaptation of the UK developed General Practice Assessment Questionnaire (GPAQ) is a valid and reliable indicator of the quality of care in general practice in New Zealand and what the survey can tell us about patient satisfaction with general practice.

Method

The Health Services Consumer Research Ltd Primary Care Patient Survey database which presently contains data from 184 medical practices (549 GPs) and responses from 50,000 enrolled patients was examined to determine the validity and reliability of the survey instrument. Data was briefly analysed to ascertain how survey results can best be employed to improve the quality of primary care.

Results

A check on representativeness showed that older, female and European patients are over-represented. To determine validity and reliability, the Cronbach alpha statistic was calculated and shown to range between 0.85 - 0.96. Convergent validity was demonstrated by high correlations between items that measured closely related aspects of patient care. Discriminant validity was shown by very low correlations between variables that measured unrelated items. Further analyses show how patients age, sex and ethnic group influence the level of satisfaction experienced.

Conclusion

The NZGPAQ survey can be employed nationwide to improve the quality of primary care because these patient survey results emphasize where service delivery is good/excellent and identify where change is needed to improve patient satisfaction.

Author Information

Gerard Zwier, Managing Director, Health Services Consumer Research Limited, Auckland

Acknowledgements

We are grateful to the 549 GPs, their patients and practice staff for participating in the surveys. We particularly thank Dr Tom Marshall (as the Foundation Chairman of ProCare Health taking an early interest in the survey) and Mr Ron Hooton, CEO ProCare Health, for his permission to publish the data from the survey.

Correspondence

Gerard Zwier PhD, Managing Director, Health Services Consumer Research Limited, PO Box 440, Shortland Street, Auckland 1140. New Zealand.

Correspondence Email

inbox@hscr.co.nz



Results show that the items gauging patient satisfaction with access achieved a high alpha level of 0.85. The items measuring satisfaction with the manner in which patients were treated by the doctor and by the nurse achieved very high alpha levels of 0.969 and 0.952 respectively (see Table 2).

Table 2. Cronbach alpha statistics on three dimensions

Variable             Cronbach's alpha
Rating of access     0.858
Rating of doctor     0.969
Rating of nurse      0.952

Another way to assess the reliability of a survey instrument is a test-retest analysis. However, because patients complete the questionnaire anonymously, we could not ask the same patient to complete the same questionnaire three years later. We therefore had to content ourselves with replicating the exercise three years later and calculating correlations between the overall satisfaction scores; naturally, the longer the interval, the less likely it is that the same score will be obtained. At practice level the correlation between the two sets of data is +0.84; at GP level it is still high at +0.72. This suggests that patients rate their overall satisfaction with practices consistently, and only slightly less consistently when rating individual GPs. This is an important finding, as it demonstrates that the survey can be relied upon to give consistent results.

Validity—Convergent validity is demonstrated when variables (constructs) that are expected to be related are, in fact, related. An example is the very strong positive relationship between the item asking patients how easily they can get to an After Hours service and their satisfaction with that service (r=0.724). Similarly, the very strong negative correlation between the item asking patients how long they usually wait at the practice before the consultation and the item asking them to rate this waiting time (r=-0.624) also demonstrates very good construct validity.

Discriminant (or divergent) validity tests that constructs which should have no relationship are, in fact, unrelated. For example, one would not expect patients' responses to a question about access ("How quickly do you usually get seen by any doctor?") to correlate with questions that ask specifically about the current appointment ("Thinking about your consultation today, how do you rate the doctor on the following aspects of care?"). Results show that these correlations range from r=0.004 to r=0.01. Because the survey clearly distinguishes between items that ought to correlate with one another and items between which one would not expect a strong association, these findings confirm excellent construct validity.
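For readers who want to reproduce these reliability and validity checks on a survey extract of their own, the sketch below shows one way to compute Cronbach's alpha, the practice- or GP-level test-retest correlation and the item-level validity correlations using pandas. It is a minimal illustration only, not the analysis code used for this study: the column names (overall_satisfaction, practice_id, afterhours_ease and so on) are hypothetical and do not reflect the actual NZGPAQ data layout.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items: one column per item,
    one row per respondent; respondents with missing answers are dropped."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def retest_correlation(df: pd.DataFrame, unit: str) -> float:
    """Test-retest check: mean overall-satisfaction score per unit
    (e.g. 'practice_id' or 'gp_id') in each of two survey rounds, then Pearson r."""
    means = df.pivot_table(index=unit, columns="round",
                           values="overall_satisfaction", aggfunc="mean")
    return means.iloc[:, 0].corr(means.iloc[:, 1])

# Convergent validity: items expected to be related should correlate strongly,
# e.g. ease of getting to an After Hours service vs satisfaction with it.
# r_convergent = df["afterhours_ease"].corr(df["afterhours_satisfaction"])

# Discriminant validity: unrelated items should barely correlate,
# e.g. usual speed of access vs rating of today's consultation.
# r_discriminant = df["usual_access_speed"].corr(df["doctor_rating_today"])
```

With two survey rounds coded in a "round" column, retest_correlation(df, "practice_id") would yield a practice-level figure analogous to the +0.84 reported above, and passing "gp_id" would give the GP-level analogue.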
Frequency distributions—As shown in a number of articles,7 age is strongly correlated with satisfaction: older patients are more likely to express greater satisfaction than younger patients. Patient satisfaction also correlates with the patient's sex, which can partly be explained by sociodemographic differences: female patients tend to be younger, are less likely to be in full-time employment and are thus less likely to find the consultation affordable. Results also showed that female patients are more likely than male patients to have to wait more than half an hour before being seen. Consequently, they are also less likely to rate the waiting time in the practice as acceptable.

Satisfaction is also known to vary with the ethnicity of the patient.8,9 Here too, European patients are more likely than non-European patients to reply "completely satisfied" when asked how satisfied they are. Maori, Pacific and Asian patients are more likely to be female and younger than European patients. They are also more likely to be looking after home and family and to be much more frequent visitors to the practice. Asked how long they usually wait at the practice before the consultation begins, the percentage of Pacific patients reporting waits of more than half an hour is 3.5 times that of European patients (13.5% vs 3.9%). Yet almost a quarter of this Samoan and Tongan group is not unhappy waiting more than half an hour, suggesting that these patients have different expectations of the required wait.4

Naturally enough, affordability also played a role in overall satisfaction with the practice: 77% of patients who responded "very affordable" to the question "How affordable was the consultation for you personally?" also said that they were "completely satisfied" with the practice, versus only 26% of patients who felt the consultation was "very expensive". When asked to rate the practice's opening hours, 92% of the sample stated that the opening hours were "good", "very good" or "excellent". Pressed to say what additional hours they would like the practice to open for consultations, close to 60% said there was no need for any. However, one in five patients (20%) suggested the practice could open at the weekend, while a further 16% recommended evening hours. Finally, in response to the question "What After Hours service do you use when your general practice is closed?", close to 60% reported using Accident and Medical Clinics, while another 15% stated they used the Hospital Emergency Department. The St John ambulance service received the most praise: one in three patients (34%) said they were "very satisfied".

International comparison—The item scales used by Ramsay et al10 and Bower et al11 were calculated and compared with those reported by Potiriadis et al,12 who reported the results of their Australian GPAQ survey alongside those obtained in the UK (see Table 3).

Table 3. Inter-country comparison (n and mean item score)

Satisfaction with      New Zealand         Australia           UK
Receptionist           49,033   85.7%      7,122   81.8%       19,803   69.1%
Access to practice     43,689   76.6%      7,111   68.6%       19,302   58.3%
Continuity of care     45,272   80.4%      7,080   76.5%       18,586   66.1%
Communication          45,599   89.1%      7,104   84.0%       18,528   75.9%
Nursing care           18,703   83.4%      2,142   80.0%       13,740   76.3%
Practice overall       47,724   86.3%      7,097   81.6%       19,039   76.5%

Results showed that the mean scores for the NZGPAQ scales in Table 3 ranged from 76.6% for satisfaction with access to the practice to 89.1% for satisfaction with communication between the GP and the patient. The New Zealand results were considerably higher than the UK results and, across the board, about 5 percentage points higher than the Australian results.

Discussion

This is not the first time that the GPAQ has been adapted for use in a different country. In 2006 the questionnaire was translated into Thai and tested on
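A note on Table 3: the mean item scores are reported as percentages, but the scoring transformation behind them is not spelled out here. A common convention, assumed below purely for illustration, is to rescale each 1-to-n Likert response onto a 0-100 scale before averaging, which makes items answered on scales of different lengths comparable across countries.

```python
import pandas as pd

def mean_item_score(responses: pd.Series, scale_max: int) -> float:
    """Assumed convention: map each 1..scale_max Likert response onto 0-100,
    then average, so that differently sized scales can be compared directly."""
    rescaled = (responses.dropna() - 1) / (scale_max - 1) * 100
    return rescaled.mean()

# Hypothetical usage for a communication item answered on a 1-6 scale:
# nz_communication = mean_item_score(survey["doctor_communication"], scale_max=6)
```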

Summary

Abstract

Aim

To determine whether the New Zealand adaptation of the UK-developed General Practice Assessment Questionnaire (GPAQ) is a valid and reliable indicator of the quality of care in New Zealand general practice, and what the survey can tell us about patient satisfaction with general practice.

Method

The Health Services Consumer Research Ltd Primary Care Patient Survey database, which presently contains data from 184 medical practices (549 GPs) and responses from 50,000 enrolled patients, was examined to determine the validity and reliability of the survey instrument. The data were briefly analysed to ascertain how survey results can best be employed to improve the quality of primary care.

Results

A check on representativeness showed that older, female and European patients are over-represented. To determine validity and reliability, the Cronbach alpha statistic was calculated and shown to range from 0.85 to 0.96. Convergent validity was demonstrated by high correlations between items that measured closely related aspects of patient care; discriminant validity was shown by very low correlations between variables that measured unrelated items. Further analyses show how patients' age, sex and ethnic group influence the level of satisfaction experienced.

Conclusion

The NZGPAQ survey can be employed nationwide to improve the quality of primary care because the patient survey results highlight where service delivery is good or excellent and identify where change is needed to improve patient satisfaction.

Author Information

Gerard Zwier, Managing Director, Health Services Consumer Research Limited, Auckland

Acknowledgements

We are grateful to the 549 GPs, their patients and practice staff for participating in the surveys. We particularly thank Dr Tom Marshall, Foundation Chairman of ProCare Health, for taking an early interest in the survey, and Mr Ron Hooton, CEO of ProCare Health, for permission to publish the data from the survey.

Correspondence

Gerard Zwier PhD, Managing Director, Health Services Consumer Research Limited, PO Box 440, Shortland Street, Auckland 1140, New Zealand.

Correspondence Email

inbox@hscr.co.nz

Competing Interests

Gerard Zwier is Managing Director of Health Services Consumer Research Limited. The NZGPAQ is free for New Zealand general practitioners to use and can be downloaded from http://www.hscr.co.nz/wp-content/uploads/2011/02/. Practices and PHOs wishing to implement the NZGPAQ with help from HSCR should contact the author. Commercial companies selling patient evaluation services using the NZGPAQ are required to pay a royalty fee to Cambridge University.

References

1. Bower P, Roland M, Campbell J, Mead N. Setting standards based on patients' views on access and continuity: secondary analysis of data from the general practice assessment survey. BMJ. 2003;326:258.
2. http://www.gpaq.info/ (accessed 20 June 2012).
3. The Royal New Zealand College of General Practitioners. Cornerstone General Practice Accreditation Programme; MOPS triennium 2011-2013 CQI Activity (Continuous Quality Improvement).
4. Campbell J, Roland M, Richards S, et al. Users' reports and evaluations of out-of-hours health care and the UK national quality requirements: a cross sectional study. Br J Gen Pract. 2009;59(558):e8-15.
5. Garratt AM, Danielsen K, Hunskaar S. Patient satisfaction questionnaires for primary care out-of-hours services: a systematic review. Br J Gen Pract. 2007;57:741-7.
6. Zwier G. Patient satisfaction in New Zealand. N Z Med J. 2009;122:38-49.
7. Campbell JL, Ramsay J, Green J. Age, gender, socioeconomic, and ethnic differences in patients' assessments of primary health care. Qual Health Care. 2001;10:90-5.
8. Roland M. Understanding why some ethnic minority patients evaluate medical care more negatively than white patients: a cross sectional analysis of a routine patient survey in English general practices. BMJ. 2009;339:b3450.
9. Lyratzopoulos G, Elliott M, Barbiere JM, et al. Understanding ethnic and other socio-demographic differences in patient experience of primary care: evidence from the English General Practice Patient Survey. BMJ Qual Saf. 2012;21:21-9.
10. Ramsay J, Campbell JL, Schroter S, et al. The General Practice Assessment Survey (GPAS): tests of data quality and measurement properties. Fam Pract. 2001;17:372-9.
11. Bower P, Mead N, Roland M. What dimensions underlie patient responses to the General Practice Assessment Survey? A factor analytic study. Fam Pract. 2002;19:489-95.
12. Potiriadis M, Chondros P, Gilchrist G, et al. How do Australian patients rate their general practitioner? A descriptive study using the General Practice Assessment Questionnaire. Med J Aust. 2008;189:215-9.
13. Jaturapatporn D, Hathirat S, Manataweewat B, et al. Reliability and validity of a Thai version of the General Practice Assessment Questionnaire (GPAQ). J Med Assoc Thai. 2006;89:1491-6.
14. Zwier G, Clarke D. How well do we monitor patient satisfaction? Problems with the nation-wide patient survey. N Z Med J. 1999;112:371-5.
15. Greco M. Raising the bar on consumer feedback - improving health services. Australian Health Consumer. 2005;3:11-12.
16. New Zealand Patient Satisfaction Index March 2002 - Dec 2011. Health Services Consumer Research Limited, Auckland.
17. Roland M, Elliott M, Lyratzopoulos G, et al. Reliability of patient responses in pay for performance schemes: analysis of national General Practitioner Patient Survey data in England. BMJ. 2009;339:b3851.
18. Campbell SM, Kontopantelis E, Reeves D, et al. Changes in patient experiences of primary care during health service reforms in England between 2003 and 2007. Ann Fam Med. 2010;8:499-506.
19. Roland M, Rosen R. England's NHS embarks on controversial and risky market-style reforms in health care. N Engl J Med. 2011;364:1360-6.

Contact diana@nzma.org.nz for the PDF of this article.
