A growing body of evidence suggests that improved consumer engagement (CE) can lead to better health outcomes and contribute to improvements in health service quality and patient safety.[[1,2]] CE has been recognised globally as one of the key priorities within health systems’ continuous development and a requirement for patient-centred care.[[1–5]] The Health Quality & Safety Commission (HQSC) and Ministry of Health (MoH) of New Zealand identify CE as one of their key priorities and recognise it as central to improving quality across the national healthcare system.[[2,6]]

CE in health focuses on consumers and care providers working together to promote and facilitate active patient, whānau (family) and public involvement at all levels of health systems.[[1,7]] An important part of CE, recognised as a right of all people by the World Health Organisation (WHO),[[8]] is engaging patients in health systems governance to inform the design and implementation of healthcare services.[[1]] Health systems governance level engagement may include, for example, being a member of a project team, steering group, consumer group or board.[[9]] Specifically, CE at governance level is characterised by bi-directional flow of information and shared power and responsibility, with consumers being active partners in defining agendas and making decisions.[[1]]

To facilitate CE, many healthcare organisations have established consumer groups. Within the New Zealand health context these are typically called consumer councils, consumer advisory groups or consumer boards. HQSC describes consumer councils as:

key mechanisms through which consumers can participate in how health and disability services are delivered in different communities. In this way, consumer representatives can provide feedback on current services and tell providers what is important to them. They can give advice and input into strategic direction and planning of services. Consumer councils are made up entirely of consumer representatives and have slightly different ways of working, with some having a strong relationship with clinical governance and reporting to the board.[[1]]

The increased commitment to improving CE in New Zealand and globally has heightened the need for robust CE evaluations.[[11]] This includes the recently announced reforms of health services within New Zealand, which signal ‘partnership at all levels of the system and empowering consumers of care to design services which work for them’ as a priority outcome, alongside a strong focus on partnering with the Indigenous Māori community.[[6]]

An effective evaluation tool enables assessment of CE outcomes, learning from current practices, and demonstration of the impact of new policies and investments. However, a recent systematic review of questionnaires to measure CE at governance level[[11]] found that most of the identified tools lacked scientific rigour, were not proven to be reliable, and were not easy to read or understand. Many of the tools were developed for a single project or not made publicly available. In light of these findings, there is an urgent need to develop a psychometrically sound questionnaire to measure CE at governance level.

The overall aim of the current project was to develop and validate a questionnaire to measure health consumer representatives’ CE at governance level named the Middlemore Consumer Engagement Questionnaire (MCE-Q). This mixed methods study used a range of qualitative and quantitative methods and consisted of two phases. The aims for each phase were:

1. To develop an instrument to measure CE at governance level (Phase 1).

2. To demonstrate the reliability and validity of this instrument (Phase 2).

We aimed to explore if consumers felt enabled and supported to contribute to improving healthcare systems. We partnered with the Counties Manukau (CM) Health Consumer Council (the Consumer Council) to bring together a team of health researchers, consumers, practitioners and statisticians, with expertise in consumer experience, psychometrics, co-design and Indigenous issues across a wide array of settings. The questionnaire we planned to develop and validate aimed to measure the self-perceived level of engagement of consumers contributing at governance level, and to facilitate continuous healthcare systems improvement, decision-making processes and international comparisons relating to CE.

The purpose of this paper is to describe the development of the MCE-Q. In the next section, methods and findings from Phase 1 are reported, as they informed the subsequent data collection and analysis in Phase 2. This is followed by a section reporting methods and findings from Phase 2. Finally, an integrated discussion of the project’s findings, its limitations and a conclusion are provided.

PHASE 1

Phase 1 focused on generating candidate items relevant to CE and developing the questionnaire. We first established an advisory group, which supported the project team, providing expertise in areas including CE, Māori health and Pasifika health.

Phase 1 methods

Study design

Phase 1 was guided by recommendations by Churchill[[12]] and Streiner et al[[13]] for developing outcome measures. It consisted of multiple steps, including domain specification, item generation, a focus group, cognitive interviews, and an in-depth review of the proposed questionnaire. Figure 1 presents the steps of Phase 1.

Figure 1: Phase 1 steps.

Setting and location

The study was conducted in Auckland, New Zealand, between July and October 2020. This period included a range of disruptions caused by the global COVID-19 pandemic, but the conduct of this study was not interrupted.

Data collection

Content domain specification

The first step was to define the content domain of the proposed questionnaire. This process was based on published literature relating to CE, previously completed work of the Consumer Council and project team, and the team’s expertise in consumer experience and measurement. Our focus was also on aligning our working definition with the CE-related components identified by the HQSC and WHO.[[2,8]] We also aimed to identify any potential subdomains which could then be psychometrically assessed in Phase 2.

Item generation

We included multiple data sources to generate potential items for the MCE-Q. First, a list of initial items was formulated during a workshop with the Consumer Council. Next, a literature review was conducted to identify any relevant scientific publications and existing tools. As a result, a further set of candidate items were identified and included in the item list. Finally, the item list was reviewed and refined by the project team, who focused on deleting any duplicate or otherwise redundant items, and on item readability.

Focus group with health consumer representatives

We conducted a workshop-style focus group including participants who were current or former Consumer Council members. The Consumer Council was established to represent the interests of consumers, bring an inpatient and ambulatory consumer and family perspective to the development of Counties Manukau Health plans, policies, publications and operational decisions, and raise issues being identified in the community. It includes people from a variety of backgrounds who have a strong consumer understanding of the healthcare system and represent the voices of their communities. Potential participants were invited to take part via an invitation email sent out by the Consumer Council’s secretariat. There were no exclusion criteria. The focus group lasted approximately two hours, was facilitated by three members of the project team (LM, TA, KC), and was audio-recorded. The purpose of the focus group was to review the questionnaire instructions, proposed items, recall period and response format, and potentially generate further items. Recognition of time and expertise, in the form of koha (gift), and support with transportation were provided to all consumer participants of the focus group. Basic demographic data were collected.

Cognitive interviews

Following analysis of the focus group data, two members of the project team conducted cognitive interviews[[14]] with a purposively selected sample of current and former members of the Consumer Council. We used cognitive interviewing to evaluate whether the survey respondents interpreted the survey instructions and items as they were intended, and whether the survey format enabled the respondents to select responses that matched their answers.[[14]]

Consumer representatives were invited to take part via an invitation email. Our sampling strategy focused on ensuring representation across gender, ethnicity and length of Consumer Council service. There were no exclusion criteria.

Consumer participants were interviewed individually, face-to-face. They were asked to ‘think-aloud’[[15]] as they completed a refined version of the proposed questionnaire. The interviewer explored any potential issues as participants responded to items. All interviews were audio-recorded. Basic demographic data were collected.

In-depth review

Our project team met regularly throughout the data collection period to review the transcripts and refine the questionnaire. The questionnaire instructions and items were reviewed for clarity and redundancy. Any issues were resolved by discussion.

Data analysis

The focus group discussion was transcribed verbatim and analysed using Directed Content Analysis,[[16]] focusing specifically on defining CE, any items with perceived lack of clarity, and on generating new candidate items. The proposed items and instructions were refined to improve comprehension by participants and to elicit experiences related to CE at governance level.

Cognitive interviews were transcribed, and analysed using Directed Content Analysis, focusing specifically on identifying items that were not easily understood, and on the acceptability of the proposed response categories.

We used the Flesch Reading Ease score[[17]] to test the readability of the questionnaire instructions and items.
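The Flesch Reading Ease score is computed from average sentence length and average syllables per word: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). The Python sketch below is only an illustration of that formula; the syllable counter is a rough vowel-group heuristic (real readability tools use dictionaries and more refined rules):

```python
import re

def count_syllables(word):
    # Heuristic: count groups of consecutive vowels, dropping a likely
    # silent trailing 'e'; every word counts as at least one syllable.
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores indicate easier text; a score around 60–70 corresponds roughly to Plain English readable by 13–15 year olds.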

Ethics

Ethical approval for the study (Phase 1 and 2) was received from the Auckland Health Research Committee (AH3350).

Phase 1 findings

Content domain specification

The content domain of the proposed questionnaire is health CE at governance level. For the purpose of this study, we employed the following definition of CE at governance level (adapted from Abelson et al[[18]] and Baker et al[[19]]):

Consumer engagement at governance level is characterised by shared power and responsibility, with consumers being active partners in defining agendas and making decisions. Information flows bi-directionally throughout the process of engagement, and decision-making responsibility is shared.

This definition suggests there may be some subdomains within the overall domain of CE, for example, shared power, responsibility, active participation and decision-making. We planned to explore any potential subdomains in Phase 2.

Item generation

In our prior work which initiated the current project, the Consumer Council and project team generated a set of 27 candidate items relating to CE that were included in the initial item bank for the proposed CE questionnaire. These items considered consumers’ experiences of being involved in governance groups, for example, I feel that my views are heard and I feel confident when challenging views expressed by other members of the group. Next, a literature review conducted by a trained academic librarian generated a further set of items. In total, the initial list included 112 candidate items.

The project team iteratively reviewed the initial list of items and selected 36 that appeared to represent the content domain of CE most strongly. All items were then reviewed for readability, ensuring they used brief and plain language and had consistent item valence (positive versus negative wording).

We intended to use a Likert-type scale to indicate the level of agreement with each of the items. The proposed response categories ranged from ‘strongly disagree’ (scored ‘1’) to ‘strongly agree’ (scored ‘5’). We planned to explore the preference for using the middle response category (‘neither agree, nor disagree’) with the focus group and interview participants. Scores for each item would be summated to give the total score.
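The summated scoring described above can be sketched as follows. This is a minimal Python illustration, not part of the study’s tooling; the item numbers and the `reverse_items` argument (for any negatively worded items, flipped as 6 minus the response on the five-point scale) are hypothetical:

```python
def total_score(responses, reverse_items=frozenset()):
    """Sum 5-point Likert responses into a total score.

    responses: dict mapping item number -> score (1-5).
    reverse_items: item numbers whose wording is negative, so their
    scores are reversed (6 - score) before summing.
    """
    total = 0
    for item, score in responses.items():
        if not 1 <= score <= 5:
            raise ValueError(f"item {item}: score {score} outside 1-5 range")
        total += (6 - score) if item in reverse_items else score
    return total
```

For example, responses of 5, 4 and 2 with the third item reverse-scored would give 5 + 4 + (6 − 2) = 13.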

The list of 36 items was then formatted into a prototype draft of the questionnaire. This included questionnaire instructions (formulated by the project team) and the proposed response categories. This draft was then discussed with consumer representatives during a focus group.

Focus group with the Consumer Council members

Six participants took part in the focus group (Table 1).

Participants found the questionnaire instructions to be generally easy to understand. However, they thought more clarity was needed around the meaning of ‘a health consumer in general’ versus ‘a health consumer at governance level’. Some participants noted that the difference between the two referred to the level of responsibility and argued that a health consumer at governance level represents not only their own lived experience, but also their community’s. Participants also argued that it was important to set the context as clearly as possible in the instructions, for example: Rate each item thinking about your engagement in [group] over the last [number] months.

Next, participants reviewed all 36 candidate items. Overall, all participants agreed that the questions were relevant and that most should be included in a measure of CE at governance level. They noted similarities between some items (for example, ‘My opinions are listened to and valued’ and ‘I feel that my views are heard’), and argued for rewording and/or clarification of others (for example, replacing ‘barriers’ with ‘challenges’ in ‘There are barriers that impact my ability to contribute in meetings’). Furthermore, participants argued that the questionnaire must consider respondents’ cultural background, with one participant stating that ‘cultural sensitivity is universal’. Finally, as most participants thought that the use of a five-point Likert-type response scale was appropriate, we decided to include the middle response category ‘neither agree, nor disagree’.

The project team read and discussed the focus group transcripts, and iteratively reviewed the questionnaire draft. A number of refinements were made, including clarifying the instructions and item wording, providing examples where appropriate, incorporating the principle of partnership into some of the items, and further improving the readability of the questionnaire. No items were deleted following the focus group.

View Table 1 and Table 2.

Cognitive interviews

Next, the prototype questionnaire was tested through cognitive interviews with five participants (Table 2).

Participants found the questionnaire instructions and the majority of items easy to understand. They suggested rephrasing some of the items to avoid unnecessary ambiguity, which resulted in further improvements to the questionnaire’s readability. Overall, participants thought that the questionnaire was easy to complete and that it covered a broad spectrum of areas relating to CE at governance level.

Drafting the questionnaire

After a number of revisions incorporating findings from the focus group and cognitive interviews, the project team prepared a further questionnaire draft for psychometric performance testing in Phase 2. The questionnaire included 36 CE items using a five-point Likert-type response format (Supplementary Table 1) and nine demographic questions (Supplementary Table 2). The Flesch Reading Ease score was 61, suggesting the questionnaire was written in Plain English and easily understood, on average, by a student aged 13–15 years.

The proposed questionnaire was then uploaded to the REDCap database[[20]] to enable an anonymous, online distribution to health consumer representatives in Phase 2.

PHASE 2

Phase 2 focused on testing the following psychometric properties of the proposed questionnaire: construct and concurrent validity, internal consistency and test-retest reliability.

Phase 2 methods

Study design

Phase 2 consisted of a main CE survey study with health consumer representatives and a qualitative interview study with CE leaders conducted concurrently. This was followed by an additional test-retest survey study.

Setting and location

The project team was based in Auckland, New Zealand. The survey was conducted online with participants from New Zealand, Australia and Canada between December 2020 and July 2021.

Data collection

Main CE survey

The proposed questionnaire was administered via the REDCap database[[20]] and completed anonymously. The work of Comrey and Lee[[21]] and Hair et al[[22]] suggests that a sample size of 200 or above would be sufficient for carrying out a reliability analysis. The survey was distributed by invitation via district health boards’ Consumer Council chairpersons from around New Zealand, the HQSC, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

To test the proposed questionnaire’s concurrent validity, we selected a similar questionnaire, the Patient and Public Engagement Evaluation Tool (PPEET).[[18]] PPEET was developed at McMaster University (Canada) by public and patient engagement experts and is widely used in Canada and other countries by healthcare organisations.[[23]] PPEET includes 13 items and takes about two to three minutes to complete. A consecutive sub-sample of participants were invited to complete the validation measure, PPEET.

CE leaders’ interviews

We interviewed New Zealand CE leaders (for example, chairs, managers) of organisations/groups formally involving health consumer representatives at governance level, with at least three years of experience in a leadership role. They were purposively selected from within the project lead’s (LM) professional network and invited via email to take part. There were no exclusion criteria.

CE leaders were interviewed individually, face-to-face. The interviewer (LW) used an interview guide to explore participants’ perspectives on measuring CE and how such data could be used in the future. The interviews were audio-recorded and transcribed verbatim. We expected to interview 5–10 people, depending on the depth and richness of the collected data.[[24]]

Test-retest CE survey

Following the initial survey, the proposed questionnaire was refined based on statistical analysis and then underwent an evaluation of its test-retest reliability. We aimed to recruit a sample of n=30 participants to complete the refined version of the proposed questionnaire on two occasions, approximately one week apart.

Data analysis

All statistical analyses were performed using R,[[25]] SAS/STAT software version 9.4[[26]] and SPSS version 26.0 (SPSS Inc., Chicago, IL). Respondents with over 10% missing values were removed from the analysis dataset. The data entries were double-checked to ensure accuracy.
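The missing-data rule above can be illustrated with a small sketch, assuming responses are stored one row per respondent with `None` marking a missing value (the study itself used the statistical packages named above for this step):

```python
def filter_respondents(data, max_missing=0.10):
    """Keep respondents whose fraction of missing (None) item responses
    does not exceed max_missing; i.e., drop anyone over 10% missing."""
    kept = []
    for row in data:
        missing_fraction = sum(1 for v in row if v is None) / len(row)
        if missing_fraction <= max_missing:
            kept.append(row)
    return kept
```

Under this rule a respondent missing exactly 10% of items is retained, while anyone above that threshold is excluded.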

The demographics of the respondents and the response profiles were presented descriptively in terms of counts and proportions.

Principal Component Analysis (PCA)[[27]] was performed to confirm construct validity. PCA is a method for factor extraction and a variable-reduction technique. It is used to reduce the number of variables (ie questionnaire items) while retaining as much of the original variance as possible.[[27]] It was also used to test whether the underlying construct (ie CE) loads onto all or only some of the variables. Pearson’s correlations were produced for all 36 items. Both the Kaiser–Meyer–Olkin (KMO) Test and Bartlett’s Test of Sphericity were conducted to confirm the appropriateness of conducting the PCA. The KMO statistic varies between 0 and 1.0; values >0.5 are considered ‘barely acceptable’ and values >0.9 are deemed most suitable.[[28]] For Bartlett’s Test, a significant result (P≤0.05) indicates that the dataset is suitable for PCA.[[28]] For the PCA, an oblique rotation was chosen as the underlying items are related. The number of components to be retained was determined using a scree plot with parallel analysis. Items that were strongly correlated (above 0.7) with other items were removed from the survey.
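The inter-item correlation screen can be sketched in isolation (the full PCA workflow, including the scree plot and parallel analysis, was run in the statistical packages named above). The function names below are illustrative, and items are assumed to have non-zero variance:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    # Pearson's r for two equal-length lists of scores.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundant_pairs(items, threshold=0.7):
    """Flag item pairs whose absolute inter-item correlation exceeds
    the threshold, as candidates for removal.

    items: dict mapping item number -> list of respondent scores."""
    flagged = []
    for (i, xi), (j, xj) in combinations(sorted(items.items()), 2):
        if abs(pearson(xi, xj)) > threshold:
            flagged.append((i, j))
    return flagged
```

Flagged pairs would then be reviewed by the team, as in the study, rather than removed automatically.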

Concurrent validity was evaluated using Pearson’s correlation coefficient to assess the correlation between the proposed questionnaire and PPEET. For both test-retest reliability and construct validity, agreement at the individual item level was assessed. Relative reliability was determined by calculating the two-way random intraclass correlation coefficient (ICC) for absolute agreement of single measures. The 95% confidence interval (CI) was calculated for each ICC. Reliability was considered poor for ICC values <0.40, fair for values between 0.40–0.59, good for values between 0.60–0.74, and excellent for values between 0.75–1.00.[[29]] ICC values above 0.75 were considered acceptable for test-retest reliability.[[30]] Cronbach’s alpha coefficient, which ranges from 0 to 1.0, was used to test internal consistency; Streiner et al consider an alpha value of >0.7 acceptable.[[13]]
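As an illustration of the internal consistency statistic, Cronbach’s alpha can be computed directly from item and total-score variances: alpha = (k/(k−1)) × (1 − Σ item variances / total-score variance). This minimal Python sketch assumes complete responses (the study’s analyses were run in R, SAS and SPSS, not with this code):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a complete response matrix.

    scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])
    n = len(scores)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items move together perfectly, the item variances sum to half the total variance (for two items), and alpha reaches its maximum of 1.0.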

Interviews with CE leaders were analysed using Directed Content Analysis, focusing specifically on participants’ perceptions of what constitutes CE at governance level, and the usefulness of the proposed questionnaire in measuring and improving CE.

Phase 2 findings

Main CE survey results

Two hundred and twenty-nine participants from three countries completed the anonymous CE survey (Table 3 and Table 4). Most participants were 45 years or older (84.3%), and approximately two thirds identified as female. The highest scored items were item 3 (‘I am able to express my views freely’), 4 (‘participation in this group is important to me’), and 10 (‘I feel safe to speak from my personal perspective, for example, my cultural perspective, my community's perspective’, etc). Items with the lowest scores were item 22 (‘I was well oriented to the work of this group’), 24 (‘the work achieved by this group has met my expectations’), 33 (‘I would not change anything about this group’), and the reverse-scored item 12 (‘there are things that reduce my ability to contribute in meetings, for example, related to my cultural background or use of jargon’). View Table 3 and Table 4.

Construct validity

Out of the 229 participants, 208 completed all items; hence factor analysis was carried out on this sample of 208. Based on principal component analysis (Supplementary Table 3 and Figure 2), all items fitted under one dimension, which explained 53% of the total variance. All items with correlations above 0.75 were reviewed for potential redundancy. As a result, 11 items were removed (Supplementary Table 4). The KMO statistic (0.96) and Bartlett’s test (P<0.0001) confirmed that the items were intercorrelated and the sample size was adequate.

Figure 2: Scree plot of the number of components in the principal components analysis.

Concurrent validity

A sample of 87 participants completed both the proposed survey and PPEET survey. Pearson’s correlation coefficient between total scores from the two surveys was high (0.93).

Internal consistency

Cronbach’s alpha for the initial 36-item scale was 0.97. For the final 25 items Cronbach’s alpha was 0.96 and all corrected item-total correlations ranged from 0.42 to 0.85, suggesting satisfactory internal consistency.

Test-retest reliability

Thirty-four participants took part in the test-retest evaluation. The results for both ICC (0.84) and Cronbach’s alpha (0.91) met the criterion, indicating that the proposed tool has high test-retest reliability (Supplementary Table 5).

CE leaders’ interviews

We interviewed five CE leaders (Table 5).

Consumer engagement was unanimously viewed as a ‘unique partnership’ with an organisation to ‘amplify the voice of the communities’, especially for populations who experience health inequities such as Māori, Pasifika and those living with disabilities. One participant argued it was important to engage consumers ‘in a way that meets their needs [and the community’s]’; the community should be ‘part of the solution, or [part of] the process to getting a solution’. There appeared to be a strong desire for consumer engagement to be ‘part of [the] organisational structure ... built in [to processes] and in everything we do.’ Participants thought that health consumers have the potential to be involved in strategic decision-making, but currently had little involvement from the start and throughout any such initiatives.

Participants argued that there is currently limited exploration into the experience of consumers at governance level beyond regular group meetings/hui or individual reflection and feedback sessions with their managers. Reportedly, there was no ‘formal evaluation’ process used to consistently review consumers’ experiences of working at governance levels in their organisations. However, all managers acknowledged that monitoring consumer experience was a necessary ‘mechanism for improvement’ and thought that the proposed questionnaire would be useful in facilitating this on an annual or bi-annual basis.

The managers felt that the tool could help to identify gaps in understanding, relating to orientation and organisational expectations and highlight whether consumers were working in the most appropriate spaces within an organisation. It also provided a ‘platform’ for less vocal members of the group to share their opinions and made ‘[the consumer’s] needs better known to [the managers] … and therefore the [consumer] contribution is more effective’. Gathering feedback from consumers was seen as important, with one participant proposing that feedback from any survey tool should be ‘shared openly with consumers,’ and that an ‘action plan’ should be formed and then enacted appropriately.

I think with anything, you can do a survey, but it’s about what you do with it... what sort of action plan will come from those results?

DISCUSSION

In this paper we report findings from a study developing and validating a novel questionnaire to measure CE at governance level. We built and expanded on the strengths of previously published CE-related measures by working closely with consumer representatives and CE leaders from a wide range of backgrounds, and focusing on psychometric performance of the proposed tool. The MCE-Q comprises 25 items (Supplementary Table 6) representing one domain, uses a five-point Likert-type response format, and can be completed in approximately 10 minutes. It can be downloaded from [https://koawatea.countiesmanukau.health.nz/co-design/tools-and-resources]. The MCE-Q showed face, construct and concurrent validity, and excellent internal consistency and test-retest reliability. It can be used by healthcare organisations to monitor how well they engage their consumer representatives at governance level, identify areas for improvement and make national and international comparisons.

Healthcare providers’ focus, relating to health consumers’ engagement, has primarily been on consultation.[[1]] The mounting evidence showing that healthcare outcomes (including patient outcomes) can be improved by greater CE[[2,5]] has led many providers to recognise the need to create partnerships with consumers and engage them across all levels of healthcare systems, including at the governance level.[[1]] The results of our survey, specifically the relatively low ratings for two items relating to consumer group orientation/onboarding and consumers’ expectations, suggest that the current processes for creating consumer–provider partnerships may be insufficient. The proposed questionnaire can serve as a tool to better understand the processes of developing and maintaining consumer–provider partnerships, and to monitor how well healthcare organisations are engaging with their consumers at governance level. This questionnaire could also supplement existing organisational performance quality and safety indicators, such as the New Zealand HQSC’s Quality Safety Marker for Consumer Engagement, as it provides the perspective of consumers at governance level on how well healthcare organisations perform in this area.

Limitations and future work

In this project, we developed a questionnaire with and for health consumers and groups that form the general population. We did not focus on the preferences of any specific groups or communities, but rather on developing a tool that can be used by all for benchmarking and making national and international comparisons. As a result, the proposed questionnaire may not be sensitive to the needs and preferences of such groups or communities, some of whom experience relentless health inequities and whose voices are pertinent to healthcare improvement. The MCE-Q can highlight a need for improvements around cultural safety for a particular group. If such a need was identified, we recommend a more nuanced exploration of the issue for the specific group using methods that offer high cultural responsiveness and are informed, for example, by Talanoa or kaupapa Māori methodology. One example of such a group is the Indigenous Māori peoples of New Zealand. Indeed, the legal obligations of Te Tiriti o Waitangi reinforce the necessity to develop and validate a CE at governance level tool specific to Māori. In the New Zealand context, the development of such an Indigenous tool would be best led by Māori. We recommend that future research be conducted to enable Māori to exercise their rights as Indigenous peoples and as partners through Te Tiriti o Waitangi.

Another limitation is that only New Zealand-based CE leaders were interviewed. We interviewed people in senior management roles who are currently involved in a range of CE initiatives in New Zealand. The dialogue quality during the interviews was high and we found that participants’ views aligned with the current international CE research: the improvement of CE being one of the key priorities for healthcare systems, the lack of a psychometrically sound CE measure, and the need to better understand how to effectively engage consumers in the development and delivery of care services.[[3,5,31]] As we engaged with Australian and Canadian health consumer organisations, we found there was a clear recognition of the role of CE in healthcare systems. CE organisations from both countries supported us with the distribution of the proposed survey. While there are undoubtedly differences between New Zealand’s healthcare system and those of these two (and likely other) countries, the role of CE in the delivery and quality improvement of these systems is recognised globally. Thus, we believe that this sample provided sufficient information power[[24]] for understanding participants’ perspectives on measuring CE, and that the proposed tool could be used in the future in New Zealand and other countries.

Notably, our focus was on recruiting a sample size sufficient to carry out the necessary psychometric analysis of the proposed questionnaire and not on measuring CE per se. As such, the Phase 2 survey was not powered to produce generalisable results relating to the state of CE at governance level in the three participating countries. Nevertheless, the questionnaire we developed can now be used for monitoring CE by individual organisations, and also at national and international level.

Finally, we only used Classical Test Theory methods to develop the MCE-Q. We are planning to apply item response theory and Rasch analysis to further improve the psychometric performance of the questionnaire.

CONCLUSION

The MCE-Q is a novel instrument to measure CE at governance level. It showed sound psychometric properties, and its value and relevance were recognised by both health consumer representatives and decision-makers representing healthcare organisations in New Zealand. It can be used by healthcare organisations around the world for benchmarking, making national and international comparisons, and to drive the quality of health services to better meet the needs of the people they serve.

Summary

Abstract

Aim

To develop and validate a questionnaire to measure health CE at governance level.

Method

This study used qualitative and quantitative methods (including focus groups, cognitive interviews and an international survey), and consisted of two phases. In Phase 1, an initial list of items was generated and refined with feedback from health consumer representatives. In Phase 2, a draft survey was distributed to n=227 consumers from New Zealand, Australia and Canada. The benefit and relevance of using the questionnaire was explored through face-to-face interviews with five CE leaders from New Zealand healthcare organisations.

Results

The proposed questionnaire comprises 25 statements relating to CE. Respondents indicate their level of agreement with the statements on a five-point Likert-type scale. Focus group and cognitive interview participants found the questionnaire relevant and easy to understand. The questionnaire scores correlated with the PPEET, another instrument measuring consumer engagement, and showed excellent internal consistency (Cronbach’s alpha=0.97), unidimensionality and test-retest reliability (r=0.84).

Conclusion

The proposed questionnaire measures CE at governance level and can be used for international comparisons and benchmarking. It showed sound psychometric properties, and its value and relevance were recognised by health consumer representatives and leaders with CE roles in New Zealand healthcare organisations.

Author Information

Karol J Czuba: Senior Evaluation Officer, Ko Awatea, Counties Manukau Health, Auckland. Christin Coomarasamy: Biostatistician, Ko Awatea, Counties Manukau Health, Auckland. Richard J Siegert: Professor of Psychology and Rehabilitation, Department of Psychology and Neuroscience, Auckland University of Technology, Auckland. Renee Greaves: Experience and Engagement Advisor, Counties Manukau Health, Auckland. Lucy Wong: Improvement Advisor, Ko Awatea, Counties Manukau Health, Auckland. Te Hao S Apaapa-Timu: Māori Research Advisor, Ko Awatea, Counties Manukau Health, Auckland. Lynne Maher: Principal of Co-Design, Ko Awatea, Counties Manukau Health, Auckland.

Acknowledgements

We are grateful to the following individuals and organisations for their support: Sai Panat (Data Manager), members of the CM Health Consumer Council, the Health Quality & Safety Commission in New Zealand, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

Correspondence

Lynne Maher, Ko Awatea, Middlemore Hospital, 100 Hospital Road, Otahuhu, Private Bag 93311, Auckland 1640

Correspondence Email

Lynne.Maher@middlemore.co.nz

Competing Interests

Nil.

1) Carman KL, Dardess P, Maurer M, Sofaer S, Adams K, Bechtel C, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health Affairs. 2013;32(2):223-31.

2) Health Quality and Safety Commission. Progressing consumer engagement in primary care. Wellington, New Zealand: Health Quality and Safety Commission; 2019.

3) Jacobs LM, Brindis CD, Hughes D, Kennedy CE, Schmidt LA. Measuring consumer engagement: a review of tools and findings. The Journal for Healthcare Quality (JHQ). 2018;40(3):139-46.

4) Dukhanin V, Topazian R, DeCamp M. Metrics and evaluation tools for patient engagement in healthcare organization- and system-level decision-making: a systematic review. International Journal of Health Policy and Management. 2018;7(10):889.

5) Bath J, Wakerman J. Impact of community participation in primary health care: what is the evidence? Australian Journal of Primary Health. 2015;21(1):2-8.

6) Ministry of Health. The Health and Disability System Review: Proposals for Reform. In: Department of the Prime Minister and Cabinet, editor. Wellington, New Zealand; 2021.

7) Coulter A. Engaging patients in healthcare: McGraw-Hill Education (UK); 2011.

8) World Health Organization. Declaration of Alma-Ata. Geneva: WHO; 1978.

9) Bechtel C, Sweeney J, Carman K, Dardess P, Maurer M, Sofaer S, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health Affairs. 2013;32(2):223-31.

10) Health Quality and Safety Commission. Consumer councils 2020 [updated 24/07/2020]. Available from: https://www.hqsc.govt.nz/our-programmes/partners-in-care/work-programmes/consumer-representation/consumer-councils/.

11) Boivin A, L'Espérance A, Gauvin FP, Dumez V, Macaulay AC, Lehoux P, et al. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expectations. 2018;21(6):1075-84.

12) Churchill Jr GA. A paradigm for developing better measures of marketing constructs. Journal of marketing research. 1979;16(1):64-73.

13) Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use: Oxford University Press, USA; 2015.

14) Conrad F, Blair J, Tracy E, editors. Verbal reports are data! A theoretical approach to cognitive interviews. Proceedings of the Federal Committee on Statistical Methodology Research Conference; 1999: Citeseer.

15) Jaspers MW, Steen T, Van Den Bos C, Geenen M. The think aloud method: a guide to user interface design. International journal of medical informatics. 2004;73(11-12):781-95.

16) Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qualitative health research. 2005;15(9):1277-88.

17) Flesch RF. How to test readability. New York: Harper & Brothers; 1951.

18) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5): 1132-43.

19) Baker GR, Fancott C, Judd M, O'Connor P. Expanding patient engagement in quality improvement and health system redesign: Three Canadian case studies. Healthcare management forum. 2016;29(5): 176-82.

20) Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42(2): 377-81.

21) Comrey AL, Lee HB. A first course in factor analysis: Psychology press; 2013.

22) Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 2009.

23) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5):1132-43.

24) Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qualitative health research. 2016;26(13): 1753-60.

25) R Core Team. R: A language and environment for statistical computing. Vienna, Austria; 2013.

26) SAS Institute. SAS/STAT Version 9.4. Cary, US: SAS Institute; 2015.

27) Conway JM, Huffcutt AI. A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods. 2003;6(2):147-68.

28) Field A, Miles J, Field Z. Discovering statistics using R: Sage Publications; 2012.

29) Cicchetti D, Bronen R, Spencer S, Haut S, Berg A, Oliver P, et al. Rating scales, scales of measurement, issues of reliability: resolving some critical issues for clinicians and researchers. The Journal of nervous and mental disease. 2006;194(8): 557-64.

30) Portney LG, Watkins MP. Foundations of clinical research: applications to practice: Pearson/Prentice Hall Upper Saddle River, NJ; 2009.

31) Manafo E, Petermann L, Mason-Lai P, Vandall-Walker V. Patient engagement in Canada: a scoping review of the ‘how’ and ‘what’ of patient engagement in health research. Health Research Policy and Systems. 2018;16(1): 5.

The increased commitment to improving CE in New Zealand and globally has created a need for robust CE evaluations.[[11]] This includes the recently announced reforms of health services within New Zealand, which signal ‘partnership at all levels of the system and empowering consumers of care to design services which work for them’ as a priority outcome, alongside a strong focus on partnering with the Indigenous Māori community.[[6]]

An effective evaluation tool enables assessment of CE outcomes, learning from current practices, and demonstration of the impact of new policies and investments. However, a recent systematic review of questionnaires to measure CE at governance level[[11]] found that most of the identified tools lacked scientific rigour, were not proven to be reliable, and were not easy to read or understand. Many of the tools were developed for a single project or not made publicly available. In light of these findings, there is an urgent need for a psychometrically sound questionnaire to measure CE at governance level.

The overall aim of the current project was to develop and validate a questionnaire to measure health consumer representatives’ CE at governance level, named the Middlemore Consumer Engagement Questionnaire (MCE-Q). This mixed methods study used a range of qualitative and quantitative methods and consisted of two phases. The aims for each phase were:

1. To develop an instrument to measure CE at governance level (Phase 1).

2. To demonstrate the reliability and validity of this instrument (Phase 2).

We aimed to explore if consumers felt enabled and supported to contribute to improving healthcare systems. We partnered with the Counties Manukau (CM) Health Consumer Council (the Consumer Council) to bring together a team of health researchers, consumers, practitioners and statisticians, with expertise in consumer experience, psychometrics, co-design and Indigenous issues across a wide array of settings. The questionnaire we planned to develop and validate aimed to measure the self-perceived level of engagement of consumers contributing at governance level, and to facilitate continuous healthcare systems improvement, decision-making processes and international comparisons relating to CE.

The purpose of this paper is to describe the development of the MCE-Q. In the next section, methods and findings from Phase 1 are reported, as they informed the subsequent data collection and analysis in Phase 2. This is followed by a section reporting methods and findings from Phase 2. Finally, an integrated discussion of the project’s findings, limitations and conclusions is provided.

PHASE 1

Phase 1 focused on generating candidate items relevant to CE and developing the questionnaire. We first established an advisory group, which supported the project team, providing expertise in areas including CE, Māori health and Pasifika health.

Phase 1 methods

Study design

Phase 1 was guided by recommendations by Churchill[[12]] and Streiner et al,[[13]] for developing outcome measures. It consisted of multiple steps, including domain specification, item generation, a focus group, cognitive interviews, and an in-depth review of the proposed questionnaire. Figure 1 presents the steps of Phase 1.

Figure 1: Phase 1 steps.

Setting and location

The study was conducted in Auckland, New Zealand, between July and October 2020. This time scale included a range of disruptions caused by the global COVID-19 pandemic, but the conduct of this study was not interrupted.

Data collection

Content domain specification

The first step was to define the content domain of the proposed questionnaire. This process was based on published literature relating to CE, previously completed work of the Consumer Council and project team, and the team’s expertise in consumer experience and measurement. Our focus was also on aligning our working definition with the CE-related components identified by the HQSC and WHO.[[2,8]] We also aimed to identify any potential subdomains which could then be psychometrically assessed in Phase 2.

Item generation

We included multiple data sources to generate potential items for the MCE-Q. First, a list of initial items was formulated during a workshop with the Consumer Council. Next, a literature review was conducted to identify any relevant scientific publications and existing tools. As a result, a further set of candidate items were identified and included in the item list. Finally, the item list was reviewed and refined by the project team, who focused on deleting any duplicate or otherwise redundant items, and on item readability.

Focus group with health consumer representatives

We conducted a workshop-style focus group including participants who were current or former Consumer Council members. The Consumer Council was established to represent the interests of consumers, to bring an inpatient and ambulatory consumer and family perspective to the development of Counties Manukau Health plans, policies, publications and operational decisions, and to raise issues identified in the community. It includes people from a variety of backgrounds who have a strong consumer understanding of the healthcare system and represent the voices of their communities. Potential participants were invited to take part via an invitation email sent out by the Consumer Council’s secretariat. There were no exclusion criteria. The focus group lasted approximately two hours, was facilitated by three members of the project team (LM, TA, KC), and was audio-recorded. The purpose of the focus group was to review the questionnaire instructions, proposed items, recall period and response format, and potentially generate further items. Recognition of time and expertise, in the form of koha (gift), and support with transportation were provided to all consumer participants of the focus group. Basic demographic data were collected.

Cognitive interviews

Following analysis of the focus group data, two members of the project team conducted cognitive interviews[[14]] with a purposively selected sample of current and former members of the Consumer Council. We used cognitive interviewing to evaluate whether the survey respondents interpreted the survey instructions and items as they were intended, and whether the survey format enabled the respondents to select responses that matched their answers.[[14]]

Consumer representatives were invited to take part via an invitation email. Our sampling strategy focused on ensuring representation across gender, ethnicity and length of Consumer Council service. There were no exclusion criteria.

Consumer participants were interviewed individually, face-to-face. They were asked to ‘think-aloud’[[15]] as they completed a refined version of the proposed questionnaire. The interviewer explored any potential issues as participants responded to items. All interviews were audio-recorded. Basic demographic data were collected.

In-depth review

Our project team met regularly throughout the data collection period to review the transcripts and refine the questionnaire. The questionnaire instructions and items were reviewed for clarity and redundancy. Any issues were resolved by discussion.

Data analysis

The focus group discussion was transcribed verbatim and analysed using Directed Content Analysis,[[16]] focusing specifically on defining CE, any items with perceived lack of clarity, and on generating new candidate items. The proposed items and instructions were refined to improve comprehension by participants and to elicit experiences related to CE at governance level.

Cognitive interviews were transcribed, and analysed using Directed Content Analysis, focusing specifically on identifying items that were not easily understood, and on the acceptability of the proposed response categories.

We used the Flesch Reading Ease score[[17]] to test the readability of the questionnaire instructions and items.
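The Flesch Reading Ease score combines average sentence length with average syllables per word. A minimal Python sketch is shown below (illustrative only, not the tool the study used; published calculators differ mainly in how they count syllables, and the vowel-group heuristic here is an approximation):

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/word). Higher scores = easier text; roughly
    60-70 corresponds to Plain English. Syllables are estimated with
    a simple vowel-group heuristic, so results are approximate."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one word and sentence")

    def syllables(word):
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1  # crude silent-'e' adjustment
        return max(n, 1)

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * total_syllables / len(words))
```

Short, monosyllabic sentences score high; long sentences of polysyllabic words score low (the scale is open-ended and can go negative for very dense text).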

Ethics

Ethical approval for the study (Phase 1 and 2) was received from the Auckland Health Research Committee (AH3350).

Phase 1 findings

Content domain specification

The content domain of the proposed questionnaire is health CE at governance level. For the purpose of this study, we employed the following definition of CE at governance level (adapted from Abelson et al[[18]] and Baker et al[[19]]):

Consumer engagement at governance level is characterised by shared power and responsibility, with consumers being active partners in defining agendas and making decisions. Information flows bi-directionally throughout the process of engagement, and decision-making responsibility is shared.

This definition suggests there may be some subdomains within the overall domain of CE, for example, shared power, responsibility, active participation and decision-making. We planned to explore any potential subdomains in Phase 2.

Item generation

In our prior work which initiated the current project, the Consumer Council and project team generated a set of 27 candidate items relating to CE that were included in the initial item bank for the proposed CE questionnaire. These items considered consumers’ experiences of being involved in governance groups, for example, I feel that my views are heard and I feel confident when challenging views expressed by other members of the group. Next, a literature review, conducted by a trained academic librarian, generated a further set of items. In total, the initial list included 112 candidate items.

The project team iteratively reviewed the initial list of items and selected 36 that appeared to represent the content domain of CE most strongly. All items were then reviewed for readability, ensuring they used brief and plain language and had consistent item valence (positive versus negative wording).

We intended to use a Likert-type scale to indicate the level of agreement with each of the items. The proposed response categories ranged from ‘strongly disagree’ (scored ‘1’) to ‘strongly agree’ (scored ‘5’). We planned to explore the preference for using the middle response category (‘neither agree nor disagree’) with the focus group and interview participants. Scores for each item would be summed to give the total score.
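The scoring scheme just described can be sketched in code. This is a minimal illustration, not part of the study; the function name and the handling of reverse-worded items (one such item, item 12, is noted in the Phase 2 results) are assumptions for the example:

```python
# Each item is rated 1 ("strongly disagree") to 5 ("strongly agree");
# reverse-worded items are flipped, and item scores are summed.

def total_score(responses, reverse_items=()):
    """responses: dict mapping item number -> rating (1-5)."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"item {item}: rating {rating} outside 1-5")
        # For reverse-worded items, a rating of 1 becomes 5, 2 becomes 4, etc.
        total += 6 - rating if item in reverse_items else rating
    return total

# Example: three items, with item 12 reverse-scored.
print(total_score({3: 5, 4: 4, 12: 2}, reverse_items={12}))  # -> 5 + 4 + 4 = 13
```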

The list of 36 items was then formatted into a prototype draft of the questionnaire. This included questionnaire instructions (formulated by the project team) and the proposed response categories. This draft was then discussed with consumer representatives during a focus group.

Focus group with the Consumer Council members

Six participants took part in the focus group (Table 1).

Participants found the questionnaire instructions to be generally easy to understand. However, they thought more clarity was needed around the meaning of ‘a health consumer in general’ versus ‘a health consumer at governance level’. Some participants noted that the difference between the two referred to the level of responsibility, arguing that a health consumer at governance level represents not only their own lived experience but also their community’s. Participants also argued that it was important to set the context as clearly as possible in the instructions, for example: Rate each item thinking about your engagement in [group] over the last [number] months.

Next, participants reviewed all 36 candidate items. Overall, participants agreed that the questions were relevant and that most should be included in a measure of CE at governance level. They noted similarities between some items (for example, ‘My opinions are listened to and valued’ and ‘I feel that my views are heard’), and argued for rewording and/or clarification of others (for example, replacing barriers with challenges in ‘There are barriers that impact my ability to contribute in meetings’). Furthermore, participants argued that the questionnaire must consider respondents’ cultural background, with one participant stating that ‘cultural sensitivity is universal’. Finally, as most participants thought that the use of a five-point Likert-type response scale was appropriate, we decided to include the middle response category ‘neither agree nor disagree’.

The project team read and discussed the focus group transcripts, and iteratively reviewed the questionnaire draft. A number of refinements were made, including clarifying the instructions and item wording, providing examples where appropriate, incorporating the principle of partnership into some of the items, and further improving the readability of the questionnaire. No items were deleted following the focus group.

View Table 1 and Table 2.

Cognitive interviews

Next, the prototype questionnaire was tested through cognitive interviews with five participants (Table 2).

Participants found the questionnaire instructions and the majority of items easy to understand. They suggested rephrasing some items to avoid unnecessary ambiguity, which resulted in further improvements to the questionnaire’s readability. Overall, participants thought that the questionnaire was easy to complete and that it covered a broad spectrum of areas relating to CE at governance level.

Drafting the questionnaire

After a number of revisions incorporating findings from the focus group and cognitive interviews, the project team prepared a further questionnaire draft for psychometric performance testing in Phase 2. The questionnaire included 36 CE items using a five-point Likert-type response format (Supplementary Table 1) and nine demographic questions (Supplementary Table 2). The Flesch Reading Ease score was 61, suggesting the questionnaire was written in Plain English and easily understood, on average, by a student aged 13–15 years.

The proposed questionnaire was then uploaded to REDCap database[[20]] to enable an anonymous, online distribution to health consumer representatives in Phase 2.

PHASE 2

Phase 2 focused on testing the following psychometric properties of the proposed questionnaire: construct and concurrent validity, internal consistency and test-retest reliability.

Phase 2 methods

Study design

Phase 2 consisted of a main CE survey study with health consumer representatives and a qualitative interview study with CE leaders conducted concurrently. This was followed by an additional test-retest survey study.

Setting and location

The project team was based in Auckland, New Zealand. The survey was conducted online with participants from New Zealand, Australia and Canada between December 2020 and July 2021.

Data collection

Main CE survey

The proposed questionnaire was administered via the REDCap database[[20]] and completed anonymously. The work of Comrey and Lee[[21]] and Hair et al[[22]] suggests that a sample size of 200 or above is sufficient for carrying out a reliability analysis. The survey was distributed by invitation via district health boards’ Consumer Council chairpersons from around New Zealand, the HQSC, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

To test the proposed questionnaire’s concurrent validity, we selected a similar questionnaire, the Patient and Public Engagement Evaluation Tool (PPEET).[[18]] The PPEET was developed at McMaster University (Canada) by public and patient engagement experts and is widely used by healthcare organisations in Canada and other countries.[[23]] It includes 13 items and takes about two to three minutes to complete. A consecutive sub-sample of participants was invited to complete this validation measure.

CE leaders’ interviews

We interviewed New Zealand CE leaders (for example, chairs, managers) of organisations/groups formally involving health consumer representatives at governance level, with at least three years of experience in a leadership role. They were purposively selected from within the project lead’s (LM) professional network and invited via email to take part. There were no exclusion criteria.

CE leaders were interviewed individually, face-to-face. The interviewer (LW) used an interview guide to explore participants’ perspectives on measuring CE and how such data could be used in the future. The interviews were audio-recorded and transcribed verbatim. We expected to interview between five and 10 people, depending on the depth and richness of the collected data.[[24]]

Test-retest CE survey

Following the initial survey, the proposed questionnaire was refined based on statistical analysis and then underwent an evaluation of its test-retest reliability. We aimed to recruit a sample of n=30 participants to complete the refined version of the proposed questionnaire on two occasions, approximately one week apart.

Data analysis

All statistical analyses were performed using R,[[25]] SAS/STAT software version 9.4[[26]] and SPSS version 26.0 (SPSS Inc., Chicago, IL). Respondents with over 10% missing values were removed from the analysis dataset. The data entries were double checked to ensure accuracy.
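The exclusion rule for missing data can be sketched as follows (an illustrative example, not the authors’ code; the data layout, with `None` marking a missing response, is an assumption):

```python
# Drop any respondent whose responses are more than 10% missing
# across the questionnaire items.

def filter_respondents(responses, n_items, max_missing=0.10):
    """responses: dict of respondent id -> dict of item number -> rating
    (or None if missing). Returns only respondents at or below the
    missingness threshold."""
    kept = {}
    for respondent_id, answers in responses.items():
        missing = sum(1 for i in range(1, n_items + 1)
                      if answers.get(i) is None)
        if missing / n_items <= max_missing:
            kept[respondent_id] = answers
    return kept
```

With the 36-item draft, a respondent missing 3 items (8.3%) would be retained, while one missing 4 items (11.1%) would be excluded.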

The demographics of the respondents and the response profiles were presented descriptively in terms of counts and proportions.

Principal Component Analysis (PCA)[[27]] was performed to assess construct validity. PCA is a factor-extraction and variable-reduction technique: it reduces the number of variables (ie questionnaire items) while retaining as much of the original variance as possible.[[27]] It was also used to test whether the underlying construct (ie CE) loads onto all or only some of the variables. Pearson’s correlations were produced for all 36 items. Both the Kaiser–Meyer–Olkin (KMO) test and Bartlett’s Test of Sphericity were conducted to confirm the appropriateness of conducting the PCA. The KMO statistic varies between 0 and 1.0; values >0.5 are considered ‘barely acceptable’ and >0.9 most suitable.[[28]] For Bartlett’s Test, a significant statistic (P≤0.05) indicates that a PCA can appropriately be performed on the dataset.[[28]] For the PCA, an oblique rotation was chosen as the underlying items are related. The number of components to be retained was determined using a scree plot with parallel analysis. Items that were strongly correlated (above 0.7) with other items were removed from the survey.
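The extraction step described above can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the authors’ code (which used R, SAS and SPSS); the KMO statistic, Bartlett’s test and oblique rotation are omitted for brevity:

```python
import numpy as np

def pca_eigenvalues(X):
    """Eigenvalues of the item correlation matrix, in descending order.
    A single dominant eigenvalue suggests the items load on one
    dimension; eigenvalue / n_items gives the proportion of total
    variance explained by that component."""
    R = np.corrcoef(X, rowvar=False)  # items in columns
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def parallel_analysis(X, n_sims=200, seed=0):
    """Mean eigenvalues from correlation matrices of random normal data
    of the same shape. Components whose observed eigenvalue exceeds the
    corresponding random mean are retained -- the criterion used
    alongside the scree plot."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sims = [np.sort(np.linalg.eigvalsh(np.corrcoef(
                rng.standard_normal((n, p)), rowvar=False)))[::-1]
            for _ in range(n_sims)]
    return np.mean(sims, axis=0)
```

For example, `pca_eigenvalues(X)[0] > parallel_analysis(X)[0]` indicates that at least one real component should be retained.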

Concurrent validity was evaluated using Pearson’s correlation coefficient to assess the correlation between the proposed questionnaire and the PPEET. For both test-retest reliability and construct validity, agreement at the individual item level was assessed. Relative reliability was determined by calculating the two-way random intraclass correlation coefficient (ICC) for absolute agreement of single measures, with a 95% confidence interval (CI) calculated for each ICC. Reliability was considered poor for ICC values <0.40, fair for values between 0.40–0.59, good for values between 0.60–0.74, and excellent for values between 0.75–1.00.[[29]] ICC values above 0.75 were considered acceptable for test-retest reliability.[[30]] Cronbach’s alpha coefficient, which ranges from 0 to 1.0, was used to test internal consistency; Streiner et al consider an alpha value of >0.7 acceptable.[[13]]

Interviews with CE leaders were analysed using Directed Content Analysis, focusing specifically on participants’ perceptions of what constitutes CE at governance level, and the usefulness of the proposed questionnaire in measuring and improving CE.

Phase 2 findings

Main CE survey results

Two hundred and twenty-nine participants from three countries completed the anonymous CE survey (Table 3 and Table 4). Most participants were 45 years or older (84.3%), and approximately two thirds identified as female. The highest scored items were item 3 (‘I am able to express my views freely’), 4 (‘participation in this group is important to me’), and 10 (‘I feel safe to speak from my personal perspective, for example, my cultural perspective, my community's perspective’, etc). Items with the lowest scores were item 22 (‘I was well oriented to the work of this group’), 24 (‘the work achieved by this group has met my expectations’), 33 (‘I would not change anything about this group’), and the reverse-scored item 12 (‘there are things that reduce my ability to contribute in meetings, for example, related to my cultural background or use of jargon’). View Table 3 and Table 4.

Construct validity

Of the 229 participants, 208 completed all items; factor analysis was therefore carried out on this sample of 208. Based on principal component analysis (Supplementary Table 3 and Figure 2), all items fitted under one dimension, which explained 53% of the total variance. All items with correlations above 0.75 were reviewed for potential redundancy. As a result, 11 items were removed (Supplementary Table 4). The KMO and Bartlett’s test confirmed that all items were intercorrelated (r=0.96, P<0.0001) and the sample size was adequate.

Figure 2: Scree plot of the number of components in the principal components analysis.

Concurrent validity

A sample of 87 participants completed both the proposed survey and PPEET survey. Pearson’s correlation coefficient between total scores from the two surveys was high (0.93).

Internal consistency

Cronbach’s alpha for the initial 36-item scale was 0.97. For the final 25 items Cronbach’s alpha was 0.96 and all corrected item-total correlations ranged from 0.42 to 0.85, suggesting satisfactory internal consistency.

Test-retest reliability

Thirty-four participants took part in the test-retest evaluation. The results for both ICC (0.84) and Cronbach’s alpha (0.91) met the criterion, indicating that the proposed tool has high test-retest reliability (Supplementary Table 5).

CE leaders’ interviews

We interviewed five CE leaders (Table 5).

Consumer engagement was unanimously viewed as a ‘unique partnership’ with an organisation to ‘amplify the voice of the communities’, especially for populations who experience health inequities such as Māori, Pasifika and those living with disabilities. One participant argued it was important to engage consumers ‘in a way that meets their needs [and the community’s]’; the community should be ‘part of the solution, or [part of] the process to getting a solution’. There appeared to be a strong desire for consumer engagement to be ‘part of [the] organisational structure ... built in [to processes] and in everything we do.’ Participants thought that health consumers have the potential to be involved in strategic decision-making, but currently had little involvement from the start of, or throughout, such initiatives.

Participants argued that there is currently limited exploration into the experience of consumers at governance level beyond regular group meetings/hui or individual reflection and feedback sessions with their managers. Reportedly, there was no ‘formal evaluation’ process used to consistently review consumers’ experiences of working at governance levels in their organisations. However, all managers acknowledged that monitoring consumer experience was a necessary ‘mechanism for improvement’ and thought that the proposed questionnaire would be useful in facilitating this on an annual or bi-annual basis.

The managers felt that the tool could help to identify gaps in understanding relating to orientation and organisational expectations, and highlight whether consumers were working in the most appropriate spaces within an organisation. It also provided a ‘platform’ for less vocal members of the group to share their opinions and made ‘[the consumer’s] needs better known to [the managers] … and therefore the [consumer] contribution is more effective’. Gathering feedback from consumers was seen as important, with one participant proposing that feedback from any survey tool should be ‘shared openly with consumers,’ and that an ‘action plan’ should be formed and then enacted appropriately.

I think with anything, you can do a survey, but it’s about what you do with it... what sort of action plan will come from those results?

DISCUSSION

In this paper we report findings from a study developing and validating a novel questionnaire to measure CE at governance level. We built and expanded on the strengths of previously published CE-related measures by working closely with consumer representatives and CE leaders from a wide range of backgrounds, and focusing on psychometric performance of the proposed tool. The MCE-Q comprises 25 items (Supplementary Table 6) representing one domain, uses a five-point Likert-type response format, and can be completed in approximately 10 minutes. It can be downloaded from [https://koawatea.countiesmanukau.health.nz/co-design/tools-and-resources]. The MCE-Q showed face, construct and concurrent validity, and excellent internal consistency and test-retest reliability. It can be used by healthcare organisations to monitor how well they engage their consumer representatives at governance level, identify areas for improvement and make national and international comparisons.

Healthcare providers’ engagement with health consumers has focused primarily on consultation.[[1]] The mounting evidence showing that healthcare outcomes (including patient outcomes) can be improved by greater CE[[2,5]] has made many providers realise the need to create partnerships with consumers and engage them across all levels of healthcare systems, including at the governance level.[[1]] The results of our survey, specifically the relatively low ratings for two items relating to consumer group orientation/onboarding and consumers’ expectations, suggest that the current processes for creating consumer–provider partnerships may be insufficient. The proposed questionnaire can serve as a tool to better understand the processes of developing and maintaining consumer–provider partnerships, and to monitor how well healthcare organisations are engaging with their consumers at governance level. This questionnaire could also supplement existing organisational performance quality and safety indicators, such as the New Zealand HQSC’s Quality Safety Marker for Consumer Engagement, as it provides the perspective of consumers at governance level on how well healthcare organisations perform in this area.

Limitations and future work

In this project, we developed a questionnaire with and for health consumers and groups that form the general population. We did not focus on the preferences of any specific groups or communities, but rather on developing a tool that can be used by all for benchmarking and making national and international comparisons. Consequently, the proposed questionnaire may not be sensitive to the needs and preferences of such groups or communities, some of whom experience relentless health inequities and whose voices are pertinent to healthcare improvement. The MCE-Q can highlight a need for improvements around cultural safety for a particular group. If such a need were identified, we recommend a more nuanced exploration of the issue for the specific group using methods that offer high cultural responsiveness and are informed, for example, by Talanoa or kaupapa Māori methodology. One example of such a group is the Indigenous Māori peoples of New Zealand. Indeed, the legal obligations of Te Tiriti o Waitangi reinforce the necessity to develop and validate a CE at governance level tool specific to Māori. In the New Zealand context, such an Indigenous tool would be best led and developed by Māori. We recommend that future research be conducted to enable Māori to exercise their rights as Indigenous peoples and as partners through Te Tiriti o Waitangi.

Another limitation is that only New Zealand-based CE leaders were interviewed. We interviewed people in senior management roles who are currently involved in a range of CE initiatives in New Zealand. The dialogue quality during the interviews was high and we found that participants’ views aligned with current international CE research: the improvement of CE being one of the key priorities for healthcare systems, the lack of a psychometrically sound CE measure, and the need to better understand how to effectively engage consumers in the development and delivery of care services.[[3,5,31]] As we engaged with Australian and Canadian health consumer organisations, we found there was a clear recognition of the role of CE in healthcare systems, and CE organisations from both countries supported us with the distribution of the proposed survey. While there are undoubtedly differences between the healthcare system of New Zealand and those of these two (and likely other) countries, the role of CE in the delivery and quality improvement of these systems is recognised globally. Thus, we believe that this sample provided sufficient information power[[24]] for understanding participants’ perspectives on measuring CE, and that the proposed tool could be used in the future in New Zealand and other countries.

Notably, our focus was on recruiting a sample size sufficient to carry out the necessary psychometric analysis of the proposed questionnaire and not on measuring CE per se. As such, the Phase 2 survey was not powered to produce generalisable results relating to the state of CE at governance level in the three participating countries. Nevertheless, the questionnaire we developed can now be used for monitoring CE by individual organisations, and also at national and international levels.

Finally, we only used Classical Test Theory methods to develop the MCE-Q. We are planning to apply item response theory methods, including Rasch analysis, to further improve the psychometric performance of the questionnaire.

CONCLUSION

The MCE-Q is a novel instrument to measure CE at governance level. It showed sound psychometric properties, and its value and relevance were recognised by both health consumer representatives and decision-makers representing healthcare organisations in New Zealand. It can be used by healthcare organisations around the world for benchmarking, making national and international comparisons, and driving the quality of health services to better meet the needs of the people they serve.

Summary

Abstract

Aim

To develop and validate a questionnaire to measure health CE at governance level.

Method

This study used qualitative and quantitative methods (including focus groups, cognitive interviews and an international survey), and consisted of two phases. In Phase 1, an initial list of items was generated and refined with feedback from health consumer representatives. In Phase 2, a draft survey was distributed to n=227 consumers from New Zealand, Australia and Canada. The benefit and relevance of using the questionnaire was explored through face-to-face interviews with five CE leaders from New Zealand healthcare organisations.

Results

The proposed questionnaire comprises 25 statements relating to CE. Respondents indicate their level of agreement with the statements on a five-point Likert-type scale. Focus group and cognitive interview participants found the questionnaire relevant and easy to understand. The questionnaire scores correlated strongly with those of the PPEET, another instrument measuring consumer engagement (r=0.93), and the questionnaire showed excellent internal consistency (Cronbach’s alpha=0.97), unidimensionality and test-retest reliability (ICC=0.84).

Conclusion

The proposed questionnaire measures CE at governance level and can be used for international comparisons and benchmarking. It showed sound psychometric properties, and its value and relevance were recognised by health consumer representatives and leaders with CE roles in New Zealand healthcare organisations.

Author Information

Karol J Czuba: Senior Evaluation Officer, Ko Awatea; Counties Manukau Health, Auckland. Christin Coomarasamy: Biostatistician, Ko Awatea; Counties Manukau Health, Auckland. Richard J Siegert: Professor of Psychology and Rehabilitation, Department of Psychology and Neuroscience, Auckland University of Technology, Auckland. Renee Greaves: Experience and Engagement Advisor; Counties Manukau Health, Auckland. Lucy Wong: Improvement Advisor, Ko Awatea; Counties Manukau Health, Auckland. Te Hao S Apaapa-Timu: Māori Research Advisor; Ko Awatea; Counties Manukau Health, Auckland. Lynne Maher: Principal of Co-Design; Ko Awatea; Counties Manukau Health, Auckland.

Acknowledgements

We are grateful to the following individuals and organisations for their support: Sai Panat (Data Manager), members of the CM Health Consumer Council, the Health Quality & Safety Commission in New Zealand, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

Correspondence

Lynne Maher, Ko Awatea, Middlemore Hospital, 100 Hospital Road, Otahuhu, Private Bag 93311, Auckland 1640

Correspondence Email

Lynne.Maher@middlemore.co.nz

Competing Interests

Nil.

1) Carman KL, Dardess P, Maurer M, Sofaer S, Adams K, Bechtel C, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health affairs. 2013;32(2):223-31.

2) Health Quality and Safety Commission. Progressing consumer engagement in primary care. Wellington, New Zealand: Health Quality and Safety Commission; 2019.

3) Jacobs LM, Brindis CD, Hughes D, Kennedy CE, Schmidt LA. Measuring consumer engagement: a review of tools and findings. The Journal for Healthcare Quality (JHQ). 2018;40(3):139-46.

4) Dukhanin V, Topazian R, DeCamp M. Metrics and evaluation tools for patient engagement in healthcare organization-and system-level decision-making: a systematic review. International journal of health policy and management. 2018;7(10):889.

5) Bath J, Wakerman J. Impact of community participation in primary health care: what is the evidence? Australian Journal of Primary Health. 2015;21(1):2-8.

6) Ministry of Health. The Health and Disability System Review: Proposals for Reform. In: Department of the Prime Minister and Cabinet, editor. Wellington, New Zealand; 2021.

7) Coulter A. Engaging patients in healthcare: McGraw-Hill Education (UK); 2011.

8) World Health Organization. Declaration of Alma-Ata. Geneva: WHO; 1978.

9) Bechtel C, Sweeney J, Carman K, Dardess P, Maurer M, Sofaer S, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. 2013.

10) Health Quality and Safety Commission. Consumer councils 2020 [updated 24/07/2020]. Available from: https://www.hqsc.govt.nz/our-programmes/partners-in-care/work-programmes/consumer-representation/consumer-councils/.

11) Boivin A, L'Espérance A, Gauvin FP, Dumez V, Macaulay AC, Lehoux P, et al. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expectations. 2018;21(6):1075-84.

12) Churchill Jr GA. A paradigm for developing better measures of marketing constructs. Journal of marketing research. 1979;16(1):64-73.

13) Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use: Oxford University Press, USA; 2015.

14) Conrad F, Blair J, Tracy E, editors. Verbal reports are data! A theoretical approach to cognitive interviews. Proceedings of the Federal Committee on Statistical Methodology Research Conference; 1999: Citeseer.

15) Jaspers MW, Steen T, Van Den Bos C, Geenen M. The think aloud method: a guide to user interface design. International journal of medical informatics. 2004;73(11-12):781-95.

16) Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qualitative health research. 2005;15(9):1277-88.

17) Flesch RF. How to test readability. New York: Harper & Brothers; 1951.

18) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5): 1132-43.

19) Baker GR, Fancott C, Judd M, O'Connor P. Expanding patient engagement in quality improvement and health system redesign: Three Canadian case studies. Healthcare management forum. 2016;29(5): 176-82.

20) Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42(2): 377-81.

21) Comrey AL, Lee HB. A first course in factor analysis: Psychology press; 2013.

22) Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 2009.

23) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5):1132-43.

24) Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qualitative health research. 2016;26(13): 1753-60.

25) R Core Team. R: A language and environment for statistical computing. Vienna, Austria; 2013.

26) SAS Institute. SAS/STAT Version 9.4. Cary, US: SAS Institute; 2015.

27) Conway JM, Huffcutt AI. A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods. 2003;6(2):147-68.

28) Field A, Miles J, Field Z. Discovering statistics using R: Sage publications; 2012.

29) Cicchetti D, Bronen R, Spencer S, Haut S, Berg A, Oliver P, et al. Rating scales, scales of measurement, issues of reliability: resolving some critical issues for clinicians and researchers. The Journal of nervous and mental disease. 2006;194(8): 557-64.

30) Portney LG, Watkins MP. Foundations of clinical research: applications to practice: Pearson/Prentice Hall Upper Saddle River, NJ; 2009.

31) Manafo E, Petermann L, Mason-Lai P, Vandall-Walker V. Patient engagement in Canada: a scoping review of the ‘how’ and ‘what’ of patient engagement in health research. Health Research Policy and Systems. 2018;16(1): 5.


The increased commitment to improving CE in New Zealand and globally has created a need for robust CE evaluations.[[11]] This includes the recently announced reforms of health services within New Zealand, which signal ‘partnership at all levels of the system and empowering consumers of care to design services which work for them’ as a priority outcome, alongside a strong focus on partnering with the Indigenous Māori community.[[6]]

An effective evaluation tool enables assessing outcomes of CE, learning from current practices, and demonstrating the impact of new policies and investments. However, a recent systematic review of questionnaires to measure CE at governance level[[11]] found that most of the identified tools lacked scientific rigour, were not proven to be reliable, and were not easy to read or understand. Many of the tools were developed for a single project or not made publicly available. In light of these findings, there is an urgent need to develop a psychometrically sound questionnaire to measure CE at governance level.

The overall aim of the current project was to develop and validate a questionnaire to measure health consumer representatives’ CE at governance level named the Middlemore Consumer Engagement Questionnaire (MCE-Q). This mixed methods study used a range of qualitative and quantitative methods and consisted of two phases. The aims for each phase were:

1. To develop an instrument to measure CE at governance level (Phase 1).

2. To demonstrate the reliability and validity of this instrument (Phase 2).

We aimed to explore if consumers felt enabled and supported to contribute to improving healthcare systems. We partnered with the Counties Manukau (CM) Health Consumer Council (the Consumer Council) to bring together a team of health researchers, consumers, practitioners and statisticians, with expertise in consumer experience, psychometrics, co-design and Indigenous issues across a wide array of settings. The questionnaire we planned to develop and validate aimed to measure the self-perceived level of engagement of consumers contributing at governance level, and to facilitate continuous healthcare systems improvement, decision-making processes and international comparisons relating to CE.

The purpose of this paper is to describe the development of the MCE-Q. In the next section, methods and findings from Phase 1 are reported, as they informed the subsequent data collection and analysis in Phase 2. This is followed by a section reporting methods and findings from Phase 2. Finally, an integrated discussion of the project’s findings, limitations and conclusion is provided.

PHASE 1

Phase 1 focused on generating candidate items relevant to CE and developing the questionnaire. We first established an advisory group, which supported the project team, providing expertise in areas including CE, Māori health and Pasifika health.

Phase 1 methods

Study design

Phase 1 was guided by recommendations by Churchill[[12]] and Streiner et al,[[13]] for developing outcome measures. It consisted of multiple steps, including domain specification, item generation, a focus group, cognitive interviews, and an in-depth review of the proposed questionnaire. Figure 1 presents the steps of Phase 1.

Figure 1: Phase 1 steps.

Setting and location

The study was conducted in Auckland, New Zealand, between July and October 2020. This time scale included a range of disruptions caused by the global COVID-19 pandemic, but the conduct of this study was not interrupted.

Data collection

Content domain specification

The first step was to define the content domain of the proposed questionnaire. This process was based on published literature relating to CE, previously completed work of the Consumer Council and project team, and the team’s expertise in consumer experience and measurement. Our focus was also on aligning our working definition with the CE-related components identified by the HQSC and WHO.[[2,8]] We also aimed to identify any potential subdomains which could then be psychometrically assessed in Phase 2.

Item generation

We included multiple data sources to generate potential items for the MCE-Q. First, a list of initial items was formulated during a workshop with the Consumer Council. Next, a literature review was conducted to identify any relevant scientific publications and existing tools. As a result, a further set of candidate items were identified and included in the item list. Finally, the item list was reviewed and refined by the project team, who focused on deleting any duplicate or otherwise redundant items, and on item readability.

Focus group with health consumer representatives

We conducted a workshop-style focus group including participants who were current or former Consumer Council members. The Consumer Council was established to represent the interests of consumers, to bring an inpatient and ambulatory consumer and family perspective to the development of the Counties Manukau Health plans, policies, publications, and operational decisions, and to raise issues being identified in the community. It includes people from a variety of backgrounds who have a strong consumer understanding of the healthcare system and represent the voices of their communities. Potential participants were invited to take part via an invitation email sent out by the Consumer Council’s secretariat. There were no exclusion criteria. The focus group lasted approximately two hours, was facilitated by three members of the project team (LM, TA, KC), and was audio-recorded. The purpose of the focus group was to review the questionnaire instructions, proposed items, recall period and response format, and potentially generate further items. Recognition of time and expertise, in the form of koha (gift), and support with transportation were provided to all consumer participants of the focus group. Basic demographic data were collected.

Cognitive interviews

Following analysis of the focus group data, two members of the project team conducted cognitive interviews[[14]] with a purposively selected sample of current and former members of the Consumer Council. We used cognitive interviewing to evaluate whether the survey respondents interpreted the survey instructions and items as they were intended, and whether the survey format enabled the respondents to select responses that matched their answers.[[14]]

Consumer representatives were invited to take part via an invitation email. Our sampling strategy focused on ensuring representation across gender, ethnicity and length of Consumer Council service. There were no exclusion criteria.

Consumer participants were interviewed individually, face-to-face. They were asked to ‘think-aloud’[[15]] as they completed a refined version of the proposed questionnaire. The interviewer explored any potential issues as participants responded to items. All interviews were audio-recorded. Basic demographic data were collected.

In-depth review

Our project team met regularly throughout the data collection period to review the transcripts and refine the questionnaire. The questionnaire instructions and items were reviewed for clarity and redundancy. Any issues were resolved by discussion.

Data analysis

The focus group discussion was transcribed verbatim and analysed using Directed Content Analysis,[[16]] focusing specifically on defining CE, any items with perceived lack of clarity, and on generating new candidate items. The proposed items and instructions were refined to improve comprehension by participants and to elicit experiences related to CE at governance level.

Cognitive interviews were transcribed, and analysed using Directed Content Analysis, focusing specifically on identifying items that were not easily understood, and on the acceptability of the proposed response categories.

We used the Flesch Reading Ease score[[17]] to test the readability of the questionnaire instructions and items.
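The Flesch Reading Ease score is defined as 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A minimal sketch follows; the vowel-group syllable counter is a crude heuristic, and published readability tools use more refined rules, so their scores will differ slightly from this illustration's.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (incl. y)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))
```

A score of 61, as reported for the questionnaire below, falls in the band conventionally read as Plain English.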

Ethics

Ethical approval for the study (Phase 1 and 2) was received from the Auckland Health Research Committee (AH3350).

Phase 1 findings

Content domain specification

The content domain of the proposed questionnaire is health CE at governance level. For the purpose of this study, we employed the following definition of CE at governance level (adapted from Abelson et al[[18]] and Baker et al[[19]]):

Consumer engagement at governance level is characterised by shared power and responsibility, with consumers being active partners in defining agendas and making decisions. Information flows bi-directionally throughout the process of engagement, and decision-making responsibility is shared.

This definition suggests there may be some subdomains within the overall domain of CE, for example, shared power, responsibility, active participation and decision-making. We planned to explore any potential subdomains in Phase 2.

Item generation

In our prior work, which initiated the current project, the Consumer Council and project team generated a set of 27 candidate items relating to CE that were included in the initial item bank for the proposed CE questionnaire. These items considered consumers’ experiences of being involved in governance groups, for example, I feel that my views are heard and I feel confident when challenging views expressed by other members of the group. Next, a literature review, conducted by a trained academic librarian, generated a further set of items. In total, the initial list included 112 candidate items.

The project team iteratively reviewed the initial list of items and selected 36 that appeared to represent the content domain of CE most strongly. All items were then reviewed for readability, ensuring they used brief and plain language and had consistent item valence (positive versus negative wording).

We intended to use a Likert-type scale to indicate the level of agreement with each of the items. The proposed response categories ranged from ‘strongly disagree’ (scored ‘1’) to ‘strongly agree’ (scored ‘5’). We planned to explore the preference for using the middle response category (‘neither agree, nor disagree’) with the focus group and interview participants. Scores for each item would be summated to give the total score.
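The summated scoring rule described above can be sketched as follows; the item names are hypothetical and this is an illustration of the rule rather than an official scoring script.

```python
# Five-point Likert-type response categories, scored 1-5 as described above.
RESPONSES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree, nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def total_score(answers):
    """answers: mapping of item name -> response label; returns the summated total."""
    return sum(RESPONSES[a.lower()] for a in answers.values())
```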

The list of 36 items was then formatted into a prototype draft of the questionnaire. This included questionnaire instructions (formulated by the project team) and the proposed response categories. This draft was then discussed with consumer representatives during a focus group.

Focus group with the Consumer Council members

Six participants took part in the focus group (Table 1).

Participants found the questionnaire instructions to be generally easy to understand. However, they thought more clarity was needed around the meaning of ‘a health consumer in general’ versus ‘a health consumer at governance level’. Some participants noted that the difference between the two referred to the level of responsibility and argued that a health consumer at governance level represents not only their own lived experience, but also their community’s. Participants also argued that it was important to set the context as clearly as possible in the instructions, for example: Rate each item thinking about your engagement in [group] over the last [number] months.

Next, participants reviewed all 36 candidate items. Overall, participants all agreed that the questions were relevant and that most should be included in a measure of CE at governance level. They noted similarities between some items (for example, ‘My opinions are listened to and valued’ and ‘I feel that my views are heard’), and argued for rewording and/or clarification of some of them (for example, replacing barriers with challenges in ‘There are barriers that impact my ability to contribute in meetings’). Furthermore, participants argued that the questionnaire must consider respondents’ cultural background, with one of the participants stating that ‘cultural sensitivity is universal’. Finally, as most participants thought that the use of a five-point Likert-type response scale was appropriate, we decided to include the middle response category ‘neither agree, nor disagree’.

The project team read and discussed the focus group transcripts, and iteratively reviewed the questionnaire draft. A number of refinements were made, including clarifying the instructions and item wording, providing examples where appropriate, incorporating the principle of partnership into some of the items, and further improving the readability of the questionnaire. No items were deleted following the focus group.

View Table 1 and Table 2.

Cognitive interviews

Next, the prototype questionnaire was tested through cognitive interviews with five participants (Table 2).

Participants found the questionnaire instructions and the majority of items easy to understand. They suggested rephrasing some of the items to avoid unnecessary ambiguity, which resulted in further improvements to the questionnaire’s readability. Overall, participants thought that the questionnaire was easy to complete and that it covered a broad spectrum of areas relating to CE at governance level.

Drafting the questionnaire

After a number of revisions incorporating findings from the focus group and cognitive interviews, the project team prepared a further questionnaire draft for psychometric performance testing in Phase 2. The questionnaire included 36 CE items using a five-point Likert-type response format (Supplementary Table 1) and nine demographic questions (Supplementary Table 2). The Flesch Reading Ease score was 61, suggesting the questionnaire was written in Plain English and easily understood, on average, by a student aged 13–15 years.

The proposed questionnaire was then uploaded to the REDCap database[[20]] to enable anonymous, online distribution to health consumer representatives in Phase 2.

PHASE 2

Phase 2 focused on testing the following psychometric properties of the proposed questionnaire: construct and concurrent validity, internal consistency and test-retest reliability.

Phase 2 methods

Study design

Phase 2 consisted of a main CE survey study with health consumer representatives and a qualitative interview study with CE leaders conducted concurrently. This was followed by an additional test-retest survey study.

Setting and location

The project team was based in Auckland, New Zealand. The survey was conducted online with participants from New Zealand, Australia and Canada between December 2020 and July 2021.

Data collection

Main CE survey

The proposed questionnaire was administered via the REDCap database[[20]] and completed anonymously. The work of Comrey and Lee[[21]] and Hair et al[[22]] suggests that a sample size of 200 and above would be sufficient for carrying out a reliability analysis. The survey was distributed by invitation via district health boards' Consumer Council chairpersons from around New Zealand, the HQSC, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

To test the proposed questionnaire's concurrent validity, we selected a similar questionnaire, the Patient and Public Engagement Evaluation Tool (PPEET).[[18]] PPEET was developed at McMaster University (Canada) by public and patient engagement experts and is widely used in Canada and other countries by healthcare organisations.[[23]] PPEET includes 13 items and takes about two to three minutes to complete. A consecutive sub-sample of participants was invited to complete the validation measure, PPEET.

CE leaders’ interviews

We interviewed New Zealand CE leaders (for example, chairs, managers) of organisations/groups formally involving health consumer representatives at governance level, with at least three years of experience in a leadership role. They were purposively selected from within the project lead's (LM) professional network and invited via email to take part. There were no exclusion criteria.

CE leaders were interviewed individually, face-to-face. The interviewer (LW) used an interview guide to explore participants' perspectives on measuring CE and how such data could be used in the future. The interviews were audio-recorded and transcribed verbatim. We expected to interview between five and 10 people, depending on the depth and richness of the collected data.[[24]]

Test-retest CE survey

Following the initial survey, the proposed questionnaire was refined based on statistical analysis and then underwent an evaluation of its test-retest reliability. We aimed to recruit a sample of n=30 participants to complete the refined version of the proposed questionnaire on two occasions, approximately one week apart.

Data analysis

All statistical analyses were performed using R,[[25]] SAS/STAT software version 9.4[[26]] and SPSS version 26.0 (SPSS Inc., Chicago, IL). Respondents with over 10% missing values were removed from the analysis dataset. The data entries were double-checked to ensure accuracy.
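The 10% missing-value exclusion rule described above is straightforward to express in code. A hypothetical sketch follows (the analyses themselves were run in R, SAS and SPSS; here respondents are represented as lists of item responses, with None marking a missing value):

```python
def filter_respondents(responses, max_missing=0.10):
    """Keep only respondents whose proportion of missing (None) answers
    does not exceed max_missing (10% by default, as in the analysis plan)."""
    return [r for r in responses
            if r.count(None) / len(r) <= max_missing]

# With 36 items, 3 missing answers (8.3%) is retained,
# while 5 missing answers (13.9%) is excluded:
mostly_complete = [3] * 33 + [None] * 3
too_sparse = [3] * 31 + [None] * 5
kept = filter_respondents([mostly_complete, too_sparse])
print(len(kept))  # 1
```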

The demographics of the respondents and the response profiles were presented descriptively in terms of counts and proportions.

Principal Component Analysis (PCA)[[27]] was performed to assess construct validity. PCA is a method for factor extraction and a variable-reduction technique: it reduces the number of variables (i.e., questionnaire items) while retaining as much of the original variance as possible.[[27]] It was also used to test whether the underlying construct (i.e., CE) loads onto all or only some of the variables. Pearson's correlations were produced for all 36 items. Both the Kaiser–Meyer–Olkin (KMO) test and Bartlett's test of sphericity were conducted to confirm the appropriateness of conducting the PCA. The KMO statistic varies between 0 and 1.0; values >0.5 are considered 'barely acceptable', and values >0.9 are deemed most suitable.[[28]] For Bartlett's test, a significant statistic (P≤0.05) indicates that the dataset is suitable for PCA.[[28]] For the PCA, an oblique rotation was chosen as the underlying items are related. The number of components to be retained was determined using a scree plot with parallel analysis. Items that were strongly correlated (above 0.7) with other items were removed from the survey.
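The redundancy screen described above (flagging item pairs whose correlation exceeds 0.7 as candidates for removal) can be sketched in plain Python. The item names and response data below are hypothetical, purely for illustration:

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundant_items(items, threshold=0.7):
    """Flag item pairs whose absolute correlation exceeds the threshold,
    as candidates for removal (the 0.7 cut-off used in the analysis)."""
    return [(a, b)
            for (a, xs), (b, ys) in combinations(items.items(), 2)
            if abs(pearson(xs, ys)) > threshold]

# Hypothetical responses to three items; q1 and q2 nearly duplicate each other:
items = {"q1": [1, 2, 3, 4], "q2": [1, 2, 3, 5], "q3": [4, 1, 3, 2]}
print(redundant_items(items))  # [('q1', 'q2')]
```

In the actual study the flagged pairs were reviewed by the project team rather than removed automatically, so a screen like this is a starting point for judgement, not a substitute for it.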

Concurrent validity was evaluated using Pearson's correlation coefficient to assess the correlation between the proposed questionnaire and PPEET. For both test-retest reliability and construct validity, agreement at the individual item level was assessed. Relative reliability was determined by calculating the two-way random-effects intraclass correlation coefficient (ICC) for absolute agreement of single measures. The 95% confidence interval (CI) was calculated for each ICC. Reliability was considered poor for ICC values <0.40, fair for values between 0.40–0.59, good for values between 0.60–0.74, and excellent for values between 0.75–1.00.[[29]] ICC values above 0.75 were considered acceptable for test-retest reliability.[[30]] Cronbach's alpha coefficient, which ranges from 0 to 1.0, was used to test internal consistency; Streiner et al consider an alpha value >0.7 acceptable.[[13]]
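Cronbach's alpha, used above for internal consistency, has a simple closed form: alpha = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch follows (sample variances, with toy data; the study's own analysis was run in standard statistical software):

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Items whose scores move together yield high alpha; here, perfectly:
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]), 6))  # 1.0
```

When items vary independently of one another, the summed item variances approach the variance of the totals and alpha falls toward zero, which is why high alpha is read as evidence that the items tap a common construct.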

Interviews with CE leaders were analysed using Directed Content Analysis, focusing specifically on participants’ perceptions of what constitutes CE at governance level, and the usefulness of the proposed questionnaire in measuring and improving CE.

Phase 2 findings

Main CE survey results

Two hundred and twenty-nine participants from three countries completed the anonymous CE survey (Table 3 and Table 4). Most participants were 45 years or older (84.3%), and approximately two thirds identified as female. The highest scored items were item 3 ('I am able to express my views freely'), 4 ('participation in this group is important to me'), and 10 ('I feel safe to speak from my personal perspective, for example, my cultural perspective, my community's perspective', etc). Items with the lowest scores were item 22 ('I was well oriented to the work of this group'), 24 ('the work achieved by this group has met my expectations'), 33 ('I would not change anything about this group'), and the reverse-scored item 12 ('there are things that reduce my ability to contribute in meetings, for example, related to my cultural background or use of jargon').

View Table 3 and Table 4.

Construct validity

Out of the 229 participants, 208 completed all items; factor analysis was therefore carried out on this sub-sample of 208. Based on principal component analysis (Supplementary Table 3 and Figure 2), all items fitted under one dimension, which explained 53% of the total variance. All items with correlations above 0.75 were reviewed for potential redundancy. As a result, 11 items were removed (Supplementary Table 4). The KMO and Bartlett's tests confirmed that all items were intercorrelated (r=0.96, P<0.0001) and the sample size was adequate.

Figure 2: Scree plot of the number of components in the principal components analysis.

Concurrent validity

A sample of 87 participants completed both the proposed survey and PPEET survey. Pearson’s correlation coefficient between total scores from the two surveys was high (0.93).

Internal consistency

Cronbach’s alpha for the initial 36-item scale was 0.97. For the final 25 items Cronbach’s alpha was 0.96 and all corrected item-total correlations ranged from 0.42 to 0.85, suggesting satisfactory internal consistency.

Test-retest reliability

Thirty-four participants took part in the test-retest evaluation. The results for both the ICC (0.84) and Cronbach's alpha (0.91) met the respective criteria, indicating that the proposed tool has high test-retest reliability (Supplementary Table 5).

CE leaders’ interviews

We interviewed five CE leaders (Table 5).

Consumer engagement was unanimously viewed as a ‘unique partnership’ with an organisation to ‘amplify the voice of the communities’, especially for populations who experience health inequities such as Māori, Pasifika and those living with disabilities. One participant argued it was important to engage consumers ‘in a way that meets their needs [and the community’s]’; the community should be ‘part of the solution, or [part of] the process to getting a solution’. There appeared to be a strong desire for consumer engagement to be ‘part of [the] organisational structure ... built in [to processes] and in everything we do.’ Participants thought that health consumers have the potential to be involved in strategic decision-making, but currently had little involvement from the start and throughout any such initiatives.

Participants argued that there is currently limited exploration into the experience of consumers at governance level beyond regular group meetings/hui or individual reflection and feedback sessions with their managers. Reportedly, there was no 'formal evaluation' process used to consistently review consumers' experiences of working at governance levels in their organisations. However, all managers acknowledged that monitoring consumer experience was a necessary 'mechanism for improvement' and thought that the proposed questionnaire would be useful in facilitating this on an annual or bi-annual basis.

The managers felt that the tool could help to identify gaps in understanding relating to orientation and organisational expectations, and highlight whether consumers were working in the most appropriate spaces within an organisation. It also provided a 'platform' for less vocal members of the group to share their opinions and made '[the consumer's] needs better known to [the managers] … and therefore the [consumer] contribution is more effective'. Gathering feedback from consumers was seen as important, with one participant proposing that feedback from any survey tool should be 'shared openly with consumers,' and that an 'action plan' should be formed and then enacted appropriately.

I think with anything, you can do a survey, but it’s about what you do with it... what sort of action plan will come from those results?

DISCUSSION

In this paper we report findings from a study developing and validating a novel questionnaire to measure CE at governance level. We built and expanded on the strengths of previously published CE-related measures by working closely with consumer representatives and CE leaders from a wide range of backgrounds, and focusing on psychometric performance of the proposed tool. The MCE-Q comprises 25 items (Supplementary Table 6) representing one domain, uses a five-point Likert-type response format, and can be completed in approximately 10 minutes. It can be downloaded from [https://koawatea.countiesmanukau.health.nz/co-design/tools-and-resources]. The MCE-Q showed face, construct and concurrent validity, and excellent internal consistency and test-retest reliability. It can be used by healthcare organisations to monitor how well they engage their consumer representatives at governance level, identify areas for improvement and make national and international comparisons.

To date, healthcare providers' engagement with health consumers has focused primarily on consultation.[[1]] The mounting evidence that healthcare outcomes (including patient outcomes) can be improved by greater CE[[2,5]] has led many providers to recognise the need to create partnerships with consumers and engage them across all levels of healthcare systems, including at governance level.[[1]] The results of our survey, specifically the relatively low ratings for two items relating to consumer group orientation/onboarding and consumers' expectations, suggest that the current processes for creating consumer–provider partnerships may be insufficient. The proposed questionnaire can serve as a tool to better understand the processes of developing and maintaining consumer–provider partnerships, and to monitor how well healthcare organisations are engaging with their consumers at governance level. It could also supplement existing organisational quality and safety performance indicators, such as the New Zealand HQSC's Quality Safety Marker for Consumer Engagement, as it provides the perspective of consumers at governance level on how well healthcare organisations perform in this area.

Limitations and future work

In this project, we developed a questionnaire with and for health consumers and groups that form the general population. We did not focus on the preferences of any specific groups or communities, but rather on developing a tool that can be used by all for benchmarking and making national and international comparisons. Consequently, the proposed questionnaire may not be sensitive to the needs and preferences of such groups or communities, some of whom experience persistent health inequities and whose voices are pertinent to healthcare improvement. The MCE-Q can highlight a need for improvements around cultural safety for a particular group. If such a need is identified, we recommend a more nuanced exploration of the issue for the specific group using methods that offer high cultural responsiveness and are informed, for example, by Talanoa or kaupapa Māori methodology. One example of such a group is the Indigenous Māori peoples of New Zealand. Indeed, the legal obligations of Te Tiriti o Waitangi reinforce the necessity to develop and validate a CE at governance level tool specific to Māori. In the New Zealand context, the development of such an Indigenous tool would be best led by Māori. We recommend that future research be conducted to enable Māori to exercise their rights as Indigenous peoples and as partners through Te Tiriti o Waitangi.

Another limitation is that only New Zealand-based CE leaders were interviewed. We interviewed people in senior management roles who are currently involved in a range of CE initiatives in New Zealand. The dialogue quality during the interviews was high, and participants' views aligned with current international CE research: the improvement of CE is one of the key priorities for healthcare systems, a psychometrically sound CE measure is lacking, and there is a need to better understand how to effectively engage consumers in the development and delivery of care services.[[3,5,31]] While engaging with Australian and Canadian health consumer organisations, we found clear recognition of the role of CE in healthcare systems, and CE organisations from both countries supported us with the distribution of the proposed survey. While there are undoubtedly differences between New Zealand's healthcare system and those of these two (and likely other) countries, the role of CE in the delivery and quality improvement of these systems is recognised globally. Thus, we believe that this sample provided sufficient information power[[24]] for understanding participants' perspectives on measuring CE, and that the proposed tool could be used in the future in New Zealand and other countries.

Notably, our focus was on recruiting a sample size sufficient to carry out the necessary psychometric analysis of the proposed questionnaire and not on measuring CE per se. As such, the Phase 2 survey was not powered to produce generalisable results relating to the state of CE at governance level in the three participating countries. Nevertheless, the questionnaire we developed can now be used for monitoring CE by individual organisations, and also at national and international level.

Finally, we only used Classical Test Theory methods to develop the MCE-Q. We plan to apply Item Response Theory methods and Rasch analysis to further improve the psychometric performance of the questionnaire.

CONCLUSION

The MCE-Q is a novel instrument to measure CE at governance level. It showed sound psychometric properties, and its value and relevance were recognised by both health consumer representatives and decision-makers representing healthcare organisations in New Zealand. It can be used by healthcare organisations around the world for benchmarking, making national and international comparisons, and driving the quality of health services to better meet the needs of the people they serve.

Summary

Abstract

Aim

To develop and validate a questionnaire to measure health CE at governance level.

Method

This study used qualitative and quantitative methods (including focus groups, cognitive interviews and an international survey), and consisted of two phases. In Phase 1, an initial list of items was generated and refined with feedback from health consumer representatives. In Phase 2, a draft survey was distributed to n=227 consumers from New Zealand, Australia and Canada. The benefit and relevance of using the questionnaire were explored through face-to-face interviews with five CE leaders from New Zealand healthcare organisations.

Results

The proposed questionnaire comprises 25 statements relating to CE. Respondents indicate their level of agreement with the statements on a five-point Likert-type scale. Focus group and cognitive interview participants found the questionnaire relevant and easy to understand. The questionnaire scores correlated with the PPEET, another instrument measuring consumer engagement, and showed excellent internal consistency (Cronbach's alpha=0.97), unidimensionality and test-retest reliability (ICC=0.84).

Conclusion

The proposed questionnaire measures CE at governance level and can be used for international comparisons and benchmarking. It showed sound psychometric properties, and its value and relevance were recognised by health consumer representatives and leaders with CE roles in New Zealand healthcare organisations.

Author Information

Karol J Czuba: Senior Evaluation Officer, Ko Awatea; Counties Manukau Health, Auckland. Christin Coomarasamy: Biostatistician, Ko Awatea; Counties Manukau Health, Auckland. Richard J Siegert: Professor of Psychology and Rehabilitation, Department of Psychology and Neuroscience, Auckland University of Technology, Auckland. Renee Greaves: Experience and Engagement Advisor; Counties Manukau Health, Auckland. Lucy Wong: Improvement Advisor, Ko Awatea; Counties Manukau Health, Auckland. Te Hao S Apaapa-Timu: Māori Research Advisor; Ko Awatea; Counties Manukau Health, Auckland. Lynne Maher: Principal of Co-Design; Ko Awatea; Counties Manukau Health, Auckland.

Acknowledgements

We are grateful to the following individuals and organisations for their support: Sai Panat (Data Manager), members of the CM Health Consumer Council, the Health Quality & Safety Commission in New Zealand, the Consumer Health Forum of Australia, and the British Columbia Patient Safety & Quality Council in Canada.

Correspondence

Lynne Maher, Ko Awatea, Middlemore Hospital, 100 Hospital Road, Otahuhu, Private Bag 93311, Auckland 1640

Correspondence Email

Lynne.Maher@middlemore.co.nz

Competing Interests

Nil.

1) Carman KL, Dardess P, Maurer M, Sofaer S, Adams K, Bechtel C, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health affairs. 2013;32(2):223-31.

2) Health Quality and Safety Commission. Progressing consumer engagement in primary care. Wellington, New Zealand: Health Quality and Safety Commission; 2019.

3) Jacobs LM, Brindis CD, Hughes D, Kennedy CE, Schmidt LA. Measuring consumer engagement: a review of tools and findings. The Journal for Healthcare Quality (JHQ). 2018;40(3):139-46.

4) Dukhanin V, Topazian R, DeCamp M. Metrics and evaluation tools for patient engagement in healthcare organization-and system-level decision-making: a systematic review. International journal of health policy and management. 2018;7(10):889.

5) Bath J, Wakerman J. Impact of community participation in primary health care: what is the evidence? Australian Journal of Primary Health. 2015;21(1):2-8.

6) Ministry of Health. The Health and Disability System Review: Proposals for Reform. In: Department of the Prime Minister and Cabinet, editor. Wellington, New Zealand; 2021.

7) Coulter A. Engaging patients in healthcare: McGraw-Hill Education (UK); 2011.

8) World Health Organization. Declaration of Alma-Ata. Geneva: WHO; 1978.

9) Bechtel C, Sweeney J, Carman K, Dardess P, Maurer M, Sofaer S, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. 2013.

10) Health Quality and Safety Commission. Consumer councils 2020 [updated 24/07/2020. Available from: https://www.hqsc.govt.nz/our-programmes/partners-in-care/work-programmes/consumer-representation/consumer-councils/.

11) Boivin A, L'Espérance A, Gauvin FP, Dumez V, Macaulay AC, Lehoux P, et al. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expectations. 2018;21(6):1075-84.

12) Churchill Jr GA. A paradigm for developing better measures of marketing constructs. Journal of marketing research. 1979;16(1):64-73.

13) Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use: Oxford University Press, USA; 2015.

14) Conrad F, Blair J, Tracy E, editors. Verbal reports are data! A theoretical approach to cognitive interviews. Proceedings of the Federal Committee on Statistical Methodology Research Conference; 1999: Citeseer.

15) Jaspers MW, Steen T, Van Den Bos C, Geenen M. The think aloud method: a guide to user interface design. International journal of medical informatics. 2004;73(11-12):781-95.

16) Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qualitative health research. 2005;15(9):1277-88.

17) Flesch RF. How to test readability. New York: Harper & Brothers; 1951.

18) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5): 1132-43.

19) Baker GR, Fancott C, Judd M, O'Connor P. Expanding patient engagement in quality improvement and health system redesign: Three Canadian case studies. Healthcare management forum. 2016;29(5): 176-82.

20) Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42(2): 377-81.

21) Comrey AL, Lee HB. A first course in factor analysis: Psychology press; 2013.

22) Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 2009.

23) Abelson J, Tripp L, Kandasamy S, Burrows K, PPEET Implementation Study Team. Supporting the evaluation of public and patient engagement in health system organizations: results from an implementation research study. Health Expectations. 2019;22(5):1132-43.

24) Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qualitative health research. 2016;26(13): 1753-60.

25) R Core Team. R: A language and environment for statistical computing. Vienna, Austria 2013.

26) SAS Intitute. SAS/STAT Version 9.4. Cary, US: SAS Institute; 2015.

27) Conway JM, Huffcutt AI. A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods. 2003;6(2):147-68.

28) Field A, Miles J, Field Z. Discovering statistics using R: Sage Publications; 2012.

29) Cicchetti D, Bronen R, Spencer S, Haut S, Berg A, Oliver P, et al. Rating scales, scales of measurement, issues of reliability: resolving some critical issues for clinicians and researchers. The Journal of nervous and mental disease. 2006;194(8): 557-64.

30) Portney LG, Watkins MP. Foundations of clinical research: applications to practice: Pearson/Prentice Hall Upper Saddle River, NJ; 2009.

31) Manafo E, Petermann L, Mason-Lai P, Vandall-Walker V. Patient engagement in Canada: a scoping review of the ‘how’ and ‘what’ of patient engagement in health research. Health Research Policy and Systems. 2018;16(1): 5.
