It is widely accepted that the education and training environment within a healthcare system can influence learning outcomes for health professionals.1–3 Balancing education and training with service and clinical demands is a constant tension in today’s health systems in New Zealand. The importance of monitoring the educational environment to ensure it remains positive therefore cannot be overstated. Quality and feedback processes are required to ensure overall standards are maintained and to observe the effects of any changes that may impact either positively or negatively on training.

Biggs4 states that a “quality institution is one that has high-level aims that it intends to meet, that teaches accordingly, and that continually upgrades its practice in order to adapt to changing conditions, within resource limitations”. One aspect of the quality assurance process he describes is the need for continual review and enhancement. Similarly, Dolmans et al.5 describe an approach to quality assurance used at the Maastricht Medical School. It comprises three characteristics (systematic, structural and integral) which they assert should be applied in a cyclical process to meet the conditions of continuous improvement: systematic in that it relates to all educational activities and stakeholders involved; structural in that evaluations take place on a regular basis and the data is assessed against predetermined standards; and integral in that responsibilities are defined and procedures established.

For any teaching organisation it would seem pertinent to make every effort to adhere to the principles of continuous improvement. This will ensure that those under its guidance are receiving the most up-to-date information, the methodologies used align with relevant and current teaching theory, and time is spent reflecting on the nuances of the changing groups of learners.

One vital characteristic described above, regular evaluation, should employ a tool with demonstrated validity.6 The Postgraduate Hospital Educational Environment Measure (PHEEM)7 and adaptations thereof are widely used around the world for assessing the learning environment within hospitals and making subsequent recommendations for improvements based on the outcomes.8 Section 5 of the Medical Council of New Zealand’s (MCNZ) Accreditation Standards for Training Providers9 requires processes to be in place for interns to provide anonymous feedback on each clinical attachment and for that feedback to be used for quality improvement. To assist those district health boards that had yet to develop a formal feedback system, in September 2017 the MCNZ made an adapted version of the PHEEM available for voluntary use. The PHEEM has also been found to be appropriate for monitoring change when administered before and after the implementation of a new teaching initiative.10

There are many studies6,11,12 which focus on providing feedback to individuals teaching in the clinical environment and on initiatives designed to improve their teaching. However, there appear to be few which focus on changes made at the departmental level and the subsequent effect on intern experience. To create a positive learning environment for an intern, both the supervisor and the department need to provide the necessary support. This is an interdependent relationship: if one aspect is adverse, the experience as a whole is affected, and a supportive supervisor within a negative environment, or an unsupportive supervisor within a positive environment, may result in little constructive learning for the intern. It would seem reasonable, therefore, that providing regular feedback on a department’s environment, and subsequently investigating the impact of any changes made as a result, is an important facet of ensuring a comprehensive intern learning environment.

Methods

Background information on quarterly intern evaluations

At the end of each quarter, all interns are invited to complete an online, modified version of the PHEEM (using SurveyMonkey™); all data collected is anonymous. Although these evaluations have been in place for a number of years, the reports that are sent to departments have recently undergone a substantial change. The new reports comprise two spreadsheets; the first shows:

  • the weighted mean for each question for all respondents,
  • the weighted mean for each question for respondents from their department, and
  • the percent difference between the above.

The second spreadsheet shows:

  • the weighted mean for each question for respondents from their department, quarter by quarter, and
  • the percent difference from one quarter to the next.

The spreadsheets utilise colour coding to highlight data of significance. Colour is used to identify where:

  • the mean indicates a weighting towards a negative response,
  • the mean response from a department deviates more than 20% higher or lower than the mean from all respondents, and
  • the difference between the means for a department from one quarter to the next is greater than 20%.

In addition, each department receives all relevant free-text comments.
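To illustrate the calculations behind these spreadsheets, a minimal sketch in Python is given below. It assumes the raw responses are held as one row per intern response with a numeric score per question; the 20% flagging thresholds follow the description above, while the column names and example data are purely hypothetical.

```python
import pandas as pd

# Hypothetical data layout (assumption): one row per intern response,
# with a numeric weighting already applied to each Likert option.
responses = pd.DataFrame({
    "department": ["Medicine", "Medicine", "Surgery", "Surgery"],
    "question":   ["Q1", "Q1", "Q1", "Q1"],
    "quarter":    ["2018-Q1", "2018-Q2", "2018-Q1", "2018-Q2"],
    "score":      [3.0, 4.0, 2.0, 2.5],
})

# Spreadsheet 1: departmental mean per question versus the mean for all
# respondents, with the percent difference flagged when it exceeds 20%.
overall = (responses.groupby("question")["score"]
           .mean().rename("overall_mean").reset_index())
by_dept = (responses.groupby(["department", "question"])["score"]
           .mean().rename("dept_mean").reset_index()
           .merge(overall, on="question"))
by_dept["pct_diff"] = 100 * (by_dept["dept_mean"] - by_dept["overall_mean"]) / by_dept["overall_mean"]
by_dept["flagged"] = by_dept["pct_diff"].abs() > 20  # colour-coded in the real report

# Spreadsheet 2: departmental mean per question, quarter by quarter,
# with the percent change from one quarter to the next flagged at 20%.
by_quarter = (responses.groupby(["department", "question", "quarter"])["score"]
              .mean().rename("dept_mean").reset_index()
              .sort_values(["department", "question", "quarter"]))
by_quarter["pct_change"] = (by_quarter.groupby(["department", "question"])["dept_mean"]
                            .pct_change() * 100)
by_quarter["flagged"] = by_quarter["pct_change"].abs() > 20

print(by_dept)
print(by_quarter)
```

In practice the flagged cells are rendered as coloured spreadsheet cells rather than a boolean column, but the underlying comparisons are of this form.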

In the 18 months prior to this study, the importance of the quarterly evaluations was emphasised to interns in order to improve the completion rate and therefore the robustness of the data we provide to departments. As a result, the average completion rate rose from 50% during the four quarters of 2017 to 70% by the end of the second quarter of 2018.

Current survey of departments

An invitation to complete an anonymous online survey (using SurveyMonkey™) was sent to the Directors of Training, Clinical Directors and Service Managers of the 16 departments that had received at least one of the reformatted reports, a total of 39 staff. Although this project was undertaken as an improvement and audit activity, the survey included a disclosure that the resulting data might be used for publication.

The survey covered topics such as the clarity and usefulness of the reports, with whom the reports are discussed, any changes implemented as a result of the reports they have received, and the outcome of these changes.

Results

Respondents

Eighteen responses (46%) were received. Of these, 44% were directors of training, 33% were clinical directors, and 28% were service managers (one person reported they were responsible for two roles). Collectively, these roles have overall responsibility for the education and training environment within each service.

Although some respondents made it known they were answering on behalf of the whole department, for other departments there were responses from two or potentially even three of the above-named roles. However, given the anonymous nature of the responses, it is not possible to determine the exact number of different departments the respondents represent.

Survey

Seventy-eight percent responded that the reports they receive are “somewhat easy” or “very easy” to understand; the remainder responded with “neutral”. One person commented that due to the amount of data, some “information can get lost within the masses”.

While 83% found the reports “useful” or “very useful”, two respondents found them “of little use”. Interestingly, however, both of these respondents still reported that they discuss the reports with others in their department.

The aspects of the report that were noted to be of use are summarised below in Table 1.

Table 1: Useful aspects of the report.

All aspects of the report were stated as being “useful” by at least half of all respondents, with the data comparing interns’ responses from a department quarter by quarter reported as “useful” by nearly three-quarters.

Respondents were asked if they discuss the reports with anyone else and if so, with whom. Their responses are shown in Table 2.

Table 2: With whom the reports are discussed.

Just over half of the respondents reported that they discuss the reports with the clinical director and other consultants/senior medical officers (SMOs) within the department, and just over a third with the director of training. Of note, two responded that they discuss the data with the interns. In addition, although one clinical director and one director of training reported that they do not discuss the reports with anyone else, both had responded that they find the reports useful.

Fifty percent of respondents believed that the reports “fairly” or “very fairly” reflected the education and training available to the interns in their department; one responded with “unfair”, and the remainder were “neutral”. Comments were made concerning the small numbers of respondents that can occur within some departments, which raises the question of whether the data fairly reflects the whole department or the opinion of just a few. However, of the four comments made, two noted that these are still valid observations for the interns who made them and are therefore still important to know. The one respondent who felt the reports unfairly represented their department gave service-specific reasons for why they felt this was the case.

One area of particular interest was whether the information contained within the reports is having an impact on the departments and is subsequently being used as a catalyst for change. Forty-four percent responded that changes had been implemented as a result of the feedback received in the reports, and a further 11% were currently in the process of planning changes. The most common changes noted were related to the clinical orientation provided by the department to the interns at the start of the attachment (six out of 10 comments), followed by improved teaching opportunities and attendance (three comments). Of particular significance was one department tackling a real culture change of “Improvement [to] the access of support available, [and] encourage utilisation of that access (contact us rather than trying to ‘cope’ when faced with difficulties)”.

When asked if the changes have had a positive impact within the department, 100% responded with “yes”, and two reported that this change could be seen through an improvement in their evaluation data over subsequent quarters.

Discussion

In this study, respondents reported that the quarterly reports they receive are both easy to understand and useful, and that the two most useful pieces of information are the data tracking how their department is performing over time and the free-text comments (which have also been shown to be highly valued in other studies6). As one of the goals at the Canterbury District Health Board (CDHB) was to ensure appropriate information is provided to departments to engage in the processes of review and enhancement,4 and reflection,2 we see these outcomes as a positive step towards achieving this goal.

Results from this survey suggest consistency with Dolmans et al’s5 approach of systematic, structural and integral continuous improvement. The data also provides support for the assertion that CDHB is providing a mechanism for achieving the goal of continual improvement through regular and systematic review, as discussed by Wilkinson,2 Biggs4 and Dolmans et al.5 Additionally, as a positive effect was reported by all those who had implemented change, further evidence is established for the value of this regular feedback.

We speculate that completing the feedback loop may have also had a positive effect on the completion rate of the evaluations; we have seen an increase from 50% of interns completing the evaluations to 70% over the past 18 months. If departments are using the results to effect positive change and interns observe that positive change in action, then we propose that this may also help motivate them to provide more regular feedback. In addition, when departments see high numbers of interns responding to the evaluations, we believe this may further add to the potential to influence change.

The use of regular evaluations has now been extended to include our postgraduate year 3 and above house officers, non-training registrars and vocational training registrars; these groups complete evaluations every six months. Long-term tracking of feedback across the CDHB, within various departments and across multiple levels of training, is now embedded in CDHB education and training culture.

We acknowledge that the number of responses in this survey was small, but as they were representative of a number of large departments, we feel encouraged by the results. Ensuring the anonymity of respondents meant we were unable to investigate differences in responses between, for example, medical and surgical departments, or to determine from which departments we had multiple respondents versus only one. However, allowing for unreserved responding was deemed more important.

The goal of this project was to investigate whether the new evaluation reports that are provided to departments are valued, used to facilitate change and consequently enhance the education and training experiences of the interns; results would indicate that we have taken the first step towards completing the feedback loop with a positive outcome. The finding that one department has taken up the significant challenge of changing the culture of support for their interns is particularly promising.

Although changing the culture to one of valuing feedback and using it to implement change takes time and resourcing, its importance cannot be overemphasised. All of the departments that implemented changes as a result of the evaluation data reported a positive impact in their respective services; if this can be replicated throughout the entire organisation, it should contribute to an improved educational environment across the whole DHB. Although the process of collecting anonymous feedback is required as part of the MCNZ accreditation standards,9 it is important that the feedback is not only presented to departments in a user-friendly format, but that the outcomes of such feedback are audited to ensure the information is being used to support a process of continuous improvement. Optimisation of the feedback loop is critical to positive change management for training in busy clinical environments.

Summary

Abstract

Aim

Collating quality feedback from interns on their training and educational experiences allows training institutions to identify issues or concerns within the learning environment and provides an opportunity for continuous quality improvement. This study aimed to investigate whether feedback obtained from a modified version of the Postgraduate Hospital Educational Environment Measure (PHEEM) was used by departments to facilitate change and enhance the education and training experiences of interns at Canterbury District Health Board (CDHB).

Method

Data from intern evaluations is collated and sent to departments at the end of each three-month period. A survey was undertaken to assess how much departments valued the reports they received and how, if at all, they utilise the information. The Directors of Training, Clinical Directors, and Service Managers of 16 departments were invited to respond to the survey.

Results

The response rate was 46%. Eighty-three percent responded that the reports they receive are useful, and 78% responded that they are easy to understand. Data which tracks the performance of their department over time was reported by 71% as being of particular use. Fifty-five percent of the respondents had implemented, or were in the process of implementing, change based on the information in the reports. A positive outcome was reported by 100% of those who had implemented change.

Conclusion

Evaluations of clinical attachments by interns positively influence change in clinical departments if the information is presented to departments in an accessible and meaningful format.

Author Information

Karyn Dunn, Medical Education Coordinator, Medical Education and Training Unit, Canterbury District Health Board, Christchurch; John Thwaites, Director Medical Clinical Training and Consultant Physician, Medical Education and Training Unit, Canterbury District Health Board, Christchurch.

Acknowledgements

Correspondence

Karyn Dunn, Medical Education and Training Unit, Christchurch Hospital, Private Bag 4710, Christchurch 8140.

Correspondence Email

karyn.dunn@cdhb.health.nz

Competing Interests

Nil.

  1. Wilkinson T. Get the learning right and the facts will look after themselves. NZMJ. 2007; 120(1253):5–7.
  2. Wilkinson T. Medical education – the next 40 years. NZMJ. 2013; 126(1371):82–90.
  3. Clarke N. Workplace learning environment and its relationship with learning outcomes in healthcare organizations. Human Resource Development International. 2007; 8(2):185–205.
  4. Biggs J. The reflective institution: Assuring and enhancing the quality of teaching and learning. Higher Educ. 2001; 41:221–238.
  5. Dolmans DH, Wolfhagen HA, Scherpbier AJ. From quality assurance to total quality management: How can quality assurance result in continuous improvement in health professions education? Educ Health (Abingdon). 2003; 16(2):210–217.
  6. Boerboom TB, Stalmeijer RE, Dolmans DH, Jaarsma DA. How feedback can foster professional growth of teachers in the clinical workplace: A review of the literature. Stud Educ Eval. 2015; 46:47–52.
  7. Roff S, McAleer S, Skinner A. Development and validation of an instrument to measure the postgraduate clinical learning and teaching educational environment for hospital-based junior doctors in the UK. Med Teach. 2005; 27:326–331.
  8. Chan CY, Sum MY, Lim WS, et al. Adoption and correlates of Postgraduate Hospital Educational Environment Measure (PHEEM) in the evaluation of learning environments – A systematic review. Med Teach. 2016; 38(12):1248–1255.
  9. Medical Council of New Zealand. Prevocational medical training for doctors in New Zealand: Accreditation standards for training providers. Wellington: Medical Council of New Zealand; 2018.
  10. Khan JS. Evaluation of the educational environment of postgraduate surgical teaching. J Ayub Med Coll Abbottabad. 2008; 20:104–107.
  11. Steinert Y, Mann K, Anderson B, et al. A systematic review of faculty development initiatives designed to enhance teaching effectiveness: A 10-year update: BEME Guide No. 40. Med Teach. 2016; 38(8):769–786.
  12. Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach. 2006; 28(6):497–526.
