The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini-Mental State Examination (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with aging and dementia. In the original validation study, the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% respectively for the MMSE.1
The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10
The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11
Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.
No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.
This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.
Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and to score the test. Participants were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).
Several weeks after the initial questionnaire a 15-minute teaching session was given, followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions: whether participants had completed the initial questionnaire and whether they felt the teaching was beneficial (Appendix 3). Part 2 of the questionnaire remained unchanged.
Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows, with Fisher’s exact test used to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results to adjust for multiple comparisons. Unanswered questions were marked as incorrect.
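For concreteness, the Holm-Bonferroni step-down adjustment can be sketched in a few lines of Python. The p-values in the example are invented for illustration and are not taken from our results; in practice the per-question Fisher’s exact p-values would be computed first (eg, in SPSS).

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjustment for multiple comparisons.

    The k-th smallest p-value (k = 0, 1, ...) is multiplied by (m - k),
    where m is the number of tests, with monotonicity enforced and
    adjusted values capped at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(running_max, 1.0)
    return adjusted
```

A Holm-adjusted p-value below 0.05 corresponds to a comparison that remains statistically significant after correction.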
The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.
Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).
Table 1: Distribution of participants among the three DHBs.
The majority of study participants in both the initial and follow-up audits were house officers, 62 (87%) and 43 (93%) respectively, with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six participants (57%) involved in the follow-up audit had also completed the initial questionnaire.
Table 2: Results of Part 1 of audit questionnaire.
The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, the frequency varied, with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference between the initial and re-audit groups in the frequency of MoCA testing, or in the number of participants who understood the reason for the MoCA being performed.
Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, significantly more participants had read the guidelines in the follow-up audit (77% and 93% respectively). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.
Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.
Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.
The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change a patient’s score, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 as indicating mild cognitive impairment (MCI) with a sensitivity of 90%.1 Other studies have, however, suggested that lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13
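To make the one-point sensitivity concrete, the interpretation step can be sketched as follows. This is a minimal sketch assuming the <26 cutoff from the original validation study and the one-point adjustment for fewer than 12 years of formal education described in the scoring guidelines; the function name is ours.

```python
def interpret_moca(raw_score, years_of_education):
    """Apply the education adjustment and the original <26 MCI cutoff."""
    # One point is added for fewer than 12 years of formal education,
    # capped at the test maximum of 30.
    total = min(raw_score + (1 if years_of_education < 12 else 0), 30)
    # Nasreddine et al's original cutoff: totals below 26 suggest MCI.
    return total, total < 26
```

A patient with a raw score of 25 and 10 years of education totals 26 and screens negative, while the same raw score with 14 years of education totals 25 and screens positive: a single administration or scoring error is enough to flip the result.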
Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog test made errors in administration and scoring, leading to concerns that such errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities carried out by graduate students. The number of errors decreased after three test administrations, suggesting that students may benefit from more focus and practice on correct administration and scoring.15
Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators have been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement, with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.
There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points between clinicians for the clock-drawing component when using the MoCA guidelines, highlighting concerns that such error could alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Cosentino criteria); however, this also took raters longer to apply. They recommended training and practice to improve the reliability of scoring.18
Our audit has several limitations: the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.
Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.
Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.
The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may reflect limited experience. The MoCA is also carried out by occupational therapists, who have more experience than junior doctors but tend to use it as part of a more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components (orientation, word recall and serial subtraction) has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, comprising only the three-item recall and clock-drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock-drawing component.
As well as house officers, a small number of trainee interns and other students were involved in the audit. Students are likely to have less clinical experience, and this may have led to lower scores on the questionnaire. The proportion of trainee interns and students involved in both audits was similar, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, so including this group in our study was deemed important.
Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.
We recommend that the formal administration and scoring instructions be used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training be incorporated into the medical school curriculum, as trainee interns and medical students are also involved in administering the test.
Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This study did not assess this directly, and further research could be of value.
Use the official MoCA Administration and scoring guidelines alongside this marking schedule
For questions that are Yes/No answers, the correct answer is in bold type
In your own words, describe how you would explain to a patient how to complete the trail-making question.
ANSWER: To be considered correct, the description must, at a minimum, include or imply drawing a trail between alternating numbers and letters in ascending order (not necessarily in these exact words).
This question should be validated between two investigators, as it is somewhat open to interpretation whether a response fits this description.
Please mark these examples of a copied cube:
/1 /1 /1
ANSWER:
0/1 1/1 0/1
Score 0, 1, 2 or 3 out of 3 (one point for each cube example marked correctly)
Please mark these clock face examples:
/3 /3 /3
ANSWER:
2/3 1/3 2/3
Score 0, 1, 2 or 3 out of 3 (one point for each clock face example marked correctly)
Which of these answers would you mark as correct for the names of the animals? Please circle your correct answers; you may circle more than one answer (MoCA version 1)
Image 1 Image 2 Image 3
Lion Rhino Camel
Cat Hippopotamus Horse
Tiger Rhinoceros Dromedary
ANSWER: Both Rhino and Rhinoceros must be selected, and both Camel and Dromedary, as each is a correct name. If only one of either pair is selected, mark as incorrect.
Memory (no question here)
Do they score a point for this question?
Yes No
100 92 85 78 71 64 points allocated:
ANSWER: 3 points (4 correct subtractions)
100 94 86 79 72 67 points allocated:
ANSWER: 2 points (2 correct subtractions)
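The scoring logic behind the two worked examples above can be sketched as follows. This is an illustrative sketch reflecting our reading of the official guidelines: a response counts as a correct subtraction if it is exactly 7 less than the previous response, even when that previous response was itself wrong.

```python
def score_serial_sevens(responses, start=100):
    """Score the MoCA serial sevens item from a list of responses."""
    previous = start
    correct = 0
    for response in responses:
        # A subtraction is correct if it is exactly 7 less than the
        # previous response, even when that response was itself wrong.
        if response == previous - 7:
            correct += 1
        previous = response
    # 4-5 correct subtractions: 3 points; 2-3: 2 points; 1: 1 point; 0: 0.
    if correct >= 4:
        return 3
    if correct >= 2:
        return 2
    return 1 if correct == 1 else 0
```

In the first example, 92 is not 100 − 7, but each later response is exactly 7 less than the one before it, giving four correct subtractions and 3 points.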
→ The cats always hid under the couch when the dog is in the room
→ The cat always hides under the couch when the dogs were in the room
→ The cat hid under the couch when the dogs were in the room
ANSWER: None of the options is exactly correct. The participant’s answer is correct if no option is circled.
Score as correct only if both answers are correct; if only one is correct, score as incorrect.
Please circle the words that do not score a point when the patient is asked to give “words beginning with F”
France (PN) Foxes* Frustration* Frangipane
Fox * Four (number) Face Froth
Forrest Festive Frustrating* Fossil
Fantail Frances (PN) Fantastic Fun
Fax Fourteen (number) Fred (PN) Finland (PN)
ANSWER: All of the following words must be circled as non-scoring for this question to be marked correct:
- France, Frances, Fred, Finland (proper nouns)
- Four, Fourteen (numbers)
- Words beginning with the same sound but with a different suffix:
- Of Fox and Foxes, one must be marked as incorrect
- Of Frustration and Frustrating, one must be marked as incorrect
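The exclusion rules above can be expressed as a small scoring filter. This is purely illustrative: the suffix-stripping heuristic below is our own naive stand-in for judging “same sound, different suffix” pairs and is not part of the official guidelines.

```python
def stem(word):
    # Naive suffix-stripping heuristic (illustration only) to catch
    # same-stem pairs such as Fox/Foxes and Frustration/Frustrating.
    w = word.lower()
    for suffix in ("ing", "ion", "es", "ed", "s"):
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: -len(suffix)]
    return w

def count_scoring_words(words, proper_nouns, numbers):
    """Count fluency responses that score a point: exclude proper nouns,
    numbers, and all but the first of any same-stem pair."""
    counted_stems = set()
    score = 0
    for word in words:
        if word in proper_nouns or word in numbers:
            continue
        s = stem(word)
        if s in counted_stems:
            continue
        counted_stems.add(s)
        score += 1
    return score
```

Applied to the 20-word grid above, this counts 12 scoring words, consistent with the exclusions listed in the answer.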
Which of the following are correct answer(s) for the “similarity between” question? (Please tick; you may select more than one option)
Train – bicycle
They both have wheels
I am interested in both trains and bicycles
They are modes of transport
Ruler – watch
They are tools for measurement
They have numbers
I own both of these items
ANSWER: only the answers in bold are correct
Yes No
Yes No
ANSWER: Yes
ANSWER: A point is added for <12 years of formal education
We are carrying out an audit of medical staff knowledge of the Montreal Cognitive Assessment (MoCA). We understand that education on how to carry out and mark the MoCA test can be variable.
Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.
Please circle your answer
1. Please tell us what level you are at:
Trainee Intern House Surgeon Other__________
2. Did you complete the first audit questionnaire?
Yes No
3. How often have you completed a MoCA in the last 12 months?
Never Weekly Monthly Once a year
4. Do you know the reason for completing the MoCA on your patients?
Always Often Sometimes Never
5. Were you aware before today that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?
Yes No
6. Have you ever read the formal Administration and Scoring instructions?
Yes No
7. How long would you estimate it takes you to carry out the MoCA?
<10 min 11–20 min >20 min
8. Are you aware there is more than one version of the MoCA?
Yes No
9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?
Yes No
To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).
A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and how to score the test. Several weeks later a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.
Seventy-one individuals completed the initial audit and 46 completed the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and awareness of the effect of years of education on the MoCA score.
Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.
The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini Mental State Exam (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with aging and dementia. In the original validation study the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% for the MMSE.1
The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10
The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11
Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.
No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.
This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.
Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised of participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and score the test. They were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).
Several weeks after the initial questionnaire a 15-minute teaching session was given followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions; whether participants had completed the initial questionnaire and whether they felt teaching was beneficial or not (Appendix 3). Part 2 of the questionnaire remained unchanged.
Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows and Fisher’s exact test to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results adjusted for multiple comparisons. Unanswered questions were marked as being incorrect.
The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.
Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).
Table 1: Distribution of participants among the three DHBs.
The majority of study participants in both the initial and follow-up audits were house officers, 62 (87%) and 43 (93%); with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six (57%) of participants involved in the follow-up audit had also completed the initial questionnaire.
Table 2: Results of Part 1 of audit questionnaire.
The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit), however frequency of performance varied with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference in the frequency of MoCA testing, or the number of participants who understood the reason for the MoCA being tested between the initial and re-audit groups.
Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, there was a statistically significant difference with more participants having read the guidelines in the follow-up audit (77% and 93% respectively). 41 participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.
Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.
Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.
The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change the score for a patient, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 to indicate MCI with sensitivity of 90%.1 Other studies have, however, suggested lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13
Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog test made errors in administration and scoring leading to concerns that errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock Johnson III test of Cognitive Abilities carried out by graduate students. The number of errors reduced after three test administrations, suggesting that the students may benefit from more focus and practice on correct administration and scoring. 15
Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.
There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability for the clock-drawing component between clinicians of one to three points when using the MoCA guidelines, highlighting concerns that error could potentially alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Consentino criteria), however this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18
Our audit has several limitations; the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance to this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.
Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.
Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.
The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists who have more experience than junior doctors, however, tend to use it as part of more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components; orientation, word recall and serial subtraction, has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock drawing component.
As well as house officers, a small number of trainee interns and other students were involved in the audit. Students are likely to have less clinical experience, which may have led to lower scores on the questionnaire. The proportion of trainee interns and students was similar in both audits, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, so including this group in our study was deemed important.
Our audit shows that junior doctors perform the MoCA on a variable basis. Prior to our teaching session, the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improved junior doctors’ ability to administer and score the MoCA.
We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.
Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This study did not assess those outcomes directly, and further research in this area could be of value.
Use the official MoCA Administration and Scoring guidelines alongside this marking schedule.
For questions with Yes/No answers, the correct answer is shown in bold type.
In your own words, describe how you would explain to a patient how to complete the trail-making question.
ANSWER: To be considered correct, the response must describe or imply a trail alternating between letters and numbers in ascending order (not necessarily in these exact words).
NOTE: This question should be validated between two investigators, as it is somewhat open to interpretation whether a response fits this description.
Please mark these examples of a copied cube:
/1 /1 /1
ANSWER:
0/1 1/1 0/1
Mark as 0, 1, 2 or 3 out of 3 (one mark for each cube example scored correctly)
Please mark these clock face examples:
/3 /3 /3
ANSWER:
2/3 1/3 2/3
Mark as 0, 1, 2 or 3 out of 3 (one mark for each clock face example scored correctly)
Which of these answers would you mark as correct for the names of the animals? Please circle the correct answer(s); you may circle more than one answer per image (MoCA version 1)
Image 1 Image 2 Image 3
Lion Rhino Camel
Cat Hippopotamus Horse
Tiger Rhinoceros Dromedary
ANSWER: Both Rhino and Rhinoceros, and both Camel and Dromedary, should be selected, as each pair is correct. If only one of the two is selected, mark the question as incorrect.
Memory (no question here)
Do they score a point for this question?
Yes No
100 92 85 78 71 64 points allocated:
ANSWER: 3 points (4 correct subtractions)
100 94 86 79 72 67 points allocated:
ANSWER: 2 points (2 correct subtractions)
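The two worked examples above follow the official point allocation for serial sevens (4–5 correct subtractions score 3 points, 2–3 score 2, 1 scores 1, 0 scores 0), with each subtraction judged against the patient's previous response rather than the ideal sequence. A minimal sketch of that rule (the function name is our own):

```python
def score_serial_sevens(responses, start=100):
    """Score MoCA serial-seven subtractions (sketch of the official rule).

    Each response is judged against the previous *response*, not the ideal
    sequence, so a single slip does not fail every later subtraction.
    """
    correct = 0
    previous = start
    for r in responses:
        if r == previous - 7:
            correct += 1
        previous = r
    # Official allocation: 4-5 correct -> 3 points, 2-3 -> 2, 1 -> 1, 0 -> 0.
    if correct >= 4:
        return 3
    if correct >= 2:
        return 2
    return correct  # 1 -> 1 point, 0 -> 0 points

print(score_serial_sevens([92, 85, 78, 71, 64]))  # 4 correct subtractions -> 3
print(score_serial_sevens([94, 86, 79, 72, 67]))  # 2 correct subtractions -> 2
```

This reproduces both answer-key examples: one early slip (100 to 92) still allows the remaining subtractions to count.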
→ The cats always hid under the couch when the dog is in the room
→ The cat always hides under the couch when the dogs were in the room
→ The cat hid under the couch when the dogs were in the room
ANSWER: None of the options is exactly correct. The question is marked correct only if no option is circled.
Score as correct only if both answers are correct; if only one is correct, score as incorrect.
Please circle the words that would not score a point when the patient is asked to give “words beginning with F”
France (PN) Foxes* Frustration* Frangipane
Fox * Four (number) Face Froth
Forrest Festive Frustrating* Fossil
Fantail Frances (PN) Fantastic Fun
Fax Fourteen (number) Fred (PN) Finland (PN)
ANSWER: All of the following words must be circled as non-scoring in order to mark this question correct:
- France, Frances, Fred, Finland (proper nouns)
- Four, Fourteen (numbers)
- Words beginning with the same root but with a different suffix:
  - Of Fox and Foxes, one must be circled
  - Of Frustration and Frustrating, one must be circled
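The exclusion rules above (no proper nouns, no numbers, same-root words counted once) can be sketched in code. The suffix heuristic below is a rough illustration only; in practice the examiner applies these rules by judgement, and the excluded word sets are supplied rather than detected automatically:

```python
def fluency_score(words, proper_nouns, numbers):
    """Count scoring words for MoCA letter fluency (illustrative sketch).

    proper_nouns and numbers are lowercase sets supplied by the examiner;
    no automatic detection is attempted.
    """
    seen_roots = set()
    valid = 0
    for raw in words:
        word = raw.lower()
        if word in proper_nouns or word in numbers:
            continue  # proper nouns and numbers never score
        # Crude same-root heuristic: strip a few common suffixes.
        root = word
        for suffix in ("es", "s", "ing", "ion"):
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                root = word[: len(word) - len(suffix)]
                break
        if root in seen_roots:
            continue  # e.g. Fox / Foxes: only one scores
        seen_roots.add(root)
        valid += 1
    # One MoCA point is awarded for 11 or more valid words.
    return valid, (1 if valid >= 11 else 0)
```

For the grid above, a response of France, Fox, Foxes, Frustration, Frustrating, Four, Face, Froth and Fun counts five valid words and therefore scores no point.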
Which of the following are correct answer(s) for the “similarity between” questions? (Please tick; you may select more than one option.)
Train – bicycle
They both have wheels
I am interested in both trains and bicycles
They are modes of transport
Ruler – watch
They are tools for measurement
They have numbers
I own both of these items
ANSWER: Only the answers in bold are correct (“They are modes of transport” and “They are tools for measurement”).
Yes No
Yes No
ANSWER: Yes
ANSWER: A point is added for 12 or fewer years of formal education
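The education adjustment can be written as a one-line rule. This sketch assumes the official instruction that one point is added for 12 or fewer years of formal education and that the total score cannot exceed 30:

```python
def adjust_for_education(raw_score, years_of_education):
    # Sketch of the official MoCA education correction: +1 point for 12 or
    # fewer years of formal education, capped so the total never exceeds 30.
    if years_of_education <= 12 and raw_score < 30:
        return raw_score + 1
    return raw_score
```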
We are carrying out an audit of medical staff knowledge of the Montreal Cognitive Assessment (MoCA). We understand that education around how to carry out and mark the MoCA test can be variable.
Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.
Please circle your answer
1. Please tell us what level you are at:
Trainee Intern House Surgeon Other__________
2. Did you complete the first audit questionnaire?
Yes No
3. How often have you completed a MoCA in the last 12 months?
Never Weekly Monthly Once a year
4. Do you know the reason for completing the MoCA on your patients?
Always Often Sometimes Never
5. Were you aware before today, that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?
Yes No
6. Have you ever read the formal Administration and Scoring instructions?
Yes No
7. How long would you estimate it takes you to carry out the MoCA?
<10 min 11–20 min >20 min
8. Are you aware that there is more than one version of the MoCA?
Yes No
9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?
Yes No
To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).
A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and to score the test. Several weeks later, a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.
Seventy-one individuals completed the initial audit and 46 the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions on clock faces, naming animals, serial seven subtractions, verbal fluency and abstraction, and in awareness of the effect of years of education on the MoCA score.
Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.
This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.
Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and to score the test. Participants were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).
Several weeks after the initial questionnaire, a 15-minute teaching session was given, followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions: whether participants had completed the initial questionnaire and whether they felt the teaching was beneficial (Appendix 3). Part 2 of the questionnaire remained unchanged.
Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows, with Fisher’s exact test used to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results to adjust for multiple comparisons. Unanswered questions were marked as incorrect.
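The analysis described above used SPSS; as an illustration, both Fisher’s exact test and the Holm-Bonferroni step-down adjustment can be computed from first principles. The counts below are invented for illustration, not the study data:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def hyper(x):  # P(x row-1 observations fall in column 1)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = hyper(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))

def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjustment of a list of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running  # enforce monotone non-decreasing adjustment
    return adjusted

# Illustrative counts only (not the study data): correct vs incorrect
# answers for two Part 2 questions, before and after teaching.
p1 = fisher_exact_p(30, 41, 40, 6)  # pre: 30/71 correct, post: 40/46 correct
p2 = fisher_exact_p(65, 6, 44, 2)   # pre: 65/71 correct, post: 44/46 correct
print(holm_adjust([p1, p2]))
```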
The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.
Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).
Table 1: Distribution of participants among the three DHBs.
The majority of study participants in both the initial and follow-up audits were house officers (62 [87%] and 43 [93%] respectively), with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six (57%) of the participants in the follow-up audit had also completed the initial questionnaire.
Table 2: Results of Part 1 of audit questionnaire.
The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, the frequency varied, with some never having carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference between the initial and re-audit groups in the frequency of MoCA testing or in the number of participants who understood the reason for performing the MoCA.
Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for completing the MoCA, significantly more participants had read the guidelines in the follow-up audit (77% in the initial audit and 93% in the follow-up). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.
Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.
Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.
The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change a patient’s score, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 as indicating MCI, with a sensitivity of 90%.1 Other studies have, however, suggested that lower cutoff scores may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13
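A single point at the cutoff boundary changes the screening classification, which is why accuracy matters. Sensitivity and specificity at a chosen cutoff can be computed directly; the scores below are invented for illustration, with <26 screening positive as in the original validation study:

```python
def sensitivity_specificity(impaired_scores, normal_scores, cutoff=26):
    """Sensitivity and specificity of a MoCA cutoff (illustrative sketch).

    Scores below the cutoff screen positive for cognitive impairment.
    """
    true_pos = sum(s < cutoff for s in impaired_scores)  # impaired, flagged
    true_neg = sum(s >= cutoff for s in normal_scores)   # normal, cleared
    return true_pos / len(impaired_scores), true_neg / len(normal_scores)

# Invented example: mis-scoring a borderline patient by one point
# (25 -> 26) would flip a positive screen to negative.
print(sensitivity_specificity([22, 24, 25, 27], [26, 28, 29, 25]))
```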
Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies demonstrating errors in the administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog made errors in administration and scoring, raising concerns that such errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities carried out by graduate students. The number of errors fell after three test administrations, suggesting that students may benefit from more focused practice on correct administration and scoring.15
Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.
There was no statistically significant improvement in the questions on marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points for the clock-drawing component between clinicians using the MoCA guidelines, highlighting concerns that such error could alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Cosentino criteria); however, this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18
Our audit has several limitations. The number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at that session (29 participants compared to 17 at the follow-up session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.
Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.
Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.
The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists who have more experience than junior doctors, however, tend to use it as part of more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components; orientation, word recall and serial subtraction, has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock drawing component.
As well as house officers there were a small number of trainee interns and other students involved in the audit. Students probably have less clinical experience and this may have led to lower scores of the questionnaire. The proportion of trainee interns and students involved in both audits was similar so unlikely to have made an impact on the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA and so including this group in our study was deemed important.
Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.
We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.
Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring and incorrect MoCA scores may have consequences in terms of clinical outcomes for patients. This study does not actually assess this and further research could be of value.
Use the official MoCA Administration and scoring guidelines alongside this marking schedule
For questions that are Yes/No answers, the correct answer is in bold type
In your own words, describe how you would explain to a patient how to complete the trail making question?
ANSWER: The phrase must include a description or imply a trail between alternating letters and numbers and ascending order at a minumum to be considered correct (not necessarily in these exact words).
this question should be validated between two investigators as it is somewhat open to interpretation as to whether it fits this description
Please mark these examples of a copied cube:
/1 /1 /1
ANSWER:
0/1 1/1 0/1
Mark as 0 or 1,2,3 out of 3 (correct marks allocated to the cube)
Please mark these clock face examples:
/3 /3 /3
ANSWER:
2/3 1/3 2/3
Mark as 0 or 1,2,3 out of 3 (correct marks allocated to the clockface)
Which of these answers would you mark as correct for the names of the animals? Please circle your correct answers; you may circle more than one answer (MoCA version1)
Image 1 Image 2 Image 3
Lion Rhino Camel
Cat Hippopotamus Horse
Tiger Rhinoceros Dromedary
ANSWER: Both Rhino and Rhinoceros and Camel and Dromedary should be selected as both are correct. If only one of these two are selected then mark as incorrect.
Memory (no question here)
Do they score a point for this question?
Yes No
100 92 85 78 71 64 points allocated:
ANSWER: 3 points (4 correct subtractions)
100 94 86 79 72 67 points allocated:
ANSWER: 2 points (2 correct subtractions)
→ The cats always hid under the couch when the dog is in the room
→ The cat always hides under the couch when the dogs were in the room
→ The cat hid under the couch when the dogs were in the room
ANSWER: none of the above options are exactly correct. A correct answer is if none of the options are circled
Score as correct if the both answers are correct. If only one is correct, score as incorrect
Please circle the words that do not score a point when the patients is asked to give “words beginning with F”
France (PN) Foxes* Frustration* Frangipane
Fox * Four (number) Face Froth
Forrest Festive Frustrating* Fossil
Fantail Frances (PN) Fantastic Fun
Fax Fourteen (number) Fred (PN) Finland (PN)
ANSWER: All of the following words must be circled as being words that do not score in order to mark this question correct:
- France, Frances, Fred, Finland (proper nouns)
- Four, Fourteen (numbers)
Words that begin with the same sound but different suffix
- Of Fox and Foxes, one must be marked as incorrect
- Of Frustration and Frustrating, one must be marked as incorrect
Which of the following are correct answer(s) for the “similarity between question” (please tick – you may select more than one option)
Train – bicycle
They both have wheels
I am interested in both trains and bicycles
They are modes of transport
Ruler – watch
They are tools for measurement
They have numbers
I own both of these items
ANSWER: only the answers are in bold are correct
Yes No
Yes No
ANSWER: Yes
ANSWER: A point is added for <12 years of formal education
We are carrying out an audit about the knowledge of medical staff around the Montreal Cognitive Assessment (MoCA). We understand that the education around how to carry out, and mark the MoCA test can be variable.
Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.
Please circle your answer
1. Please tell us what level you are at:
Trainee Intern House Surgeon Other__________
2. Did you complete the first audit questionnaire?
Yes No
3. How often have you completed a MoCA in the last 12 months?
Never Weekly Monthly One a year
4. Do you know the reason for completing the MoCA on your patients?
Always Often Sometimes Never
5. Were you aware before today, that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?
Yes No
6. Have you ever read the formal Administration and Scoring instructions?
Yes No
7. How long would you estimate it takes you to carry out the MoCA?
<10 min 11–20 min >20min
8. Are you aware there is more than one version of the MOCA?
Yes No
9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?
Yes No
To investigate junior doctors knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).
A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors ability to identify errors in administration and how to score the test. Several weeks later a brief MoCA teaching session was given followed immediately by a repeat questionnaire.
Seventy-one individuals completed the initial audit and 46 did the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant changes were seen in participants ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the awareness about the effect of years of education on the MoCA score.
Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.
The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini Mental State Exam (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with aging and dementia. In the original validation study the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% for the MMSE.1
The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10
The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11
Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.
No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.
This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.
Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised of participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and score the test. They were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).
Several weeks after the initial questionnaire a 15-minute teaching session was given followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions; whether participants had completed the initial questionnaire and whether they felt teaching was beneficial or not (Appendix 3). Part 2 of the questionnaire remained unchanged.
Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows and Fisher’s exact test to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results adjusted for multiple comparisons. Unanswered questions were marked as being incorrect.
The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.
Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).
Table 1: Distribution of participants among the three DHBs.
The majority of study participants in both the initial and follow-up audits were house officers, 62 (87%) and 43 (93%); with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six (57%) of participants involved in the follow-up audit had also completed the initial questionnaire.
Table 2: Results of Part 1 of audit questionnaire.
The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, the frequency of performance varied, with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference in the frequency of MoCA testing, or in the number of participants who understood the reason for performing the MoCA, between the initial and re-audit groups.
Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, there was a statistically significant increase in the number of participants who had read the guidelines in the follow-up audit (77% and 93% respectively). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.
Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.
Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.
The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change the score for a patient, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 to indicate mild cognitive impairment (MCI) with a sensitivity of 90%.1 Other studies have, however, suggested lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13
Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer's Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer's disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog test made errors in administration and scoring, leading to concerns that errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock Johnson III test of Cognitive Abilities carried out by graduate students. The number of errors reduced after three test administrations, suggesting that the students may benefit from more focus and practice on correct administration and scoring.15
Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.
There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points for the clock-drawing component between clinicians when using the MoCA guidelines, highlighting concerns that error could potentially alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Cosentino criteria); however, this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18
Our audit has several limitations: the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.
Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.
Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.
The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists, who have more experience than junior doctors but tend to use it as part of a more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components (orientation, word recall and serial subtraction) has been shown to be effective at classifying MCI and Alzheimer's disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock drawing component.
As well as house officers, there were a small number of trainee interns and other students involved in the audit. Students probably have less clinical experience, and this may have led to lower scores on the questionnaire. The proportion of trainee interns and students involved in both audits was similar, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, so including this group in our study was deemed important.
Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.
We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.
Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients' clinical outcomes. Our study did not assess this directly, and further research could be of value.
Use the official MoCA Administration and Scoring guidelines alongside this marking schedule.
For questions with Yes/No answers, the correct answer is in bold type.
In your own words, describe how you would explain to a patient how to complete the trail-making question.
ANSWER: The phrase must, at a minimum, include a description of (or imply) a trail between alternating letters and numbers in ascending order to be considered correct (not necessarily in these exact words).
NOTE: This question should be validated between two investigators, as it is somewhat open to interpretation whether an answer fits this description.
Please mark these examples of a copied cube:
/1 /1 /1
ANSWER:
0/1 1/1 0/1
Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the cubes)
Please mark these clock face examples:
/3 /3 /3
ANSWER:
2/3 1/3 2/3
Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the clock faces)
Which of these answers would you mark as correct for the names of the animals? Please circle your correct answers; you may circle more than one answer (MoCA version1)
Image 1 Image 2 Image 3
Lion Rhino Camel
Cat Hippopotamus Horse
Tiger Rhinoceros Dromedary
ANSWER: Both "Rhino" and "Rhinoceros", and both "Camel" and "Dromedary", should be selected, as both members of each pair are correct. If only one of a pair is selected, mark as incorrect.
Memory (no question here)
Do they score a point for this question?
Yes No
100 92 85 78 71 64 points allocated:
ANSWER: 3 points (4 correct subtractions)
100 94 86 79 72 67 points allocated:
ANSWER: 2 points (2 correct subtractions)
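The scoring rule applied in the two examples above follows the official MoCA guideline: 4–5 correct subtractions score 3 points, 2–3 score 2 points, 1 scores 1 point and 0 scores 0; each subtraction is judged against the previous response, so a single slip does not invalidate later answers. As a sketch only (the function name is ours, for illustration), this can be written as:

```python
def serial_sevens_score(responses, start=100):
    """Score the MoCA serial-seven subtractions.

    A response is correct if it is exactly 7 less than the previous
    response, so an error does not invalidate later subtractions.
    Points: 4-5 correct -> 3, 2-3 correct -> 2, 1 correct -> 1, 0 -> 0.
    """
    prev = start
    correct = 0
    for r in responses:
        if r == prev - 7:
            correct += 1
        prev = r  # subsequent subtractions are judged from the given answer
    if correct >= 4:
        return 3
    if correct >= 2:
        return 2
    return correct  # 0 or 1

# The two worked examples from the marking scheme:
serial_sevens_score([92, 85, 78, 71, 64])  # 4 correct subtractions -> 3 points
serial_sevens_score([94, 86, 79, 72, 67])  # 2 correct subtractions -> 2 points
```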
→ The cats always hid under the couch when the dog is in the room
→ The cat always hides under the couch when the dogs were in the room
→ The cat hid under the couch when the dogs were in the room
ANSWER: None of the above options is exactly correct. The answer is correct only if none of the options is circled.
Score as correct if both answers are correct. If only one is correct, score as incorrect.
Please circle the words that do not score a point when the patient is asked to give "words beginning with F"
France (PN) Foxes* Frustration* Frangipane
Fox * Four (number) Face Froth
Forrest Festive Frustrating* Fossil
Fantail Frances (PN) Fantastic Fun
Fax Fourteen (number) Fred (PN) Finland (PN)
ANSWER: All of the following words must be circled as being words that do not score in order to mark this question correct:
- France, Frances, Fred, Finland (proper nouns)
- Four, Fourteen (numbers)
Words that begin with the same root but a different suffix (only one of each pair scores):
- Of Fox and Foxes, one must be marked as incorrect
- Of Frustration and Frustrating, one must be marked as incorrect
Which of the following are correct answer(s) for the "similarity between" question (please tick – you may select more than one option)
Train – bicycle
They both have wheels
I am interested in both trains and bicycles
They are modes of transport
Ruler – watch
They are tools for measurement
They have numbers
I own both of these items
ANSWER: only the answers in bold are correct
Yes No
Yes No
ANSWER: Yes
ANSWER: A point is added for <12 years of formal education
We are carrying out an audit about the knowledge of medical staff around the Montreal Cognitive Assessment (MoCA). We understand that the education around how to carry out, and mark the MoCA test can be variable.
Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.
Please circle your answer
1. Please tell us what level you are at:
Trainee Intern House Surgeon Other__________
2. Did you complete the first audit questionnaire?
Yes No
3. How often have you completed a MoCA in the last 12 months?
Never Weekly Monthly Once a year
4. Do you know the reason for completing the MoCA on your patients?
Always Often Sometimes Never
5. Were you aware, before today, that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?
Yes No
6. Have you ever read the formal Administration and Scoring instructions?
Yes No
7. How long would you estimate it takes you to carry out the MoCA?
<10 min 11–20 min >20min
8. Are you aware there is more than one version of the MoCA?
Yes No
9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?
Yes No
To investigate junior doctors' knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).
A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors' ability to identify errors in administration and to score the test. Several weeks later, a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.
Seventy-one individuals completed the initial audit and 46 completed the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants' ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and awareness of the effect of years of education on the MoCA score.
Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.