The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini-Mental State Examination (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with ageing and dementia. In the original validation study the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% for the MMSE.1

The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10

The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11

Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.

No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.

Methods

This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.

Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA, designed to test participants’ ability to identify errors in administration and to score the test. Participants were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).

Several weeks after the initial questionnaire, a 15-minute teaching session was given, followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as in the initial audit, with the addition of two questions: whether participants had completed the initial questionnaire, and whether they felt the teaching was beneficial (Appendix 3). Part 2 of the questionnaire remained unchanged.

Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows, with Fisher’s exact test used to determine statistical significance for discrete variables. The Holm–Bonferroni method was applied to the Part 2 results to adjust for multiple comparisons. Unanswered questions were marked as incorrect.
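The actual analysis was run in SPSS; purely as an illustrative sketch, the two procedures named above (Fisher’s exact test on a 2×2 pre/post contingency table, followed by the Holm–Bonferroni step-down correction across the Part 2 questions) can be written in plain Python. The function names and any example counts are hypothetical, not taken from this audit.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    row and column totals that is no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):
        # Probability of a table with x in the top-left cell, margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(row1, col1)       # largest feasible top-left cell
    # Small tolerance guards against floating-point ties with p_obs.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down: return True where the null is rejected.

    The i-th smallest p-value (0-based) is compared against
    alpha / (m - i); testing stops at the first non-rejection.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values are also retained
    return reject

# Hypothetical usage: correct/incorrect counts pre- vs post-teaching.
# p = fisher_exact_2x2(30, 41, 40, 6)
```

Holm–Bonferroni controls the family-wise error rate like plain Bonferroni but is uniformly more powerful, which makes it a common choice when comparing a small family of per-question results such as those in Part 2.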

The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.

Results

Part 1

Seventy-one individuals completed the questionnaire in the initial audit and 46 in the follow-up audit (Table 1).

Table 1: Distribution of participants among the three DHBs.

The majority of study participants in both the initial and follow-up audits were house officers (62, 87% and 43, 93% respectively), with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six participants (57%) in the follow-up audit had also completed the initial questionnaire.

Table 2: Results of Part 1 of audit questionnaire.

 

The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, the frequency varied, with some never having carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference between the initial and re-audit groups in the frequency of MoCA testing, or in the number of participants who understood the reason for performing the MoCA.

Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, there was a statistically significant difference, with more participants having read the guidelines in the follow-up audit (77% and 93% respectively). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.

Part 2

Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.

Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.

*N/A: if participants did not know that there was a change to the MoCA score with <12 years of education then they did not answer the final question.

Discussion

The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change the score for a patient, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 as indicating mild cognitive impairment, with a sensitivity of 90%.1 Other studies have, however, suggested lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13

Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog made errors in administration and scoring, leading to concerns that these errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities carried out by graduate students. The number of errors reduced after three test administrations, suggesting that the students may benefit from more focused practice on correct administration and scoring.15

Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.

There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points for the clock-drawing component between clinicians using the MoCA guidelines, highlighting concerns that such error could alter the overall score by up to 10%. Improvement was found with repeated training and use of a more detailed scoring system (Cosentino criteria); however, this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18

Our audit has several limitations. The number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.

Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.

Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.

The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists, who have more experience than junior doctors but tend to use it as part of a more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components (orientation, word recall and serial subtraction) has been shown to be effective at classifying mild cognitive impairment and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock-drawing components.21 This test may not be as suitable, given the aforementioned problems with marking of the clock-drawing component.

As well as house officers, a small number of trainee interns and other students were involved in the audit. Students probably have less clinical experience, which may have led to lower scores on the questionnaire. The proportion of trainee interns and students involved in both audits was similar, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, so including this group in our study was deemed important.

Conclusions

Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.

We recommend that the formal administration and scoring instructions be used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training be incorporated into the medical school curriculum, as trainee interns and medical students are also involved in administering the test.

Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This audit did not assess such outcomes directly, and further research in this area could be of value.

Appendix 1

[The initial audit questionnaire was reproduced as images in the original article and is not shown here.]

Appendix 2

MoCA Audit: Questionnaire Marking Schedule

Use the official MoCA Administration and Scoring instructions alongside this marking schedule.

For questions with Yes/No answers, the correct answer is in bold type.

Q1: Visuospatial/Executive (Trailmaker)

In your own words, describe how you would explain to a patient how to complete the trail-making question.

ANSWER: The phrase must, at a minimum, include or imply a trail between alternating letters and numbers in ascending order to be considered correct (not necessarily in these exact words).

This question should be validated between two investigators, as it is somewhat open to interpretation whether an answer fits this description.

Q2: Cube drawing

Please mark these examples of a copied cube:

/1                  /1                  /1

ANSWER:

0/1                  1/1                  0/1


Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the cube examples)

Q3: Clockface

Please mark these clock face examples:

/3                  /3                  /3


ANSWER:

2/3                  1/3                  2/3

Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the clock face examples)

Q4: Naming animals

Which of these answers would you mark as correct for the names of the animals? Please circle your correct answers; you may circle more than one answer (MoCA version 1)

Image 1            Image 2             Image 3

Lion                  Rhino                 Camel

Cat                   Hippopotamus     Horse

Tiger                 Rhinoceros         Dromedary

ANSWER: Both Rhino and Rhinoceros, and both Camel and Dromedary, should be selected, as both in each pair are correct. If only one of a pair is selected then mark as incorrect.

  • Mark as 0, 1, 2 or 3 out of 3 (ie, if all highlighted words are circled then score 3; if rhino but not rhinoceros is circled but all others are circled then score 2)

Memory (no question here)

Q5: Attention

  • When asked to tap their hand on the letter “A”, a patient also taps their hand twice on the letter “J” and fails to tap once for the letter “A”

Do they score a point for this question?

Yes                  No

Q6: Serial 7 subtraction

  • For serial 7 subtraction, how many points would you give the following examples?

100    92    85    78    71    64    points allocated:

ANSWER: 3 points (4 correct subtractions)

100    94    86    79    72    67    points allocated:

ANSWER: 2 points (2 correct subtractions)

  • Score 0, 1 or 2 out of 2 (ie, if both have the correct points allocated then score 2; if only one has the correct points allocated then score 1)

Q7: Sentence Repetition

  • Which of these repeated sentences would be given a point? Tick the option(s)

  • I only know that John is the one I’ll help today

  • John is the one who helped today

  • I only know that John is the one to help today – ANSWER: this is the only correct option

→ The cats always hid under the couch when the dog is in the room

→ The cat always hides under the couch when the dogs were in the room

→ The cat hid under the couch when the dogs were in the room

ANSWER: None of these options is exactly correct; the participant’s answer is correct if none of the options is ticked.

Score as correct only if both sentence sets are answered correctly; if only one is correct, score as incorrect.

Q8: Words beginning with F

Please circle the words that do not score a point when the patient is asked to give “words beginning with F”.

France (PN)   Foxes*                         Frustration*         Frangipane

Fox *              Four (number)            Face                    Froth

Forrest            Festive                         Frustrating*         Fossil

Fantail            Frances (PN)              Fantastic              Fun

Fax                Fourteen (number)     Fred (PN)           Finland (PN)

ANSWER: All of the following words must be circled as being words that do not score in order to mark this question correct:

- France, Frances, Fred, Finland (proper nouns)

- Four, Fourteen (numbers)

Words that begin with the same sound but different suffix

- Of Fox and Foxes, one must be marked as incorrect

- Of Frustration and Frustrating, one must be marked as incorrect

  • Mark out of 8 – there are 8 answers that cannot score points (taking into account that only one of Fox/Foxes and only one of Frustration/Frustrating can be scored)


Q9: Abstraction

Which of the following are correct answer(s) for the “similarity between” (abstraction) question? (Please tick – you may select more than one option)

Train – bicycle

They both have wheels

I am interested in both trains and bicycles

They are modes of transport

Ruler – watch

They are tools for measurement

They have numbers

I own both of these items

ANSWER: only the answers in bold are correct

  • Mark as 0, 1 or 2 out of 2 (ie, if both answers are correct then score 2)

Q10: Delayed Recall 1

  • If a patient gives all 5 words correct but in a different order, does this affect their score?

Yes                  No

Q11: Delayed Recall 2

  • If a patient recalls the words with a clue, do they score a point?

Yes                  No

Q12: Education

  • Your patient finished high school at year 11. Does this affect the score of the MoCA test? If so, how?

ANSWER: Yes

Q13: Reason for change to score for education

ANSWER: A point is added for <12 years of formal education

  • Correct or incorrect. If the answer to Q12 was “No”, which is incorrect, then Q13 should be left blank, as the participant cannot know the reason that the score would change

Appendix 3

Montreal Cognitive Assessment Re-Audit

We are carrying out an audit about the knowledge of medical staff around the Montreal Cognitive Assessment (MoCA). We understand that the education around how to carry out, and mark the MoCA test can be variable.

Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.

Part One: General Questions

Please circle your answer

1. Please tell us what level you are at:

Trainee Intern House Surgeon Other__________

2. Did you complete the first audit questionnaire?

Yes                  No

3. How often have you completed a MoCA in the last 12 months?

Never      Weekly      Monthly      Once a year

4. Do you know the reason for completing the MoCA on your patients?

Always      Often      Sometimes      Never

5. Were you aware before today, that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?

Yes                  No

6. Have you ever read the formal Administration and Scoring instructions?

Yes                  No

7. How long would you estimate it takes you to carry out the MoCA?

<10 min          11–20 min          >20 min

Are you aware that there is more than one version of the MoCA?

Yes                  No

9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?

Yes                  No

Summary

Abstract

Aim

To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).

Method

A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and to score the test. Several weeks later a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.

Results

Seventy-one individuals completed the initial audit and 46 the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question, to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing and abstraction, and in awareness of the effect of years of education on the MoCA score.

Conclusion

Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.

Author Information

- Chani Tromop-van Dalen, Advanced Trainee in General Medicine, Capital and Coast District Health Board, Wellington; Katie Thorne, Advanced Trainee in General Medicine and Geriatrics, Hutt Valley District Health Board, Lower Hutt; Krystina Common, Advan

Acknowledgements

The authors wish to acknowledge Dr Joanne Williams, Hutt Valley District Health Board, for her assistance in supervising the project.

Correspondence

Dr Chani Tromop-van Dalen, Department of Medicine, Wellington Regional Hospital, Riddiford Street, Newtown, Wellington 6022.

Correspondence Email

chanitvd@gmail.com

Competing Interests

Dr Thorne reports affiliation with Flights to Auckland outside the submitted work.

  1. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J Am Geriatr Soc. 2005; 53:695–9.
  2. Tsoi KK, Chan JY, Hirai HW, et al. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med. 2015; 175:1450–8.
  3. Lam B, Middleton LE, Masellis M, et al. Criterion and convergent validity of the Montreal cognitive assessment with screening and standardized neuropsychological testing. J Am Geriatr Soc. 2013; 61:2181–5.
  4. Damian AM, Jacobsen SA, Hentz JG, et al. The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011; 31:126–31.
  5. Smith T, Gildeh N, Holmes C. The Montreal Cognitive Assessment: validity and utility in a memory clinic setting. Can J Psychiatry. 2007; 52:329–32.
  6. Zadikoff C, Fox SH, Tang-Wai DF, et al. A comparison of the Mini-Mental state exam to the Montreal Cognitive Assessment in identifying cognitive deficits in Parkinson’s disease. Mov Disord. 2008; 23:297–9.
  7. Dalrymple-Alford JC, MacAskill MR, Nakas CT, et al. The MoCA: Well-suited screen for cognitive impairment in Parkinson disease. Neurology. 2010; 75:1717–25.
  8. Villeneuve S, Pepin V, Rahayel S, et al. Mild cognitive impairment in moderate to severe COPD: a preliminary study. Chest. 2012; 142:1516–1523.
  9. Webb AJ, Pendlebury ST, Li L, et al. Validation of the Montreal Cognitive Assessment versus Mini-Mental State Examination Against Hypertension and Hypertensive Arteriopathy After Transient Ischaemic Attack or Minor Stroke. Stroke. 2014; 45:3337–3342.
  10. Videnovic A, Bernard B, Fan W, et al. The Montreal Cognitive Assessment as a screening tool for cognitive dysfunction in Huntington’s disease. Mov Disord. 2010; 25:401–4.
  11. Mocatest.org [homepage on the internet]. Nasreddine Z. [updated 2015, cited 6 Jan 2018]. Available from http://www.mocatest.org/about
  12. Montreal Cognitive Assessment (MoCA) Administration and Scoring Instructions. Nasreddine Z. [updated August 18, 2010]. Available from http://www.mocatest.org/wp-content/uploads/2015/tests-instructions/MoCA-Instructions-English_2010.pdf
  13. Larner AJ. Screening utility of the Montreal Cognitive Assessment (MoCA): in place of – or as well as – the MMSE? Int Psychogeriatr. 2012; 24:391–6.
  14. Schafer K, De Santi S, Schneider LS. Errors in ADAS-cog administration and scoring may undermine clinical trials results. Curr Alzheimer Res. 2011; 8:373–6.
  15. Ramos E, Alfonso VC, Schermerhorn SM. Graduate students’ administration and scoring errors on the Woodcock-Johnson III tests of cognitive abilities. Psychology in the Schools. 2009; 46:650–657.
  16. Kozora E, Kongs S, Hampton M, Zhang L. Effects of examiner error on neuropsychological test results in a multi-site study. Clin Neuropsychol. 2008; 22:977–88.
  17. Kozora E, Kongs S, Box T, et al. Training and management of a multisite neuropsychological testing protocol for the Department of Veterans Affairs cooperative study evaluating on-and-off pump coronary artery bypass graft procedures. Clin Neuropsychol. 2007; 21:653–62.
  18. Price CC, Cunningham H, Coronado N, et al. Clock drawing in the Montreal Cognitive Assessment: recommendations for dementia assessment. Dement Geriatr Cogn Disord. 2011; 31:179–87.
  19. Ismail Z, Rajji TK, Shulman KI. Brief cognitive screening instruments: an update. Int J Geriatr Psychiatry. 2010; 25:111–20.
  20. Horton DK, Hynan LS, Lacritz LH, et al. An Abbreviated Montreal Cognitive Assessment (MoCA) for Dementia Screening. Clin Neuropsychol. 2015; 29:413–25.
  21. Borson S, Scanlan JM, Chen P, Ganguli M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003; 51:1451–4.
  22. University of Otago, Wellington. Advanced Learning in Medicine Sixth Year Trainee Intern Handbook: 2017–2018. University of Otago. 3 p. Available from: http://www.otago.ac.nz/wellington/otago677064.pdf

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini Mental State Exam (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with aging and dementia. In the original validation study the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% for the MMSE.1

The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10

The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11

Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.

No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.

Methods

This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.

Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised of participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and score the test. They were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).

Several weeks after the initial questionnaire a 15-minute teaching session was given followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions; whether participants had completed the initial questionnaire and whether they felt teaching was beneficial or not (Appendix 3). Part 2 of the questionnaire remained unchanged.

Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows and Fisher’s exact test to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results adjusted for multiple comparisons. Unanswered questions were marked as being incorrect.

The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.

Results

Part 1

Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).

Table 1: Distribution of participants among the three DHBs.

The majority of study participants in both the initial and follow-up audits were house officers (62 [87%] and 43 [93%] respectively), with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six participants (57%) involved in the follow-up audit had also completed the initial questionnaire.

Table 2: Results of Part 1 of audit questionnaire.

 

The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, frequency varied, with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference between the initial and re-audit groups in the frequency of MoCA testing or in the number of participants who understood the reason for performing the MoCA.

Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, significantly more participants had read the guidelines in the follow-up audit (77% and 93% respectively). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.

Part 2

Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.

Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.

*N/A: participants who did not know that the MoCA score changes with 12 years or less of education did not answer the final question.

Discussion

The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change a patient’s score, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 as indicating mild cognitive impairment (MCI) with a sensitivity of 90%.1 Other studies have, however, suggested that lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13

Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16 The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog made errors in administration and scoring, leading to concerns that errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors by graduate students during the administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities. The number of errors reduced after three test administrations, suggesting that students may benefit from more focused practice on correct administration and scoring.15

Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.

There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points between clinicians for the clock-drawing component when using the MoCA guidelines, highlighting concerns that such error could alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Cosentino criteria); however, this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18

Our audit has several limitations: the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.

Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.

Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.

The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may reflect limited experience. The MoCA is also carried out by occupational therapists, who have more experience than junior doctors but tend to use it as part of a more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components (orientation, word recall and serial subtraction) has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, comprising only the three-item recall and clock drawing components.21 This test may not be as suitable, given the aforementioned problems with marking of the clock drawing component.

As well as house officers, a small number of trainee interns and other students were involved in the audit. Students probably have less clinical experience, which may have led to lower scores on the questionnaire. The proportion of trainee interns and students involved in both audits was similar, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, so including this group in our study was deemed important.

Conclusions

Our audit shows that junior doctors perform the MoCA on a variable basis. Prior to our teaching session, the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improved junior doctors’ ability to administer and score the MoCA.

We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.

Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This study did not assess clinical outcomes directly, and further research in this area could be of value.

Appendix 1

[Initial audit questionnaire images not reproduced in this version of the article.]

Appendix 2

MoCA Audit: Questionnaire Marking Schedule

Use the official MoCA Administration and Scoring Instructions alongside this marking schedule.

For questions with Yes/No answers, the correct answer is in bold type.

Q1: Visuospatial/Executive (Trailmaker)

In your own words, how would you explain to a patient how to complete the trail-making question?

ANSWER: To be considered correct, the phrase must, at a minimum, describe or imply a trail between alternating letters and numbers in ascending order (not necessarily in these exact words).

This question should be validated between two investigators, as it is somewhat open to interpretation whether an answer fits this description.

Q2: Cube drawing

Please mark these examples of a copied cube:

/1                  /1                  /1

ANSWER:

0/1                  1/1                  0/1


Mark as 0, 1, 2 or 3 out of 3 (one mark for each cube example scored correctly)

Q3: Clockface

Please mark these clock face examples:

/3                  /3                  /3


ANSWER:

2/3                  1/3                  2/3

Mark as 0, 1, 2 or 3 out of 3 (one mark for each clockface example scored correctly)

Q4: Naming animals

Which of these answers would you mark as correct for the names of the animals? Please circle the correct answers; you may circle more than one answer (MoCA version 1)

Image 1            Image 2             Image 3

Lion                  Rhino                 Camel

Cat                   Hippopotamus     Horse

Tiger                 Rhinoceros         Dromedary

ANSWER: Both Rhino and Rhinoceros, and both Camel and Dromedary, should be selected, as each is correct. If only one of a pair is selected, mark that image as incorrect.

  • Mark as 0, 1, 2 or 3 out of 3 (ie, if all highlighted words are circled then score 3; if rhino but not rhinoceros is circled but all others are circled then score 2)

Memory (no question here)

Q5: Attention

  • When asked to tap their hand on the letter “A”, a patient also taps their hand twice on the letter “J” and does not tap their hand once for the letter “A”

Do they score a point for this question?

Yes                  No

Q6: Serial 7 subtraction

  • For serial 7 subtraction, how many points would you give the following examples?

100    92    85    78    71    64    points allocated:

ANSWER: 3 points (4 correct subtractions)

100    94    86    79    72    67    points allocated:

ANSWER: 2 points (2 correct subtractions)

  • score 0 or 1, 2 out of 2 (ie, if both have the correct points allocated then score 2, if only one has correct points allocated then score 1)
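The scoring rule behind these two examples (from the official instructions: one correct subtraction scores 1 point, two or three correct score 2 points, four or five correct score 3 points, and each subtraction is judged against the patient’s previous response even if that response was itself wrong) can be sketched as:

```python
def score_serial_sevens(responses, start=100):
    """Score the MoCA serial 7 subtraction item.

    Each subtraction is evaluated independently: a response is correct if
    it is exactly 7 less than the previous response, even when the
    previous response was itself wrong.
    """
    correct = sum(
        1 for prev, cur in zip([start] + responses[:-1], responses)
        if cur == prev - 7
    )
    if correct == 0:
        return 0
    if correct == 1:
        return 1
    if correct <= 3:
        return 2
    return 3

# The two questionnaire examples:
print(score_serial_sevens([92, 85, 78, 71, 64]))  # 4 correct subtractions -> 3 points
print(score_serial_sevens([94, 86, 79, 72, 67]))  # 2 correct subtractions -> 2 points
```

Note how the first example earns 3 points despite the opening error: 92 is wrong, but 85, 78, 71 and 64 are each exactly 7 less than the number before them.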

Q7: Sentence Repetition

  • Which of these repeated sentences would be given a point? Tick the option(s)

  • I only know that John is the one I’ll help today

  • John is the one who helped today

  • I only know that John is the one to help today (ANSWER: the only correct option)

→ The cats always hid under the couch when the dog is in the room

→ The cat always hides under the couch when the dogs were in the room

→ The cat hid under the couch when the dogs were in the room

ANSWER: none of these options is exactly correct; the participant answers correctly by ticking none of them.

Score as correct if both sentences are answered correctly. If only one is correct, score as incorrect.

Q8: Words beginning with F

Please circle the words that do not score a point when the patient is asked to give “words beginning with F”

France (PN)   Foxes*                         Frustration*         Frangipane

Fox *              Four (number)            Face                    Froth

Forrest            Festive                         Frustrating*         Fossil

Fantail            Frances (PN)              Fantastic              Fun

Fax                Fourteen (number)     Fred (PN)           Finland (PN)

ANSWER: All of the following words must be circled as not scoring in order to mark this question correct:

- France, Frances, Fred, Finland (proper nouns)

- Four, Fourteen (numbers)

Words that begin with the same stem but a different suffix score only once:

- Of Fox and Foxes, one must be marked as not scoring

- Of Frustration and Frustrating, one must be marked as not scoring

  • Mark out of 8 – there are 8 answers that cannot score points (counting only one of Fox/Foxes and one of Frustration/Frustrating)
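The exclusion rules in this question can be expressed as a filter. The sketch below is illustrative only: the proper-noun and number sets are hard-coded for this questionnaire’s word list (they are not part of the MoCA materials), and the suffix stripping is a crude stand-in for the marker’s judgement:

```python
# Hard-coded for this questionnaire's word list (an illustrative
# assumption): proper nouns and numbers never score.
PROPER_NOUNS = {"france", "frances", "fred", "finland"}
NUMBERS = {"four", "fourteen"}

def stem(word):
    """Crudely strip common suffixes so that 'foxes'/'fox' and
    'frustration'/'frustrating' share one stem."""
    for suffix in ("ing", "ion", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def scoring_words(words):
    """Return the words that earn verbal-fluency credit: no proper
    nouns, no numbers, and each stem credited at most once."""
    credited, seen_stems = [], set()
    for word in words:
        lower = word.lower()
        if lower in PROPER_NOUNS or lower in NUMBERS:
            continue  # excluded category, never scores
        s = stem(lower)
        if s in seen_stems:
            continue  # same word with a different suffix scores once
        seen_stems.add(s)
        credited.append(word)
    return credited
```

For example, `scoring_words(["France", "Foxes", "Fox", "Four", "Face"])` credits only Foxes and Face: France is a proper noun, Four is a number, and Fox shares a stem with the already-credited Foxes.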


Q9: Abstraction

Which of the following are correct answer(s) for the “similarity between question” (please tick – you may select more than one option)

Train – bicycle

They both have wheels

I am interested in both trains and bicycles

They are modes of transport

Ruler – watch

They are tools for measurement

They have numbers

I own both of these items

ANSWER: only the answers in bold (“They are modes of transport” and “They are tools for measurement”) are correct

  • Mark 0, 1 or 2 out of 2 (ie, if both answers are correct then score 2)

Q10: Delayed Recall 1

  • If a patient gives all 5 words correct but in a different order, does this affect their score?

Yes                  No

Q11: Delayed Recall 2

  • If a patient recalls the words with a clue, do they score a point?

Yes                  No

Q12: Education

  • Your patient finished high school at year 11. Does this affect the score of the MoCA test? If so, how?

ANSWER: Yes

Q13: Reason for change to score for education

ANSWER: A point is added for 12 years or less of formal education

  • Mark as correct or incorrect. If the answer to Q12 was “No”, which is incorrect, then Q13 should be left blank, as the participant cannot know the reason the score would change

Appendix 3

Montreal Cognitive Assessment Re-Audit

We are carrying out an audit of medical staff knowledge of the Montreal Cognitive Assessment (MoCA). We understand that education on how to carry out and mark the MoCA test can be variable.

Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.

Part One: General Questions

Please circle your answer

1. Please tell us what level you are at:

Trainee Intern House Surgeon Other__________

2. Did you complete the first audit questionnaire?

Yes                  No

3. How often have you completed a MoCA in the last 12 months?

Never      Weekly      Monthly      Once a year

4. Do you know the reason for completing the MoCA on your patients?

Always      Often      Sometimes      Never

5. Were you aware before today, that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?

Yes                  No

6. Have you ever read the formal Administration and Scoring instructions?

Yes                  No

7. How long would you estimate it takes you to carry out the MoCA?

<10 min          11–20 min          >20 min

8. Are you aware there is more than one version of the MoCA?

Yes                  No

9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?

Yes                  No

Summary

Abstract

Aim

To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).

Method

A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and to score the test. Several weeks later a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.

Results

Seventy-one individuals completed the initial audit and 46 the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction, and in awareness of the effect of years of education on the MoCA score.

Conclusion

Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.

Author Information

- Chani Tromop-van Dalen, Advanced Trainee in General Medicine, Capital and Coast District Health Board, Wellington; Katie Thorne, Advanced Trainee in General Medicine and Geriatrics, Hutt Valley District Health Board, Lower Hutt; Krystina Common, Advan

Acknowledgements

The authors wish to acknowledge Dr Joanne Williams, Hutt Valley District Health Board, for her assistance in supervising the project.

Correspondence

Dr Chani Tromop-van Dalen, Department of Medicine, Wellington Regional Hospital, Riddiford Street, Newtown, Wellington 6022.

Correspondence Email

chanitvd@gmail.com

Competing Interests

Dr Thorne reports affiliation with Flights to Auckland outside the submitted work.

  1. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J Am Geriatr Soc. 2005; 53:695–9.
  2. Tsoi KK, Chan JY, Hirai HW, et al. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med. 2015; 175:1450–8.
  3. Lam B, Middleton LE, Masellis M, et al. Criterion and convergent validity of the Montreal cognitive assessment with screening and standardized neuropsychological testing. J Am Geriatr Soc. 2013; 61:2181–5.
  4. Damian AM, Jacobsen SA, Hentz JG, et al. The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011; 31:126–31.
  5. Smith T, Gildeh N, Holmes C. The Montreal Cognitive Assessment: validity and utility in a memory clinic setting. Can J Psychiatry. 2007; 52:329–32.
  6. Zadikoff C, Fox SH, Tang-Wai DF, et al. A comparison of the Mini-Mental state exam to the Montreal Cognitive Assessment in identifying cognitive deficits in Parkinson’s disease. Mov Disord. 2008; 23:297–9.
  7. Dalrymple-Alford JC, MacAskill MR, Nakas CT, et al. The MoCA: Well-suited screen for cognitive impairment in Parkinson disease. Neurology. 2010; 75:1717–25.
  8. Villeneuve S, Pepin V, Rahayel S, et al. Mild cognitive impairment in moderate to severe COPD: a preliminary study. Chest. 2012; 142:1516–1523.
  9. Webb AJ, Pendlebury ST, Li L, et al. Validation of the Montreal Cognitive Assessment versus Mini-Mental State Examination Against Hypertension and Hypertensive Arteriopathy After Transient Ischaemic Attack or Minor Stroke. Stroke. 2014; 45:3337–3342.
  10. Videnovic A, Bernard B, Fan W, et al. The Montreal Cognitive Assessment as a screening tool for cognitive dysfunction in Huntington’s disease. Mov Disord. 2010; 25:401–4.
  11. Mocatest.org [homepage on the internet]. Nasreddine Z. [updated 2015, cited 6 Jan 2018]. Available from http://www.mocatest.org/about
  12. Montreal Cognitive Assessment (MoCA) Administration and Scoring Instructions. Nasreddine Z. [updated August 18, 2010]. Available from http://www.mocatest.org/wp-content/uploads/2015/tests-instructions/MoCA-Instructions-English_2010.pdf
  13. Larner AJ. Screening utility of the Montreal Cognitive Assessment (MoCA): in place of – or as well as – the MMSE? Int Psychogeriatr. 2012; 24:391–6
  14. Schafer K, De Santi S, Schneider LS. Errors in ADAS-cog administration and scoring may undermine clinical trials results. Curr Alzheimer Res. 2011; 8:373–6.
  15. Ramos E, Alfonso VC, Schermerhorn SM. Graduate students’ administration and scoring errors on the Woodcock-Johnson III tests of cognitive abilities. Psychology in the Schools. 2009; 46:650–657.
  16. Kozora E, Kongs S, Hampton M, Zhang L. Effects of examiner error on neuropsychological test results in a multi-site study. Clin Neuropsychol. 2008; 22:977–88.
  17. Kazora E, Kongs S, Box T, et al. Training and management of a multisite neuropsychological testing protocol for the Department of Veterans Affairs cooperative study evaluating on-and-off pump coronary artery bypass graft procedures. Clin Neuropsychol. 2007; 21:653–62.
  18. Price CC, Cunningham H, Coronado N, et al. Clock drawing in the Montreal Cognitive Assessment: recommendations for dementia assessment. Dement Geriatr Cogn Disord. 2011; 31:179–87.
  19. Ismail Z, Rajji TK, Shulman KI. Brief cognitive screening instruments: an update. Int J Geriatr Psychiatry. 2010; 25:111–20.
  20. Horton DK, Hynan LS, Lacritz LH, et al. An Abbreviated Montreal Cognitive Assessment (MoCA) for Dementia Screening. Clin Neuropsychol. 2015; 29:413–25.
  21. Borson S, Scanlan JM, Chen P, Ganguli M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003; 51:1451–4.
  22. University of Otago, Wellington. Advanced Learning in Medicine Sixth Year Trainee Intern Handbook: 2017–2018. University of Otago. 3 p. Available from: http://www.otago.ac.nz/wellington/otago677064.pdf

For the PDF of this article,
contact nzmj@nzma.org.nz

View Article PDF

The Montreal Cognitive Assessment (MoCA) was developed in 2005 to detect individuals with mild cognitive complaints who performed within the normal range on the Mini Mental State Exam (MMSE).1 Mild cognitive impairment is a state between the normal cognitive changes associated with aging and dementia. In the original validation study the MoCA was shown to have a sensitivity of 90% and specificity of 87% for the detection of mild cognitive impairment, compared to 18% and 100% for the MMSE.1

The MoCA has been shown to be superior for the detection of mild cognitive impairment compared to the MMSE in many other studies since,2,3,4 and is a useful tool for predicting the development of dementia in patients with mild cognitive impairment.5 It also has a role in cognitive assessment in a wide range of other conditions, including Parkinson’s disease,6,7 chronic obstructive pulmonary disease,8 transient ischaemic attack and minor stroke,9 and Huntington’s disease.10

The MoCA test comprises 30 questions assessing various domains of cognition and can be performed in 10 minutes.1 Formal instructions on how to administer the test and score the results are easily accessible on the official website (www.mocatest.org).11 It is used in over 100 countries and is available in 46 languages and dialects. There is also a blind version and more recently an application (app) for smart devices.11

Anecdotally, we noted that completing the MoCA test was often a task given to the most junior members of inpatient medical teams. Knowledge on how to administer and score the MoCA seemed to be variable among junior doctors. This led to concerns that there could be errors in administration and scoring, which could impact on patient clinical outcomes.

No studies assessing accuracy of MoCA administration and scoring were identified through a literature search. We therefore designed an audit with the aim of investigating junior doctors’ knowledge of how to complete the MoCA. We hypothesised that formal teaching would improve the results on a follow-up audit.

Methods

This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.

Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised of participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and score the test. They were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).

Several weeks after the initial questionnaire a 15-minute teaching session was given followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions; whether participants had completed the initial questionnaire and whether they felt teaching was beneficial or not (Appendix 3). Part 2 of the questionnaire remained unchanged.

Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows and Fisher’s exact test to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results adjusted for multiple comparisons. Unanswered questions were marked as being incorrect.

The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.

Results

Part 1

Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).

Table 1: Distribution of participants among the three DHBs.

The majority of study participants in both the initial and follow-up audits were house officers, 62 (87%) and 43 (93%); with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six (57%) of participants involved in the follow-up audit had also completed the initial questionnaire.

Table 2: Results of Part 1 of audit questionnaire.

 

The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit), however frequency of performance varied with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference in the frequency of MoCA testing, or the number of participants who understood the reason for the MoCA being tested between the initial and re-audit groups.

Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for how to complete the MoCA, there was a statistically significant difference with more participants having read the guidelines in the follow-up audit (77% and 93% respectively). 41 participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.

Part 2

Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.

Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.

*N/A: if participants did not know that there was a change to the MoCA score with <11 years of education then they did not answer the final question.

Discussion

The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change the score for a patient, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 to indicate MCI with sensitivity of 90%.1 Other studies have, however, suggested lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13

Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16  The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog test made errors in administration and scoring leading to concerns that errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock Johnson III test of Cognitive Abilities carried out by graduate students. The number of errors reduced after three test administrations, suggesting that the students may benefit from more focus and practice on correct administration and scoring. 15

Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.

There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability of one to three points between clinicians for the clock-drawing component when using the MoCA guidelines, highlighting concerns that such error could alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (the Cosentino criteria); however, this also took raters longer to apply. They recommended training and practice to improve the reliability of scoring.18

Our audit has several limitations: the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance at that session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.

Only 57% of participants completed both the initial and follow-up audits. There was no statistically significant difference in demographics, suggesting that the two groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for the initial and follow-up audits.

Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.

The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists, who have more experience than junior doctors but tend to use it as part of a more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components (orientation, word recall and serial subtraction) has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock drawing component.

As well as house officers, there were a small number of trainee interns and other students involved in the audit. Students probably have less clinical experience, which may have led to lower scores on the questionnaire. The proportion of trainee interns and students involved in both audits was similar, so this is unlikely to have affected the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA, and so including this group in our study was deemed important.

Conclusions

Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.

We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.

Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This study did not assess clinical outcomes directly, and further research in this area could be of value.

Appendix 1

Initial audit questionnaire (images not reproduced here).

Appendix 2

MoCA Audit: Questionnaire Marking Schedule

Use the official MoCA Administration and scoring guidelines alongside this marking schedule

For questions that are Yes/No answers, the correct answer is in bold type

Q1: Visuospatial/Executive (Trailmaker)

In your own words, describe how you would explain to a patient how to complete the trail-making question.

ANSWER: To be considered correct, the phrase must, at a minimum, describe or imply a trail between alternating letters and numbers in ascending order (not necessarily in these exact words).

This question should be validated between two investigators, as whether a response fits this description is somewhat open to interpretation.

Q2: Cube drawing

Please mark these examples of a copied cube:

/1                  /1                  /1

ANSWER:

0/1                  1/1                  0/1


Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the cube)

Q3: Clockface

Please mark these clock face examples:

/3                  /3                  /3


ANSWER:

2/3                  1/3                  2/3

Mark as 0, 1, 2 or 3 out of 3 (correct marks allocated to the clockface)

Q4: Naming animals

Which of these answers would you mark as correct for the names of the animals? Please circle the correct answers; you may circle more than one answer (MoCA version 1)

Image 1            Image 2             Image 3

Lion                  Rhino                 Camel

Cat                   Hippopotamus     Horse

Tiger                 Rhinoceros         Dromedary

ANSWER: Both “Rhino” and “Rhinoceros”, and both “Camel” and “Dromedary”, should be selected as correct. If only one of each pair is selected, mark as incorrect.

  • Mark as 0, 1, 2 or 3 out of 3 (ie, if all highlighted words are circled then score 3; if rhino but not rhinoceros is circled but all others are circled then score 2)

Memory (no question here)

Q5: Attention

  • When asked to tap their hand on the letter “A”, a patient taps their hand twice on the letter “J” and does not tap once for the letter “A”

Do they score a point for this question?

Yes                  No

Q6: Serial 7 subtraction

  • For serial 7 subtraction, how many points would you give the following examples?

100    92    85    78    71    64    points allocated:

ANSWER: 3 points (4 correct subtractions)

100    94    86    79    72    67    points allocated:

ANSWER: 2 points (2 correct subtractions)

  • Score 0, 1 or 2 out of 2 (ie, if both have the correct points allocated then score 2; if only one has the correct points allocated then score 1)
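The serial-7 rule applied in the answers above (per the official scoring instructions: 4–5 correct subtractions = 3 points, 2–3 = 2, 1 = 1, 0 = 0, with each response judged against the patient’s previous answer) can be sketched as:

```python
def score_serial_sevens(responses):
    """Score the MoCA serial-7 item per the official guideline: a response
    is correct if it is exactly 7 less than the previous response (the
    first is checked against 100). Points: 4-5 correct subtractions = 3,
    2-3 = 2, 1 = 1, 0 = 0."""
    previous = 100
    correct = 0
    for r in responses:
        if r == previous - 7:
            correct += 1
        previous = r  # each subtraction is judged against the prior answer
    if correct >= 4:
        return 3
    if correct >= 2:
        return 2
    return correct  # 1 -> 1 point, 0 -> 0 points

# The two worked examples from the marking schedule:
print(score_serial_sevens([92, 85, 78, 71, 64]))  # 3 (4 correct subtractions)
print(score_serial_sevens([94, 86, 79, 72, 67]))  # 2 (2 correct subtractions)
```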

Q7: Sentence Repetition

  • Which of these repeated sentences would be given a point? Tick the option(s)

  • I only know that John is the one I’ll help today

  • John is the one who helped today

  • I only know that John is the one to help today – correct ANSWER only

  • The cats always hid under the couch when the dog is in the room

  • The cat always hides under the couch when the dogs were in the room

  • The cat hid under the couch when the dogs were in the room

ANSWER: none of the above options is exactly correct. The answer is correct if none of these options is circled.

Score as correct only if both sentences are marked correctly. If only one is correct, score as incorrect.

Q8: Words beginning with F

Please circle the words that do not score a point when the patient is asked to give “words beginning with F”

France (PN)   Foxes*                         Frustration*         Frangipane

Fox *              Four (number)            Face                    Froth

Forrest            Festive                         Frustrating*         Fossil

Fantail            Frances (PN)              Fantastic              Fun

Fax                Fourteen (number)     Fred (PN)           Finland (PN)

ANSWER: All of the following words must be circled as being words that do not score in order to mark this question correct:

- France, Frances, Fred, Finland (proper nouns)

- Four, Fourteen (numbers)

Words that begin with the same sound but different suffix

- Of Fox and Foxes, one must be marked as incorrect

- Of Frustration and Frustrating, one must be marked as incorrect

  • Mark out of 8 – there are 8 answers that cannot score points (counting only one of Fox/Foxes and only one of Frustration/Frustrating as non-scoring).
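The exclusion rules above (no proper nouns, no numbers, and only one of two words sharing a root with different suffixes) can be sketched as follows. This is a crude illustration: the exclusion sets are hand-built for this word list, and a simple four-letter prefix heuristic stands in for real root matching.

```python
# Hand-built exclusion sets for the example word list above (a real marker
# applies the official guideline, not a fixed list).
PROPER_NOUNS = {"france", "frances", "fred", "finland"}
NUMBERS = {"four", "fourteen"}

def scoreable_words(words):
    """Return the responses that earn credit: drop proper nouns and
    numbers, and credit words sharing a root with a different suffix
    only once (crude prefix heuristic, not true morphology)."""
    kept = []
    for w in (w.lower() for w in words):
        if w in PROPER_NOUNS or w in NUMBERS:
            continue
        # same root, different suffix: credit only the first occurrence,
        # e.g. "foxes" after "fox", "frustrating" after "frustration"
        if any(w.startswith(k[:4]) or k.startswith(w[:4]) for k in kept):
            continue
        kept.append(w)
    return kept

print(scoreable_words(["Fox", "Foxes", "France", "Four", "Fun"]))  # ['fox', 'fun']
```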


Q9: Abstraction

Which of the following are correct answer(s) for the “similarity between question” (please tick – you may select more than one option)

Train – bicycle

They both have wheels

I am interested in both trains and bicycles

They are modes of transport

Ruler – watch

They are tools for measurement

They have numbers

I own both of these items

ANSWER: only the answers in bold are correct

  • Mark 0, 1 or 2 out of 2 (ie, if both answers are correct then score 2)

Q10: Delayed Recall 1

  • If a patient gives all 5 words correct but in a different order, does this affect their score?

Yes                  No

Q11: Delayed Recall 2

  • If a patient recalls the words with a clue, do they score a point?

Yes                  No

Q12: Education

  • Your patient finished high school at year 11. Does this affect the score of the MoCA test? If so, how?

ANSWER: Yes

Q13: Reason for change to score for education

ANSWER: A point is added for <12 years of formal education

  • Mark as correct or incorrect. If the answer to Q12 was “No” (which is incorrect), Q13 should be left blank, as the participant cannot know the reason the score would change

Appendix 3

Montreal Cognitive Assessment Re-Audit

We are carrying out an audit of medical staff knowledge of the Montreal Cognitive Assessment (MoCA). We understand that education around how to carry out and mark the MoCA test can be variable.

Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.

Part One: General Questions

Please circle your answer

1. Please tell us what level you are at:

Trainee Intern          House Surgeon          Other__________

2. Did you complete the first audit questionnaire?

Yes                  No

3. How often have you completed a MoCA in the last 12 months?

Never      Weekly      Monthly      Once a year

4. Do you know the reason for completing the MoCA on your patients?

Always      Often      Sometimes      Never

5. Were you aware before today that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?

Yes                  No

6. Have you ever read the formal Administration and Scoring instructions?

Yes                  No

7. How long would you estimate it takes you to carry out the MoCA?

<10 min          11–20 min          >20 min

8. Are you aware there is more than one version of the MoCA?

Yes                  No

9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?

Yes                  No

Summary

Abstract

Aim

To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).

Method

A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and to score the test. Several weeks later a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.

Results

Seventy-one individuals completed the initial audit and 46 completed the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question, to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing and abstraction, and in awareness of the effect of years of education on the MoCA score.

Conclusion

Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.

Author Information

- Chani Tromop-van Dalen, Advanced Trainee in General Medicine, Capital and Coast District Health Board, Wellington; Katie Thorne, Advanced Trainee in General Medicine and Geriatrics, Hutt Valley District Health Board, Lower Hutt; Krystina Common, Advan

Acknowledgements

The authors wish to acknowledge Dr Joanne Williams, Hutt Valley District Health Board, for her assistance in supervising the project.

Correspondence

Dr Chani Tromop-van Dalen, Department of Medicine, Wellington Regional Hospital, Riddiford Street, Newtown, Wellington 6022.

Correspondence Email

chanitvd@gmail.com

Competing Interests

Dr Thorne reports affiliation with Flights to Auckland outside the submitted work.

  1. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J Am Geriatr Soc. 2005; 53:695–9.
  2. Tsoi KK, Chan JY, Hirai HW, et al. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med. 2015; 175:1450–8.
  3. Lam B, Middleton LE, Masellis M, et al. Criterion and convergent validity of the Montreal cognitive assessment with screening and standardized neuropsychological testing. J Am Geriatr Soc. 2013; 61:2181–5.
  4. Damian AM, Jacobsen SA, Hentz JG, et al. The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011; 31:126–31.
  5. Smith T, Gildeh N, Holmes C. The Montreal Cognitive Assessment: validity and utility in a memory clinic setting. Can J Psychiatry. 2007; 52:329–32.
  6. Zadikoff C, Fox SH, Tang-Wai DF, et al. A comparison of the Mini-Mental state exam to the Montreal Cognitive Assessment in identifying cognitive deficits in Parkinson’s disease. Mov Disord. 2008; 23:297–9.
  7. Dalrymple-Alford JC, MacAskill MR, Nakas CT, et al. The MoCA: Well-suited screen for cognitive impairment in Parkinson disease. Neurology. 2010; 75:1717–25.
  8. Villeneuve S, Pepin V, Rahayel S, et al. Mild cognitive impairment in moderate to severe COPD: a preliminary study. Chest. 2012; 142:1516–1523.
  9. Webb AJ, Pendlebury ST, Li L, et al. Validation of the Montreal Cognitive Assessment versus Mini-Mental State Examination Against Hypertension and Hypertensive Arteriopathy After Transient Ischaemic Attack or Minor Stroke. Stroke. 2014; 45:3337–3342.
  10. Videnovic A, Bernard B, Fan W, et al. The Montreal Cognitive Assessment as a screening tool for cognitive dysfunction in Huntington’s disease. Mov Disord. 2010; 25:401–4.
  11. Mocatest.org [homepage on the internet]. Nasreddine Z. [updated 2015, cited 6 Jan 2018]. Available from http://www.mocatest.org/about
  12. Montreal Cognitive Assessment (MoCA) Administration and Scoring Instructions. Nasreddine Z. [updated August 18, 2010]. Available from http://www.mocatest.org/wp-content/uploads/2015/tests-instructions/MoCA-Instructions-English_2010.pdf
  13. Larner AJ. Screening utility of the Montreal Cognitive Assessment (MoCA): in place of – or as well as – the MMSE? Int Psychogeriatr. 2012; 24:391–6.
  14. Schafer K, De Santi S, Schneider LS. Errors in ADAS-cog administration and scoring may undermine clinical trials results. Curr Alzheimer Res. 2011; 8:373–6.
  15. Ramos E, Alfonso VC, Schermerhorn SM. Graduate students’ administration and scoring errors on the Woodcock-Johnson III tests of cognitive abilities. Psychology in the Schools. 2009; 46:650–657.
  16. Kozora E, Kongs S, Hampton M, Zhang L. Effects of examiner error on neuropsychological test results in a multi-site study. Clin Neuropsychol. 2008; 22:977–88.
  17. Kazora E, Kongs S, Box T, et al. Training and management of a multisite neuropsychological testing protocol for the Department of Veterans Affairs cooperative study evaluating on-and-off pump coronary artery bypass graft procedures. Clin Neuropsychol. 2007; 21:653–62.
  18. Price CC, Cunningham H, Coronado N, et al. Clock drawing in the Montreal Cognitive Assessment: recommendations for dementia assessment. Dement Geriatr Cogn Disord. 2011; 31:179–87.
  19. Ismail Z, Rajji TK, Shulman KI. Brief cognitive screening instruments: an update. Int J Geriatr Psychiatry. 2010; 25:111–20.
  20. Horton DK, Hynan LS, Lacritz LH, et al. An Abbreviated Montreal Cognitive Assessment (MoCA) for Dementia Screening. Clin Neuropsychol. 2015; 29:413–25.
  21. Borson S, Scanlan JM, Chen P, Ganguli M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003; 51:1451–4.
  22. University of Otago, Wellington. Advanced Learning in Medicine Sixth Year Trainee Intern Handbook: 2017–2018. University of Otago. 3 p. Available from: http://www.otago.ac.nz/wellington/otago677064.pdf



Methods

This audit was completed between April and June 2017 at Canterbury, Capital and Coast and Hutt Valley District Health Boards (DHBs) in New Zealand. Participants included first-year doctors (also known as house officers) and final-year medical students (trainee interns) attending protected training time sessions.

Participants completed an anonymous two-part written questionnaire (Appendix 1). Part 1 comprised participant demographics and information regarding the frequency of MoCA administration, prior knowledge about the test and experience of formal teaching. Part 2 consisted of examples of completed questions from the MoCA designed to test participants’ ability to identify errors in administration and to score the test. Participants were given a copy of a blank MoCA test sheet but not the official administration or marking schedule. Results were assessed using a marking scheme and verified by two separate study investigators (Appendix 2).

Several weeks after the initial questionnaire, a 15-minute teaching session was given, followed immediately by a repeat of the questionnaire. The teaching covered all questions in the MoCA and the formal administration and scoring guidelines corresponding to each question (available at www.mocatest.org).12 Part 1 of the repeat questionnaire was the same as the initial audit with the addition of two questions: whether participants had completed the initial questionnaire and whether they felt the teaching was beneficial (Appendix 3). Part 2 of the questionnaire remained unchanged.

Statistical analysis was completed using IBM SPSS Statistics version 23 for Windows, with Fisher’s exact test used to determine statistical significance for discrete variables. The Holm-Bonferroni method was applied to the Part 2 results to adjust for multiple comparisons. Unanswered questions were marked as incorrect.
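The analysis described above can be sketched with standard-library Python: a two-sided Fisher’s exact test on each 2x2 table of correct/incorrect answers before and after teaching, followed by Holm-Bonferroni adjustment. The counts below are hypothetical, not the study’s data.

```python
# Sketch of the analysis: Fisher's exact test per question, then
# Holm-Bonferroni adjustment. Counts are hypothetical, not the study's data.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def pmf(k):  # hypergeometric probability of a table with top-left cell k
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = pmf(a)
    lo, hi = max(0, c1 - r2), min(c1, r1)
    # sum the probabilities of every table as or more extreme than observed
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

def holm(pvalues):
    """Holm-Bonferroni step-down adjustment of a dict of p-values."""
    items = sorted(pvalues.items(), key=lambda kv: kv[1])
    m, adjusted, running_max = len(items), {}, 0.0
    for i, (name, p) in enumerate(items):
        running_max = max(running_max, min(1.0, (m - i) * p))
        adjusted[name] = running_max  # keeps adjusted p-values monotone
    return adjusted

# Hypothetical pre- vs post-teaching counts per question: (correct, incorrect)
tables = {
    "trail-making": ((40, 31), (44, 2)),
    "cube":         ((65, 6),  (43, 3)),
}
raw = {q: fisher_exact_2x2(pre[0], pre[1], post[0], post[1])
       for q, (pre, post) in tables.items()}
for q, p in holm(raw).items():
    print(f"{q}: Holm-adjusted p = {p:.4f}")
```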

The audit was deemed to be outside the scope of review by the Health and Disability Ethics Committee, New Zealand. It was registered at the research offices for all three DHBs.

Results

Part 1

Seventy-one individuals completed the questionnaire for the initial audit and 46 in the follow-up audit (Table 1).

Table 1: Distribution of participants among the three DHBs.

The majority of study participants in both the initial and follow-up audits were house officers (62 [87%] and 43 [93%] respectively), with the remainder being trainee interns and other students (Table 2). There was no significant difference between the pre- and post-teaching groups. Twenty-six (57%) of the participants in the follow-up audit had also completed the initial questionnaire.

Table 2: Results of Part 1 of audit questionnaire.

 

The majority of junior doctors carried out the MoCA on a monthly basis (37% in the initial audit and 43% in the follow-up audit); however, the frequency varied, with some having never carried out the test in the last 12 months. Prior to the teaching session provided, 23% of participants had received formal teaching on how to administer and score the MoCA. There was no statistically significant difference between the initial and re-audit groups in the frequency of MoCA testing or in the number of participants who understood the reason for performing the MoCA.

Although there was no statistically significant difference in the number of participants who were aware of the formal guidelines for completing the MoCA, significantly more participants had read the guidelines in the follow-up audit (77% and 93% respectively). Forty-one participants (89%) thought that the teaching session had improved their ability to complete MoCA testing.

Part 2

Statistically significant improvements were seen in participants’ ability to administer the trail-making question and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score.

Table 3: Results of Part 2 of the initial and re-audits demonstrating the number (and %) of participants giving correct answers for each question in the MoCA questionnaire.

*N/A: if participants did not know that there was a change to the MoCA score with <12 years of education then they did not answer the final question.

Discussion

The MoCA is a standardised test with formal marking guidelines, so the goal should be 100% accuracy in administering and scoring the test. A single error in administration or scoring could change the score for a patient, leading to an incorrect diagnosis. Nasreddine et al identified a MoCA score of <26/30 to indicate MCI with sensitivity of 90%.1 Other studies have, however, suggested lower scoring cutoffs may have superior predictive rates, particularly in populations where the baseline probability of cognitive impairment is higher.13

Although we could not identify any studies that specifically looked at the accuracy of administering and scoring the MoCA, we did identify studies that demonstrated errors in administration and scoring of other neuropsychological tests.14,15,16  The Alzheimer’s Disease Assessment Scale—cognitive subscale (ADAS-cog) is the most commonly used primary outcome measure in Alzheimer’s disease trials. Schafer et al found that 80.6% of raters of the ADAS-cog test made errors in administration and scoring leading to concerns that errors may affect clinical trial outcomes.14 Ramos et al identified examiner errors during the administration and scoring of the Woodcock Johnson III test of Cognitive Abilities carried out by graduate students. The number of errors reduced after three test administrations, suggesting that the students may benefit from more focus and practice on correct administration and scoring. 15

Only 23% of participants in this audit had been given any formal teaching on how to conduct the test prior to our teaching session. Standardised training and feedback given to inexperienced administrators has been shown to result in a decline in errors of instruction, administration and recording of neuropsychological tests.17 Following the brief teaching session we found improvements in participants’ ability to administer the trail-making question. There were improvements in their ability to identify errors and to score the example questions of clock faces, naming animals, serial seven subtractions, verbal fluency testing, abstraction and the question about the effect of years of education on the MoCA score. There was also subjective improvement with 89% of participants responding that the teaching session had improved their ability to carry out the MoCA.

There was no statistically significant improvement in the questions for marking the cube, attention, sentence repetition or delayed recall. Knowledge of these questions was good in the initial audit (correct answers mostly above 90%), leaving little room for improvement. Conversely, although there was a statistically significant improvement in marking the clock face examples in the follow-up audit, only 50% of participants scored all three examples correctly. Price et al demonstrated scoring variability for the clock-drawing component between clinicians of one to three points when using the MoCA guidelines, highlighting concerns that error could potentially alter the overall score by 10%. Improvement was found with repeated training and use of a more detailed scoring system (Consentino criteria), however this also took raters longer to score. They recommended training and practice to improve the reliability of scoring.18

Our audit has several limitations; the number of participants was small and there were fewer participants in the follow-up audit sessions. Although protected training time is considered compulsory, other work commitments and leave can mean that not all junior doctors attend every session. Our initial audit at Capital and Coast DHB was carried out following a teaching session given by one of the house officer supervising consultants, which may have resulted in higher attendance to this session (29 participants compared to 17 in the follow-up audit session). Carrying out the questionnaires at more than one teaching session at each DHB would likely have improved our response rates.

Only 57% of participants completed both the initial and follow-up audit. There was no statistically significant difference between the demographic statistics, suggesting that both groups were similar and therefore reasonably comparable. We felt it was important to keep the questionnaire anonymous to encourage honest answers and were therefore unable to identify individuals who participated in both audits. Further research could be improved by using the same group of participants for initial and follow-up audits.

Our audit was carried out between April and June 2017, when junior doctors had been working for 4 to 7 months (house officers in New Zealand begin work in November). Our results showed that some junior doctors had never carried out the MoCA (17% in the initial audit and 9% in the follow-up) and many had only performed the test once in the last 12 months (21% in the initial audit and 30% in the follow-up). Our results may have differed had we carried out the audit later in the year when junior doctors had received more exposure to the MoCA in their clinical practice.

The majority of participants took over 10 minutes to complete the test (89% in the initial audit and 91% in the follow-up), which may be a reflection of limited experience. The MoCA is also carried out by occupational therapists who have more experience than junior doctors, however, tend to use it as part of more complete cognitive and functional assessment rather than as a screening test. Other brief cognitive tests have been shown to compare well with the MoCA and MMSE and could be considered as simpler alternatives, which may be easier for junior doctors with limited experience to administer.19,20,21 A short-form MoCA comprising three statistically selected components; orientation, word recall and serial subtraction, has been shown to be effective at classifying MCI and Alzheimer’s disease when compared to the MoCA and MMSE.20 The Mini-Cog compares well with the MMSE for the detection of dementia but is much briefer, being made up of only the three-item recall and clock drawing components.21 This test may not be as suitable due to the aforementioned problems with marking of the clock drawing component.

As well as house officers there were a small number of trainee interns and other students involved in the audit. Students probably have less clinical experience and this may have led to lower scores of the questionnaire. The proportion of trainee interns and students involved in both audits was similar so unlikely to have made an impact on the improvements noted in the follow-up audit. Trainee interns are both full-time students and apprentice house officers, taking responsibility for patient care decisions under supervision.22 Our real-world experience is that students do carry out the MoCA and so including this group in our study was deemed important.

Conclusions

Our audit shows that the MoCA is a test performed by junior doctors on a variable basis. Prior to our teaching session the majority of junior doctors had not received any formal teaching on how to complete the MoCA test. A short teaching session improves junior doctors’ ability to administer and score the MoCA.

We recommend that the formal administration and scoring instructions are used each time the MoCA is performed. The newly released app for smart devices may make this easier to achieve and more appealing to today’s junior doctors. Annual teaching on how to administer and score the MoCA will be provided to the house officers in the hospitals involved in this study. We also recommend that the same training is incorporated into the medical school curriculum as trainee interns and medical students are also involved in administering the test.

Deficits in knowledge around the MoCA may lead to inaccurate administration and scoring, and incorrect MoCA scores may have consequences for patients’ clinical outcomes. This study did not assess these consequences directly, and further research in this area could be of value.

Appendix 1


Appendix 2

MoCA Audit: Questionnaire Marking Schedule

Use the official MoCA Administration and Scoring Instructions alongside this marking schedule.

For Yes/No questions, the correct answer is shown in bold type.

Q1: Visuospatial/Executive (Trailmaker)

In your own words, how would you explain to a patient how to complete the trail-making question?

ANSWER: To be considered correct, the phrase must, at a minimum, describe or imply a trail between alternating letters and numbers in ascending order (not necessarily in these exact words).

Note: this question should be validated between two investigators, as it is somewhat open to interpretation whether a response fits this description.

Q2: Cube drawing

Please mark these examples of a copied cube:

/1                  /1                  /1

ANSWER:

0/1                  1/1                  0/1


Mark as 0, 1, 2 or 3 out of 3 (one point for each cube marked correctly)

Q3: Clockface

Please mark these clock face examples:

/3                  /3                  /3


ANSWER:

2/3                  1/3                  2/3

Mark as 0, 1, 2 or 3 out of 3 (one point for each clock face marked correctly)

Q4: Naming animals

Which of these answers would you mark as correct for the names of the animals? Please circle the correct answer(s); you may circle more than one answer (MoCA version 1)

Image 1            Image 2             Image 3

Lion                  Rhino                 Camel

Cat                   Hippopotamus     Horse

Tiger                 Rhinoceros         Dromedary

ANSWER: Both Rhino and Rhinoceros should be selected, and both Camel and Dromedary, as each answer in the pair is correct. If only one of a pair is selected, mark as incorrect.

  • Mark as 0, 1, 2 or 3 out of 3 (ie, if all highlighted words are circled then score 3; if Rhino but not Rhinoceros is circled and all others are circled, then score 2)

Memory (no question here)

Q5: Attention

  • When asked to tap their hand on the letter “A”, a patient also taps their hand twice on the letter “J”, and on one occasion does not tap for the letter “A”

Do they score a point for this question?

Yes                  No

Q6: Serial 7 subtraction

  • For serial 7 subtraction, how many points would you give the following examples?

100    92    85    78    71    64    points allocated:

ANSWER: 3 points (4 correct subtractions)

100    94    86    79    72    67    points allocated:

ANSWER: 2 points (2 correct subtractions)

  • Score 0, 1 or 2 out of 2 (ie, if both examples have the correct points allocated then score 2; if only one has the correct points allocated then score 1)

Q7: Sentence Repetition

  • Which of these repeated sentences would be given a point? Tick the option(s)

  • I only know that John is the one I’ll help today

  • John is the one who helped today

  • I only know that John is the one to help today (ANSWER: only this option is correct)

→ The cats always hid under the couch when the dog is in the room

→ The cat always hides under the couch when the dogs were in the room

→ The cat hid under the couch when the dogs were in the room

ANSWER: none of the above options is exactly correct; the correct response is to circle none of them.

Score as correct only if both parts are answered correctly. If only one is correct, score as incorrect.

Q8: Words beginning with F

Please circle the words that do not score a point when the patient is asked to give “words beginning with F”

France (PN)   Foxes*                         Frustration*         Frangipane

Fox *              Four (number)            Face                    Froth

Forrest            Festive                         Frustrating*         Fossil

Fantail            Frances (PN)              Fantastic              Fun

Fax                Fourteen (number)     Fred (PN)           Finland (PN)

ANSWER: All of the following words must be circled as being words that do not score in order to mark this question correct:

- France, Frances, Fred, Finland (proper nouns)

- Four, Fourteen (numbers)

Words that begin with the same sound but a different suffix:

- Of Fox and Foxes, one must be marked as incorrect

- Of Frustration and Frustrating, one must be marked as incorrect

  • Mark out of 8 – there are 8 answers that cannot score points (taking into account that only one of Fox/Foxes, and only one of Frustration/Frustrating, can be counted).


Q9: Abstraction

Which of the following are correct answer(s) for the “similarity between” question? (Please tick – you may select more than one option)

Train – bicycle

They both have wheels

I am interested in both trains and bicycles

They are modes of transport

Ruler – watch

They are tools for measurement

They have numbers

I own both of these items

ANSWER: only the answers in bold are correct (They are modes of transport; They are tools for measurement)

  • Mark 0, 1 or 2 out of 2 (ie, if both answers are correct then score 2)

Q10: Delayed Recall 1

  • If a patient gives all 5 words correct but in a different order, does this affect their score?

Yes                  No

Q11: Delayed Recall 2

  • If a patient recalls the words with a clue, do they score a point?

Yes                  No

Q12: Education

  • Your patient finished high school at year 11. Does this affect the score of the MoCA test? If so, how?

ANSWER: Yes

Q13: Reason for change to score for education

ANSWER: A point is added for <12 years of formal education

  • Mark as correct or incorrect. If the answer to Q12 was “No”, which is incorrect, then Q13 should be left blank, as the participant cannot know the reason the score would change.

Appendix 3

Montreal Cognitive Assessment Re-Audit

We are carrying out an audit of medical staff knowledge around the Montreal Cognitive Assessment (MoCA). We understand that education around how to carry out and mark the MoCA test can be variable.

Please answer this questionnaire honestly. It will be kept anonymous and any information will be used to improve understanding around this important tool.

Part One: General Questions

Please circle your answer

1. Please tell us what level you are at:

Trainee Intern House Surgeon Other__________

2. Did you complete the first audit questionnaire?

Yes                  No

3. How often have you completed a MoCA in the last 12 months?

Never      Weekly      Monthly      Once a year

4. Do you know the reason for completing the MoCA on your patients?

Always      Often      Sometimes      Never

5. Were you aware before today that there are formal Administration and Scoring instructions on how to carry out and mark the MoCA?

Yes                  No

6. Have you ever read the formal Administration and Scoring instructions?

Yes                  No

7. How long would you estimate it takes you to carry out the MoCA?

<10 min          11–20 min          >20 min

8. Are you aware there is more than one version of the MoCA?

Yes                  No

9. Do you feel the teaching today has improved your ability to perform and mark the MoCA?

Yes                  No

Summary

Abstract

Aim

To investigate junior doctors’ knowledge of how to conduct the Montreal Cognitive Assessment (MoCA).

Method

A two-part questionnaire was administered to junior doctors at teaching sessions across three New Zealand district health boards. Part 1 investigated prior experience and knowledge of the MoCA. Part 2 tested junior doctors’ ability to identify errors in administration and to score the test. Several weeks later, a brief MoCA teaching session was given, followed immediately by a repeat questionnaire.

Results

Seventy-one individuals completed the initial audit and 46 completed the follow-up audit. The majority of junior doctors carried out the MoCA on a monthly basis. Prior to our teaching session, only 23% of participants had received formal teaching on how to administer and score the MoCA. The majority (89%) of participants thought that the teaching session had improved their ability to conduct the MoCA. Statistically significant improvements were seen in participants’ ability to administer the trail-making question, to score the example questions on clock faces, naming animals, serial seven subtractions, verbal fluency testing and abstraction, and in awareness of the effect of years of education on the MoCA score.

Conclusion

Junior doctors administer and score the MoCA but many have not received formal teaching on how to do so. A short teaching session improved their ability to conduct the MoCA and identify errors in administration and scoring.

Author Information

- Chani Tromop-van Dalen, Advanced Trainee in General Medicine, Capital and Coast District Health Board, Wellington; Katie Thorne, Advanced Trainee in General Medicine and Geriatrics, Hutt Valley District Health Board, Lower Hutt; Krystina Common, Advan

Acknowledgements

The authors wish to acknowledge Dr Joanne Williams, Hutt Valley District Health Board, for her assistance in supervising the project.

Correspondence

Dr Chani Tromop-van Dalen, Department of Medicine, Wellington Regional Hospital, Riddiford Street, Newtown, Wellington 6022.

Correspondence Email

chanitvd@gmail.com

Competing Interests

Dr Thorne reports affiliation with Flights to Auckland outside the submitted work.

  1. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J Am Geriatr Soc. 2005; 53:695–9.
  2. Tsoi KK, Chan JY, Hirai HW, et al. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med. 2015; 175:1450–8.
  3. Lam B, Middleton LE, Masellis M, et al. Criterion and convergent validity of the Montreal cognitive assessment with screening and standardized neuropsychological testing. J Am Geriatr Soc. 2013; 61:2181–5.
  4. Damian AM, Jacobsen SA, Hentz JG, et al. The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011; 31:126–31.
  5. Smith T, Gildeh N, Holmes C. The Montreal Cognitive Assessment: validity and utility in a memory clinic setting. Can J Psychiatry. 2007; 52:329–32.
  6. Zadikoff C, Fox SH, Tang-Wai DF, et al. A comparison of the Mini-Mental state exam to the Montreal Cognitive Assessment in identifying cognitive deficits in Parkinson’s disease. Mov Disord. 2008; 23:297–9.
  7. Dalrymple-Alford JC, MacAskill MR, Nakas CT, et al. The MoCA: Well-suited screen for cognitive impairment in Parkinson disease. Neurology. 2010; 75:1717–25.
  8. Villeneuve S, Pepin V, Rahayel S, et al. Mild cognitive impairment in moderate to severe COPD: a preliminary study. Chest. 2012; 142:1516–1523.
  9. Webb AJ, Pendlebury ST, Li L, et al. Validation of the Montreal Cognitive Assessment versus Mini-Mental State Examination Against Hypertension and Hypertensive Arteriopathy After Transient Ischaemic Attack or Minor Stroke. Stroke. 2014; 45:3337–3342.
  10. Videnovic A, Bernard B, Fan W, et al. The Montreal Cognitive Assessment as a screening tool for cognitive dysfunction in Huntington’s disease. Mov Disord. 2010; 25:401–4.
  11. Mocatest.org [homepage on the internet]. Nasreddine Z. [updated 2015, cited 6 Jan 2018]. Available from http://www.mocatest.org/about
  12. Montreal Cognitive Assessment (MoCA) Administration and Scoring Instructions. Nasreddine Z. [updated August 18, 2010]. Available from http://www.mocatest.org/wp-content/uploads/2015/tests-instructions/MoCA-Instructions-English_2010.pdf
  13. Larner AJ. Screening utility of the Montreal Cognitive Assessment (MoCA): in place of – or as well as – the MMSE? Int Psychogeriatr. 2012; 24:391–6.
  14. Schafer K, De Santi S, Schneider LS. Errors in ADAS-cog administration and scoring may undermine clinical trials results. Curr Alzheimer Res. 2011; 8:373–6.
  15. Ramos E, Alfonso VC, Schermerhorn SM. Graduate students’ administration and scoring errors on the Woodcock-Johnson III tests of cognitive abilities. Psychology in the Schools. 2009; 46:650–657.
  16. Kozora E, Kongs S, Hampton M, Zhang L. Effects of examiner error on neuropsychological test results in a multi-site study. Clin Neuropsychol. 2008; 22:977–88.
  17. Kazora E, Kongs S, Box T, et al. Training and management of a multisite neuropsychological testing protocol for the Department of Veterans Affairs cooperative study evaluating on-and-off pump coronary artery bypass graft procedures. Clin Neuropsychol. 2007; 21:653–62.
  18. Price CC, Cunningham H, Coronado N, et al. Clock drawing in the Montreal Cognitive Assessment: recommendations for dementia assessment. Dement Geriatr Cogn Disord. 2011; 31:179–87.
  19. Ismail Z, Rajji TK, Shulman KI. Brief cognitive screening instruments: an update. Int J Geriatr Psychiatry. 2010; 25:111–20.
  20. Horton DK, Hynan LS, Lacritz LH, et al. An Abbreviated Montreal Cognitive Assessment (MoCA) for Dementia Screening. Clin Neuropsychol. 2015; 29:413–25.
  21. Borson S, Scanlan JM, Chen P, Ganguli M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003; 51:1451–4.
  22. University of Otago, Wellington. Advanced Learning in Medicine Sixth Year Trainee Intern Handbook: 2017–2018. University of Otago. 3 p. Available from: http://www.otago.ac.nz/wellington/otago677064.pdf
