Vol. 53 No. 1, 48–52 • 30 January 2024

Normative data for baseline and longitudinal neuropsychological assessments in Singapore

The authors have made a correction to this article at
https://doi.org/10.47102/annals-acadmedsg.2023-98correction

Dear Editor,

Neuropsychological assessments play a vital role in the early detection of cognitive disorders. However, the lack of Singapore-specific norms has resulted in a reliance on international, usually Western, norms that potentially reduce the accuracy and reliability of assessments due to sociocultural differences.1 Moreover, the lack of longitudinal norms limits the reliable monitoring of cognitive changes over time.

We therefore sought to develop baseline and longitudinal norms for commonly used neuropsychological tests. We retrospectively retrieved data of participants who had undergone cognitive testing as part of research studies conducted at the National Neuroscience Institute between 2013 and 2022. These studies were granted approval by the SingHealth Centralised Institutional Review Board (CIRB: 2013/267/A, 2015/2218, 2017/2550, and 2019/2173).

We included participants aged ≥50 years who scored ≥23 on the Montreal Cognitive Assessment2 (MoCA). Exclusion criteria included known or suspected neurodegenerative, neurological or major psychiatric illnesses; scores of >9 on the Geriatric Depression Scale-Short Form (GDS-15; indicative of moderate-to-severe depressive symptoms); and being non-English speaking. Neuropsychological measures included Digit Span Forwards and Backwards, Coding, Symbol Search, and Block Design from the Wechsler Adult Intelligence Scale-Fourth Edition; Visual Reproduction (Immediate and Delayed Recall) and a modified version of Logical Memory (Immediate and Delayed Recall of an adapted version of Story B) from the Wechsler Memory Scale-Fourth Edition; the Colour Trails Test – Parts 1 and 2; the Chinese Frontal Assessment Battery; the Rey Complex Figure Test (Copy); and the Alzheimer’s Disease Assessment Scale-Cognitive Subscale Word Recall test (where scores reflect the average number of words forgotten over 3 learning trials [Immediate Recall] and after a delay [Delayed Recall]).

Our baseline normative sample consisted of 552 participants. Mean age of the baseline sample was 63.65 ± 6.93 years, mean education was 12.86 ± 3.38 years, and 47.6% were male. Mean MoCA score was 27.16 ± 1.97 and mean GDS-15 score was 2.05 ± 2.07. Baseline norms were stratified by age (50–64 and 65–80 years) and education (≤10 and >10 years) and are presented in Table 1.

Our longitudinal subsample included 173 participants from the baseline sample who had completed a follow-up assessment within 12 ± 6 months from baseline testing. We selected this timeframe as it best approximated, in our experience, the typical interval between baseline and follow-up neuropsychological assessments in clinical and research settings. We excluded participants who scored <23 on the MoCA or >9 on the GDS-15 at repeat assessment. Mean age of the longitudinal sample was 63.58 ± 7.24 years, mean education was 13.14 ± 3.26 years, and 50.9% were male. Mean MoCA score was 27.34 ± 1.76 and mean GDS-15 score was 1.98 ± 1.99. Mean retest duration was 12.87 ± 2.71 months. Reliable change indices (RCIs) were calculated following established formulas3 using intraclass coefficients as a measure of test-retest stability4 and are presented in Table 1.
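For readers who wish to compute analogous indices, one common formulation of the RCI (the Jacobson–Truax approach described in reference 3, with the ICC as the stability coefficient) can be sketched as follows. The standard deviation and ICC values below are hypothetical placeholders for illustration, not values from Table 1:

```python
import math

def reliable_change_index(sd_baseline: float, icc: float, z_crit: float) -> float:
    """One common RCI formulation: the critical z value multiplied by the
    standard error of the difference, where SEM = SD * sqrt(1 - r) and the
    ICC serves as the test-retest stability coefficient r."""
    sem = sd_baseline * math.sqrt(1.0 - icc)
    s_diff = math.sqrt(2.0 * sem ** 2)
    return z_crit * s_diff

# Two-tailed critical z values for the 80% and 90% confidence levels in Table 1
Z_80, Z_90 = 1.282, 1.645

# Hypothetical example: baseline SD of 10 points, ICC of 0.75
rci_80 = reliable_change_index(sd_baseline=10.0, icc=0.75, z_crit=Z_80)
print(round(rci_80, 2))  # → 9.07
```

As the formula makes explicit, lower test-retest stability (a smaller ICC) widens the RCI, so less reliable tests require larger score changes before a change is deemed reliable.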

Table 1. Baseline and longitudinal norms.

Test-retest reliabilities in our study ranged from poor to good,4 and were generally similar to those reported in other research investigating cognitive change in adults aged ≥50 years over an approximately 1-year test-retest interval.5 Moreover, test-retest coefficients are expected to decrease as test-retest intervals lengthen.6 Except for the Colour Trails Test – Part 2 for participants aged 50–64 with ≤10 years of education, the practice effects (PEs; i.e. test score changes due to previous exposure) in our study were quite small. This is consistent with research indicating that PEs are smaller at longer test-retest intervals.7

We believe that these norms are relevant for the characterisation of cognition in local clinical and research settings, especially as the educational profile of our overall sample closely approximates recent census estimates for the average number of years of education of Singaporeans.8 Furthermore, our RCIs also provide a means to reliably assess the significance of change in test scores over time.3 To the best of our knowledge, the RCIs presented here are the first attempt to provide change scores for serial assessments in Singapore. In Table 1, we present RCIs at both 80% and 90% confidence levels to allow users to determine their preferred level of confidence. We also present practice effects to allow users to account for this if needed.3

An example of how to use these norms may be instructive. A 60-year-old with 12 years of education obtains 60 points on Coding at baseline assessment. Their performance can be considered to fall in the normal range (z = -0.38, or between the 25th and 75th percentiles) using our baseline norms. RCIs can then be used to determine psychometrically whether a change in test score at repeat assessment is reliable. At the 80% confidence level, a change of ±12.64 points would be considered either a reliable improvement or decline. If practice effects are to be considered, they can be subtracted from or added to the RCI. Hence, a decline greater than 9.01 points (i.e. 12.64 – 3.63) or an improvement greater than 16.27 points (i.e. 12.64 + 3.63) would be considered reliable at the 80% confidence level.
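The decision rule in the example above can be expressed as a short sketch. The 12.64-point RCI and 3.63-point practice effect are taken from the worked Coding example; the function name and illustrative score changes are our own:

```python
def classify_change(score_change: float, rci: float, practice_effect: float = 0.0) -> str:
    """Classify a retest score change against an RCI, optionally adjusting for an
    expected practice effect (added for improvement, subtracted for decline)."""
    if score_change > rci + practice_effect:
        return "reliable improvement"
    if score_change < -(rci - practice_effect):
        return "reliable decline"
    return "no reliable change"

# Values from the worked Coding example (80% confidence level)
RCI_80, PE = 12.64, 3.63
print(classify_change(-10.0, RCI_80, PE))  # decline of 10 exceeds the 9.01 threshold
print(classify_change(15.0, RCI_80, PE))   # improvement of 15 falls short of 16.27
```

The first call prints "reliable decline" and the second "no reliable change", mirroring the asymmetric thresholds (9.01 for decline, 16.27 for improvement) described in the example.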

There are several limitations to our study, including a relatively small sample size, particularly for the stratified longitudinal norms, and the fact that our norms may not be generalisable to individuals who fall outside the parameters of our study (i.e. English-speaking adults aged 50–80 years; retest duration of 12 ± 6 months). Larger studies with more diverse samples are therefore needed to validate our findings and provide more generalisable data. Additionally, the test-retest coefficients for some tests were poor, although this may be explained by the longer test-retest duration of our study. However, our RCIs arguably have greater external validity, given that the test-retest interval used in the current study more closely approximates typical clinical practice3,5 and may better account for age-related cognitive changes over time.

Our study provides both baseline norms and longitudinal norms, in the form of RCIs, for commonly used neuropsychological tests in a relatively well-educated, English-speaking Singaporean cohort. These data provide valuable information for administrators of neuropsychological assessments in Singapore, where such norms are scarce or absent. Larger scale studies replicating our results and including more diverse samples to improve generalisability are needed.

Funding
This work was supported by the National Neuroscience Institute-Health Research Endowment Fund (NNI-HREF), Singapore (Reference number: 991016); SingHealth Centralised Institutional Review Board (CIRB) (Reference numbers: 2017/2550, 2013/267/A, 2015/2218 and 2019/2173); the Ministry of Education, Singapore, under its MOE AcRF Tier 3 Award (MOE2017-T3-1-002); and the National Medical Research Council (NMRC), Singapore, under its Clinician Scientist Award (MOH-CSAINV18nov-0007) and Clinician Scientist Individual Research Grant (NMRC/CIRG/14MAY025).

Conflict of interest
The authors declared no conflict of interest.

Data availability statement
The dataset used in this study is available upon reasonable request from the corresponding author.

Keywords: cognition, longitudinal data, neuropsychological testing, norms, reliable change indices


References

  1. Collinson SL, Yeo D. Neuropsychology in Singapore: History, development and future directions. In: The Neuropsychology of Asian Americans. London: Psychology Press; 2010:305-12.
  2. Carson N, Leach L, Murphy KJ. A re‐examination of Montreal Cognitive Assessment (MoCA) cutoff scores. Int J Geriatr Psychiatry 2018;33:379-88.
  3. Brooks BL, Sherman E, Iverson GL, et al. Psychometric foundations for the interpretation of neuropsychological test results. In: The little black book of neuropsychology. Springer; 2011:893-922.
  4. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med 2016;15:155-63.
  5. Gavett BE, Ashendorf L, Gurnani AS. Reliable change on neuropsychological tests in the Uniform Data Set. J Int Neuropsychol Soc 2015;21:558-67.
  6. Calamia M, Markon K, Tranel D. The robust reliability of neuropsychological measures: Meta-analyses of test–retest correlations. Clin Neuropsychol 2013;27:1077-105.
  7. Calamia M, Markon K, Tranel D. Scoring higher the second time around: meta-analyses of practice effects in neuropsychological assessment. Clin Neuropsychol 2012;26:543-70.
  8. Singapore Department of Statistics. M850591 – Mean Years of Schooling, Annual. https://www.singstat.gov.sg/publications/reference/ebook/population/education-and-literacy. Accessed 8 March 2023.