Multi-cohort machine learning identifies predictors of cognitive impairment in Parkinson’s disease

Title

Predicting Cognitive Decline in Parkinson’s Disease

One-Sentence Summary

Machine learning models trained on clinical data from three independent patient cohorts identified age at diagnosis and visuospatial ability as stable predictors of cognitive decline in Parkinson’s disease.

Overview

Cognitive impairment is a common non-motor symptom in Parkinson’s disease (PD), but its early prediction remains challenging. This study aimed to develop machine learning models to predict cognitive decline by integrating clinical data from three independent PD cohorts (LuxPARK, PPMI, and ICEBERG). The models were trained to predict two outcomes: mild cognitive impairment (PD-MCI), an objective measure, and subjective cognitive decline (SCD), a patient-reported measure. The multi-cohort models, which combined data from all three groups, demonstrated more stable performance than models trained on single cohorts, while maintaining competitive predictive accuracy. For instance, the multi-cohort model for classifying PD-MCI achieved an Area Under the Curve (AUC) of 0.67, and the model for SCD classification reached an AUC of 0.72. Key predictors consistently identified across cohorts included age at PD diagnosis and visuospatial ability. The analysis revealed that patients diagnosed at age 53 or older had a nearly 2.4-fold higher risk of developing PD-MCI.
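The reported ~2.4-fold figure is a relative risk, which can be illustrated with a short worked example. The counts below are hypothetical stand-ins chosen to reproduce a 2.4 ratio; they are not the study's actual data.

```python
# Worked example of a relative-risk (risk ratio) calculation like the
# reported ~2.4-fold PD-MCI risk for diagnosis at age 53 or older.
# The counts below are hypothetical illustrations, not the study's data.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence in the exposed group over the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical: 24 of 100 patients diagnosed at >=53 develop PD-MCI,
# versus 10 of 100 patients diagnosed younger.
rr = relative_risk(24, 100, 10, 100)
print(f"relative risk: {rr:.1f}")  # -> 2.4
```

A ratio of 1.0 would mean the two groups carry the same risk; values above 1.0 indicate excess risk in the exposed (older-at-diagnosis) group.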

Novelty

The study’s main contribution lies in its multi-cohort approach. While previous machine learning studies for PD cognitive impairment have typically relied on data from a single patient group, this research integrated data from three distinct international cohorts. This methodology enhances the generalizability and robustness of the predictive models. By training and validating the models across diverse populations, the findings are less susceptible to cohort-specific biases, such as differences in patient demographics or clinical assessment protocols. This cross-cohort strategy represents a significant step toward developing more universally applicable predictive tools for clinical use.
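One common way to realize such a cross-cohort strategy is leave-one-cohort-out validation: train on all cohorts but one, then test on the held-out cohort. The sketch below assumes that protocol (the paper's exact scheme may differ) and uses a toy single-feature threshold "model" plus invented data in place of the actual classifiers and cohort records.

```python
# Sketch of leave-one-cohort-out validation (assumed protocol; the study's
# exact scheme may differ). Each cohort is a list of (feature, label) pairs;
# a toy threshold rule stands in for the actual ML classifier.

def fit_threshold(train):
    # "Train": midpoint between the class means of a single feature.
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def evaluate(threshold, test):
    # Fraction of held-out samples the threshold rule classifies correctly.
    correct = sum((x >= threshold) == (y == 1) for x, y in test)
    return correct / len(test)

# Hypothetical stand-ins for LuxPARK, PPMI, and ICEBERG data.
cohorts = {
    "LuxPARK": [(55, 0), (70, 1), (62, 1), (48, 0)],
    "PPMI":    [(66, 1), (52, 0), (71, 1), (50, 0)],
    "ICEBERG": [(58, 0), (68, 1), (64, 1), (47, 0)],
}

for held_out, test in cohorts.items():
    train = [s for name, c in cohorts.items() if name != held_out for s in c]
    t = fit_threshold(train)
    print(f"held out {held_out}: accuracy {evaluate(t, test):.2f}")
```

The key design point is that the held-out cohort contributes nothing to training, so performance on it reflects generalization across sites rather than fit to one cohort's demographics or assessment protocol.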

My Perspective

The divergence in key predictors for objective PD-MCI and subjective SCD is particularly insightful. The model for PD-MCI heavily weighted neurocognitive test scores like visuospatial performance, whereas the SCD model highlighted non-motor symptoms such as sleep disturbances and autonomic dysfunction, as well as male sex. This suggests that the biological processes underlying measurable cognitive decline may differ from the factors influencing a patient’s self-perceived cognitive difficulties. This distinction is crucial; it implies that effective clinical management may require a dual approach that addresses not only objective cognitive performance but also the broader spectrum of non-motor symptoms that shape a patient’s daily experience and quality of life.

Potential Clinical / Research Applications

Clinically, the identified predictors could help stratify patients at high risk for cognitive decline. Clinicians could use factors like older age at diagnosis and poor visuospatial test performance to identify individuals who may benefit most from early interventions, such as cognitive training or medication adjustments. In the long term, these predictive models could be integrated into digital health platforms for remote patient monitoring and screening. For research, this study validates the multi-cohort machine learning framework as a powerful method for identifying robust predictors, a strategy that could be applied to other PD symptoms or different neurodegenerative diseases. Furthermore, the distinct predictor sets for PD-MCI and SCD encourage future research into the different neurobiological mechanisms underlying objective versus subjective cognitive changes.
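A stratification rule built from the two highlighted predictors could look like the sketch below. The age cutoff (>=53) comes from the reported analysis; the visuospatial score scale, its cutoff, and the patient records are hypothetical placeholders.

```python
# Illustrative risk-stratification rule from the study's two key predictors.
# The age cutoff (>=53) is from the reported analysis; the visuospatial
# score scale and its cutoff are hypothetical placeholders.

def flag_high_risk(age_at_diagnosis, visuospatial_score, score_cutoff=20):
    """Return True if the patient matches both high-risk predictors."""
    return age_at_diagnosis >= 53 and visuospatial_score < score_cutoff

# Hypothetical patient records.
patients = [
    {"id": "A", "age_at_diagnosis": 61, "visuospatial_score": 15},
    {"id": "B", "age_at_diagnosis": 47, "visuospatial_score": 12},
    {"id": "C", "age_at_diagnosis": 58, "visuospatial_score": 27},
]
flagged = [p["id"] for p in patients
           if flag_high_risk(p["age_at_diagnosis"], p["visuospatial_score"])]
print(flagged)  # -> ['A']
```

In practice such a rule would only be a first-pass screen feeding into the full predictive model, not a diagnostic decision on its own.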

