Supervised Contrastive Learning for Lacune Detection in MRI

Original Title: Biomarkers

Journal: Alzheimer's & Dementia: The Journal of the Alzheimer's Association

DOI: 10.1002/alz70856_099645

Overview

Lacunes are small, deep brain infarcts that indicate vascular disease and increase the risk of cognitive decline. Identifying them manually is time-consuming and error-prone because of their small size and their resemblance to mimics such as perivascular spaces. This study presents a deep learning framework that automates lacune segmentation on 2D T2-FLAIR MRI. The researchers used a dataset of 427 images, preprocessed to segment the intracranial volume and white matter hyperintensities. The core architecture is an Attention U-Net. To address the severe class imbalance, where lacunes are rare compared to healthy tissue, the model incorporates a ResNet-34 encoder pretrained with supervised contrastive learning, which learns discriminative features by pulling embeddings of lacune samples together while pushing them away from non-lacune samples. On the test set, the model detected 102 of 166 lacunes (61.5%), yielding a figure-of-merit of 0.726, and achieved a patient-level area under the curve of 0.810 for detecting the presence of lacunes.
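The summary describes pretraining the ResNet-34 encoder with supervised contrastive learning. The paper's exact formulation is not reproduced here, so below is a minimal NumPy sketch of the standard SupCon loss; the function name, batch layout, and temperature value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over a batch of patch embeddings.

    embeddings: (N, D) encoder outputs, one row per image patch.
    labels:     (N,) integer labels, e.g. 1 = lacune, 0 = non-lacune.
    """
    # L2-normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Exclude each anchor from its own softmax denominator.
    sim = np.where(self_mask, -np.inf, sim)
    # Row-wise log-softmax (numerically stable: subtract the row max).
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    # Positives: other samples in the batch sharing the anchor's label.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1)
    # Average over anchors that actually have positives in the batch.
    return per_anchor[pos_counts > 0].mean()
```

Minimizing this loss rewards batches in which same-class embeddings are already close: a batch whose lacune samples cluster together yields a lower loss than one where the classes are intermixed, which is exactly the grouping behavior described above.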

Novelty

The primary methodological advance is the integration of supervised contrastive learning into a segmentation pipeline for rare radiological features. Conventional segmentation models struggle with class imbalance and frequently mislabel lacune-mimicking structures as true infarcts. By applying supervised contrastive learning during pretraining, the encoder learns to map lacune samples close together in embedding space while pushing them away from non-lacune samples, such as prominent vessels or white matter lesions. This gives the model stronger semantic features before segmentation training begins. The Attention U-Net architecture then helps the model focus on relevant spatial regions, which is critical given the minute scale of lacunar infarcts relative to the entire brain volume. Together, contrastive representation learning and spatial attention directly target the difficulty of distinguishing small, infrequent pathologies from common anatomical variants.
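The spatial attention referred to above can be sketched as the additive attention gate at the heart of the Attention U-Net decoder. The following is a simplified NumPy illustration (flattened spatial positions, no batch or channel-reduction conventions; all names are hypothetical), not the authors' implementation.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, w_psi):
    """Additive attention gate in the style of Attention U-Net (sketch).

    x : (P, C_x) skip-connection features, one row per spatial position.
    g : (P, C_g) gating features from the coarser decoder stage, assumed
        already resampled to the skip resolution.
    W_x, W_g, w_psi : learned projections (1x1 convolutions written here
        as matrix multiplies over flattened positions).
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)        # additive attention + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(q @ w_psi)))    # sigmoid -> coefficients in (0, 1)
    return x * alpha[:, None], alpha              # down-weight irrelevant positions
```

Positions the gate deems irrelevant receive coefficients near zero, so the decoder effectively ignores most of the brain volume and concentrates on candidate lacune locations.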

Potential Clinical / Research Applications

The model provides a foundation for automated vascular health assessment in aging populations. In clinical practice, it could serve as a second-reader tool to assist radiologists in identifying subtle lacunes that might otherwise be overlooked during routine screenings for dementia or stroke risk. Because lacunes are linked to an increased risk of amyloid-related imaging abnormalities, this tool could be valuable in monitoring patients undergoing monoclonal antibody therapies for Alzheimer's disease. In a research context, the automated quantification of lacunar burden allows for large-scale longitudinal studies investigating the progression of small vessel disease and its correlation with cognitive impairment. Future refinements could include localizing lacunes within specific functional brain regions, such as the thalamus or basal ganglia, to predict specific neurological deficits. This would enable more personalized risk profiling and management strategies for patients at risk of vascular-related cognitive decline.

