Cape Town - 2026 ISMRM-ISMRT Annual Meeting and Exhibition • 09-14 May 2026
08:20
401-02-001. TumorCLIP: Lightweight Vision–Language Fusion for Explainable MRI-Based Brain Tumor Classification
Impact: This study demonstrates that lightweight vision-language fusion enhances MRI-based brain tumor classification, offering an interpretable and efficient diagnostic framework that improves subtype recognition and may facilitate broader clinical adoption of explainable AI in medical imaging.
08:31
401-02-002. A Unified Vision-Language Foundation Model for Multi-Task MRI Application
Impact: OmniMRI is a unified vision-language foundation model trained on large-scale heterogeneous MRI data, performing reconstruction, segmentation, detection, diagnosis, and report generation in one system to enhance automation, efficiency, and generalization across diverse protocols, anatomies, and tasks in the MRI workflow.
08:42
401-02-003. BrainMR Specialist: A Foundation Model of Brain MRI for Diverse Downstream Applications
Impact: Our brain-specialized foundation model provides a single, data-efficient, and scalable deep learning model for diverse clinical and research applications, reducing the need for task-specific models, especially when available data are limited.
08:53
401-02-004. BrainDFMAE: A Unified Foundation Model for Aging-Brain sMRI via Deformation-Aware Pretraining
Impact: BrainDFMAE establishes a unified foundation model for aging-brain sMRI through deformation-aware pretraining, significantly enhancing capabilities in precise brain mapping, early diagnosis, and longitudinal progression prediction. This provides a powerful tool for advancing personalized treatment strategies and improving patient outcomes.
09:04
401-02-005. Site effects persist in MRI foundation models: insights from BrainIAC embeddings
Impact: Foundation-model representations carry residual site information. Aiming for invariance at training time, with intensity or contrast augmentations and domain-aligned objectives, may strengthen generalization across scanners and acquisition settings, which is critical for real-world multi-centre deployments.
09:15
401-02-006. Foundation model for cardiac perfusion MRI enables 10-fold reduction in labeled dataset size for deep-learning analysis
Impact: We propose the largest-scale cardiac perfusion MRI foundation model trained on >600,000 unlabeled multi-center images, achieving state-of-the-art performance for automatic segmentation with over 10-fold fewer manual labels, reducing reliance on manual annotation, and providing a reusable model for other tasks.
09:26
401-02-007. KIMRA: K-space–Image Multimodal Representation Alignment for Comprehensive Cardiac Analysis
Impact: This work establishes a scalable foundation for cardiac screening by deriving comprehensive representations directly from undersampled k-space, enabling efficient cardiac function assessment and advancing cardiovascular research through the rich physiological information preserved in the raw acquisition domain.
09:37
401-02-008. A Vision-Language Foundation Model for Automated Segmentation of Cardiac Contours in Cine MRI
Impact: CardiVLSM bridges vision-language reasoning with foundation segmentation, enabling fully automated, prompt-free segmentation of cine MRI. Its strong generalizability across datasets indicates a path toward scalable, clinically deployable AI tools for cardiac function assessment without requiring manual intervention.
09:48
401-02-009. A Foundation Model-Driven Framework for Automated QA of Medical Imaging AI Solutions
Impact: The proposed template-matching framework enables scalable, automated quality assessment of AI-generated images, reducing manual review burden and supporting clinical deployment. It provides infrastructure for continuous monitoring and regulatory compliance, with broad applicability across imaging tasks requiring similarity-based validation.
09:59
401-02-010. Self-Supervised Representation Learning of Brain MRI Using DINOv3: From Anatomical Features to Cross-Domain Generalization
Impact: This study shows that self-supervised vision transformers can learn anatomical features from unlabeled MRI, enabling label-efficient and domain-robust medical analysis, highlighting their potential to bridge the gap between large-scale unlabeled imaging datasets and practical clinical applications.
© 2026 International Society for Magnetic Resonance in Medicine