Event Details:
Speakers:
Robert Holland, PhD: Postdoctoral Scholar, AIMI Center; Department of Radiology, Stanford University
Talk Title: MechSci: Accelerating Clinical Science through Mechanistic Interpretability of Medical Foundation Models
Abstract: Medical foundation models can identify prognostic signals in patient data at scale, yet their black-box nature prevents us from translating these learned signals into testable scientific hypotheses. In this talk, we will demonstrate how to transform these models into engines for scientific discovery. Our pipeline uses mechanistic interpretability, specifically Sparse Autoencoders (SAEs) combined with large language models (LLMs), to decompose abstract features from 3D CT scans and lab values into hundreds of understandable clinical concepts. We used it to generate over 1,200 hypotheses aimed at explaining progression to major disease outcomes, including cancer, dementia, and cardiovascular, kidney, and metabolic diseases. One example hypothesis indicated that an imaging feature scoring the level of "incidental, benign, or indeterminate lesions in solid organs (liver, kidneys, adrenal glands, pancreas, or spleen)" was more strongly associated with onset of primary cancer (OR 3.46, 95% CI 2.52–4.77) than clinical risk factors such as age (OR 1.29, 95% CI 1.00–1.67) and smoking (OR 1.46, 95% CI 1.03–2.07). Overall, this work demonstrates a scalable framework for turning any medical AI from a simple prediction tool into an engine for generating new clinical insights.
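For readers unfamiliar with sparse autoencoders, the core idea behind the SAE step can be sketched in a few lines. This is a minimal illustration under assumed choices (random weights, ReLU encoder, L1 sparsity penalty, hypothetical dimensions), not the speakers' implementation: the SAE maps a dense model activation into a much wider, mostly-zero feature vector, and those sparse features are the candidates that an LLM can later label as clinical concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: the talk does not specify model or SAE dimensions.
d_model, d_hidden = 64, 256

# Randomly initialized SAE parameters (illustrative only; a real SAE is trained).
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct it."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU keeps most features at exactly zero
    x_hat = f @ W_dec + b_dec               # linear decoder reconstructs the input
    return f, x_hat

def sae_loss(x, f, x_hat, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparse features."""
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.sum(np.abs(f))
    return recon + sparsity

x = rng.normal(size=d_model)  # stand-in for a foundation-model activation
f, x_hat = sae_forward(x)
print(f.shape)  # (256,)
```

Training minimizes `sae_loss` over many activations; afterwards, each of the `d_hidden` feature directions can be inspected individually, which is what makes the decomposition interpretable.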
Yunhe Gao, PhD: Postdoctoral Scholar, AIMI Center; Department of Radiology, Stanford University
Talk Title: Toward Universal Medical Imaging Understanding: Learning, Adapting, and Scaling
Abstract: Human radiologists develop a comprehensive understanding of medical imaging through training across diverse clinical contexts, enabling them to interpret CT, MRI, and PET scans across body regions and seamlessly transition between clinical tasks. Current AI models fundamentally lack this capability, remaining confined to narrow, task-specific domains due to disease-specific datasets, inability to adapt without retraining, and prohibitive annotation costs. In this talk, I present three works that progressively address these barriers toward universal 3D medical imaging understanding. Hermes enables a single model to learn from diverse, heterogeneous medical data across multiple modalities and body regions. Iris introduces efficient in-context learning, in which the model segments novel anatomical structures absent from its training data using only one reference example and no retraining, matching supervised performance while being orders of magnitude faster than existing adaptive methods. MASS reduces annotation dependence through mask-guided self-supervised learning, enabling training on vast unlabeled imaging data while preserving adaptation capability. Together, these works establish a framework for universal medical imaging AI that is scalable, generalizes across modalities and body regions, and reduces dependence on expert annotation, advancing toward AI systems whose medical imaging understanding is as broad as that of human radiologists.
Attendance is open to the Stanford and AIMI affiliate community. Please contact aimicenter@stanford.edu for the Zoom link if you would like to attend virtually.
Lunch with AIMI
1701 Page Mill Rd, AIMI Center (Main Lobby)
1701 Page Mill Rd
Palo Alto, CA 94304
United States