ABC Seminar: Prateek Prasanna, PhD – Stony Brook University – “Reality-aware Medical Vision: The Quest for Interpretable and Explainable AI in Medical Imaging”

Speaker: Prateek Prasanna, PhD
Affiliation: Stony Brook University
Position: Assistant Professor, Department of Biomedical Informatics
Host: Daniel Shao, Mahmood Lab

Date: Monday February 24th, 2025
Time: 4:00-5:00PM ET
Zoom: https://partners.zoom.us/j/82163676866
Meeting ID: 821 6367 6866

Abstract: Medical imaging plays a critical role in modern healthcare, yet its complexity necessitates advanced computational tools to enhance analysis, interpretability, and diagnostic accuracy. In this presentation, we will discuss our research efforts in developing computational imaging biomarkers and frameworks for precision medicine in real clinical scenarios involving imperfect data. We will cover a spectrum of computational techniques grounded in both biological and domain-specific insights that facilitate the early detection and evaluation of treatment responses across various diseases.

In radiology, we have developed methods to analyze intricate tissue structures using radiomics and topology-informed deep learning, demonstrating improved prediction of treatment response in multiple cancers. Additionally, we explore techniques that integrate expert eye gaze patterns and radiomic features to condition diffusion models, enabling disease-aware image synthesis and enhancing the generation of anatomically accurate and clinically relevant medical images. These approaches provide both diagnostic accuracy and interpretability, bridging the gap between AI models and clinical needs.

In digital pathology, our work focuses on overcoming challenges such as domain shifts, data scarcity, and limited interpretability. We employ adaptive meta-learning frameworks to generalize across unseen staining patterns, improving segmentation and classification of histopathology images. Generative models are utilized to synthesize high-resolution pathology images and restore critical details, leveraging pathology text reports and handcrafted features to enhance both quality and interpretability. Finally, we integrate expert pathological insights with deep learning to improve interpretability in whole-slide image analysis, demonstrating the potential to provide meaningful explanations alongside robust performance.

These advancements highlight the transformative potential of explainable and generative AI in radiology and digital pathology, paving the way for innovative, clinically relevant solutions in medical imaging.

Research Links: Publications, IMAGINE Lab

Prateek Prasanna is an Assistant Professor in the Department of Biomedical Informatics at Stony Brook University, where he directs the Imaging Informatics for Precision Medicine (IMAGINE) Lab. He received his PhD in Biomedical Engineering from Case Western Reserve University, Ohio, USA. Prior to that, he obtained his master's degree in Electrical and Computer Engineering from Rutgers University and his bachelor's degree in Electrical and Electronics Engineering from the National Institute of Technology, Calicut, India. Dr. Prasanna's research focuses on building clinically translatable machine learning tools that leverage multiple data streams of imaging, pathology, and genomics to derive actionable insights for enabling better treatment decisions. His work on companion diagnostic tools for thoracic, neuro, and breast imaging applications has been published in venues such as MICCAI, CVPR, ECCV, NeurIPS, Radiology, and Medical Image Analysis, and has won several innovation awards. A core focus of his lab is integrating machine-generated inferences with expert clinical reads to make clinical workflows more efficient and effective. His team has been actively advancing interpretable machine learning and explainable AI (XAI) techniques to facilitate the discovery of computational biomarkers, particularly in situations where data is limited or missing.

Click here to be added to our mailing list.

Special Seminar: Anshul Kundaje, PhD – “Using deep learning models to debug regulatory genomics experiments and decode cis-regulatory syntax”

BWH Computational Pathology Special Seminar

Title: Using deep learning models to debug regulatory genomics experiments and decode cis-regulatory syntax
Speaker: Anshul Kundaje, PhD
Affiliation: Stanford University
Position: Associate Professor, Genetics and Computer Science

Date: Monday April 22, 2024
Time: 4:00PM-5:00PM ET
Zoom: https://partners.zoom.us/j/82163676866
Meeting ID: 821 6367 6866

Anshul Kundaje, PhD, is Associate Professor of Genetics and Computer Science at Stanford University. His primary research area is large-scale computational regulatory genomics. The Kundaje lab develops deep learning models of gene regulation and model interpretation methods to decipher non-coding DNA and genetic variation associated with disease. Dr. Kundaje has led computational efforts to develop widely used resources in collaboration with several NIH consortia, including ENCODE, Roadmap Epigenomics, and IGVF. Dr. Kundaje is a recipient of the 2016 NIH Director's New Innovator Award and a 2014 Alfred P. Sloan Fellowship.

Links: The Encyclopedia of DNA Elements (ENCODE) Project, Stanford University, MIT

Gerber Lab awarded $3.1 Million Five Year NIH-NIGMS R35 Grant “Probabilistic deep learning models and integrated biological experiments for analyzing dynamic and heterogeneous microbiomes”

This work will leverage deep learning technologies to advance the microbiome field beyond finding associations in data, to accurately predicting the effects of perturbations on microbiota, elucidating mechanisms through which the microbiota affects the host, and improving bacteriotherapies to enable their success in the clinic. New deep learning models will be developed that address specific challenges for the microbiome, including noisy/small datasets, highly heterogeneous human microbiomes, the need for direct interpretability of model outputs, complex multi-modal datasets, and constraints imposed by biological principles. Computational models and biological experiments will be directly coupled through reinforcing cycles of predicting, testing predictions with new experiments, and improving models. An important objective will also be to make computational tools widely available to the research community through the release of quality open-source software.

RePORTER Link

Mahmood Lab’s CLAM method, A Deep-Learning-based Pipeline for Data-Efficient and Weakly Supervised Whole-Slide-level Analysis, published in Nature Biomedical Engineering

Deep-learning methods for computational pathology require either manual annotation of gigapixel whole-slide images (WSIs) or large datasets of WSIs with slide-level labels and typically suffer from poor domain adaptation and interpretability. Here we report an interpretable weakly supervised deep-learning method for data-efficient WSI processing and learning that only requires slide-level labels. The method, which we named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides and instance-level clustering over the identified representative regions to constrain and refine the feature space. By applying CLAM to the subtyping of renal cell carcinoma and non-small-cell lung cancer as well as the detection of lymph node metastasis, we show that it can be used to localize well-known morphological features on WSIs without the need for spatial labels, that it outperforms standard weakly supervised classification algorithms and that it is adaptable to independent test cohorts, smartphone microscopy and varying tissue content.
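The attention-based pooling at the heart of this family of methods can be sketched in a few lines: each patch embedding receives a learned attention weight, and the slide-level representation is the attention-weighted average of the patch embeddings. This is a minimal NumPy illustration of that idea, not code from the CLAM repository; the function name, matrix shapes, and randomly initialized toy parameters are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(H, V, w):
    """Attention pooling over instance (patch) embeddings.

    H : (N, d) patch embeddings for one slide
    V : (d, h) learned projection to a hidden space
    w : (h,)   learned attention vector

    Returns the slide-level embedding (d,) and the per-patch
    attention weights (N,), which sum to 1 and indicate which
    patches the model considers most diagnostic.
    """
    scores = np.tanh(H @ V) @ w           # (N,) unnormalized attention scores
    a = np.exp(scores - scores.max())     # numerically stable softmax
    a = a / a.sum()
    return a @ H, a                       # attention-weighted average of patches

# Toy example: 5 patch embeddings of dimension 8, hidden dimension 4.
N, d, h = 5, 8, 4
H = rng.normal(size=(N, d))
V = rng.normal(size=(d, h))
w = rng.normal(size=(h,))

slide_emb, attn = attention_mil_pool(H, V, w)
assert slide_emb.shape == (d,)
assert np.isclose(attn.sum(), 1.0)
```

The attention weights double as an interpretability signal: sorting patches by `attn` highlights the subregions the classifier relied on, which is how attention maps over WSIs are produced without any spatial labels.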

Lu, M.Y., Williamson, D.F.K., Chen, T.Y. et al. Data-efficient and weakly supervised computational pathology on whole-slide images. Nat Biomed Eng 5, 555–570 (2021). https://doi.org/10.1038/s41551-020-00682-w