FAIMI Keynote

Prof. Tal Arbel

Tal Arbel is a Professor in the Department of Electrical and Computer Engineering at McGill University, where she directs the Probabilistic Vision Group and Medical Imaging Lab in the Centre for Intelligent Machines. She is a Canada CIFAR AI Chair and Core Member of Mila - Quebec Artificial Intelligence Institute, a Fellow of the Canadian Academy of Engineering, and an Associate Member of the Goodman Cancer Research Centre. Prof. Arbel’s research focuses on the development of probabilistic deep learning methods in computer vision and medical image analysis for a wide range of real-world applications, with an emphasis on neurological diseases. She is a recipient of the 2019 McGill Engineering Christophe Pierre Research Award. She regularly serves on the organizing teams of major international conferences in computer vision and medical image analysis (e.g., MICCAI, MIDL, ICCV, CVPR). She is the co-founder and Editor-in-Chief of the arXiv overlay journal Machine Learning for Biomedical Imaging (MELBA).

Towards Equitable Image-Based Personalized Medicine: Causality, Confidence, and Bias Mitigation

In current clinical practice, treatment decisions are typically guided by broad demographic factors and standardized clinical markers—factors that may not fully capture the unique characteristics of individual patients. In this talk, I will explore how causal inference for personalized medicine, informed by medical images and patient demographics, can help make disease management more equitable by tailoring treatment recommendations to each patient’s unique profile. I will show how uncertainty-aware causal models can communicate confidence in deep learning predictions, improving the accuracy, fairness, and safety of image-based personalized medicine. The work is grounded in large-scale MRI data from randomized controlled trials for neurological disease treatments, ensuring clinical relevance and robustness. I will also highlight the transformative potential of Vision-Language Foundation Models to provide patient-specific, interpretable explanations for model predictions—tools that can expose and mitigate hidden biases arising from spurious correlations. Finally, I will present recent strategies for exposing and correcting calibration biases across population subgroups in multimodal large language models (LLMs), a crucial step toward achieving equitable outcomes.

GISTeR Keynote

Prof. Hyang Woon Lee

Hyang Woon Lee is a distinguished neurologist and clinical neuroscientist specializing in chronic brain conditions such as epilepsy, sleep disorders, and cognitive disorders. As Director of the Department of Neurology at Ewha Womans University School of Medicine, she leads research on multimodal brain imaging and EEG-based diagnostic methods for understanding brain function, on neuromodulation therapies, and on AI-driven personalized medicine, with a special interest in sex- and gender-specific brain science.

Advancing Fairness: Sex- and Gender-Aware AI for Equitable Neuroimaging

Artificial intelligence (AI) has rapidly advanced neuroimaging research, offering unprecedented precision in diagnosis, prognosis, and clinical decision-making. However, the overwhelming focus on algorithmic accuracy has often overlooked critical dimensions of fairness, particularly those related to sex and gender differences in brain structure, function, and disease manifestation. This talk explores how sex- and gender-aware design can enhance equity in AI-driven neuroimaging research, offers strategies for inclusive data and model transparency, and suggests ways to integrate ethical, clinical, and population-specific considerations beyond technical performance. We will also discuss emerging evidence of sex and gender disparities in neuroimaging datasets and AI model outcomes, highlighting the risks of algorithmic bias that may exacerbate health inequities. Drawing on interdisciplinary research, the lecture introduces frameworks and best practices for embedding fairness and sex-gender specificity into the full neuroimaging AI pipeline, from data acquisition and annotation to model development, evaluation, and deployment. Special attention will be paid to inclusive data representation, transparent model reporting, and the need for regulatory and community engagement. By re-centering fairness as a core design principle, we aim to build neuroimaging AI systems that are not only accurate but also equitable, explainable, and truly beneficial to diverse populations. In short, we redefine neuroimaging intelligence by integrating fairness and sex- and gender-awareness into every step of the AI pipeline.

Schedule

Time Speaker and Title
13:30 - 13:35 Welcome
13:35 - 14:20 Keynote speaker: Prof. Tal Arbel - Towards Equitable Image-Based Personalized Medicine: Causality, Confidence, and Bias Mitigation
14:20 - 14:35 Hartmut Häntze, et al. Sex-based Bias Inherent in the Dice Similarity Coefficient: A Model Independent Analysis for Multiple Anatomical Structures
14:35 - 14:50 Utku Ozbulak, et al. Revisiting the Evaluation Bias Introduced by Frame Sampling Strategies in Surgical Video Segmentation
14:50 - 15:05 Artur Paulo, et al. Assessing Annotator and Clinician Biases in an Open-Source Tool Used to Generate Head CT Segmentations for Deep Learning Training
15:05 - 15:30 Poster session
15:30 - 16:00 Coffee Break + Poster session
16:00 - 16:15 Emma A.M. Stanley, et al. Exploring the interplay of label bias with subgroup size and separability: A case study in mammographic density classification
16:15 - 16:30 Gelei Xu, et al. Fair Dermatological Disease Diagnosis through Auto-weighted Federated Learning and Performance-aware Personalization
16:30 - 16:45 Louisa Fay, et al. MIMM-X: Disentangling Spurious Correlations for Medical Image Analysis
16:45 - 17:30 Keynote speaker: Prof. Hyang Woon Lee - Advancing Fairness: Sex- and Gender-Aware AI for Equitable Neuroimaging
17:30 - 17:55 Round table
17:55 - 18:00 Closing and Awards

Accepted Papers

Abubakr Shafique, et al.: How Fair Are Foundation Models? Exploring the Role of Covariate Bias in Histopathology

Akshit Achara, et al.: Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer’s Disease Classification

Artur Paulo, et al.: Assessing Annotator and Clinician Biases in an Open-Source-Based Tool Used to Generate Head CT Segmentations for Deep Learning Training

Chin-Wei Huang, et al.: LTCXNet: Tackling Long-Tailed Multi-Label Classification and Racial Bias in Chest X-Ray Analysis

Dishantkumar Sutariya, et al.: meval: A Statistical Toolbox for Fine-Grained Model Performance Analysis

Emma Stanley, et al.: Exploring the interplay of label bias with subgroup size and separability: A case study in mammographic density classification

Gelei Xu, et al.: Fair Dermatological Disease Diagnosis through Auto-weighted Federated Learning and Performance-aware Personalization

Grzegorz Skorupko, et al.: Fairness-Aware Data Augmentation for Cardiac MRI using Text-Conditioned Diffusion Models

Hartmut Häntze, et al.: Sex-based Bias Inherent in the Dice Similarity Coefficient: A Model Independent Analysis for Multiple Anatomical Structures

Heejae Lee, et al.: Identifying Gender-Specific Visual Bias Signals in Skin Lesion Classification

Joris Fournel, et al.: The Cervix in Context: Bias Assessment in Preterm Birth Prediction

Leonor Fernandes, et al.: Disentanglement and Assessment of Shortcuts in Ophthalmological Retinal Imaging Exams

Louisa Fay, et al.: MIMM-X: Disentangling Spurious Correlations for Medical Image Analysis

Partha Shah, et al.: The Impact of Skin Tone Label Granularity on the Performance and Fairness of AI-Based Dermatology Image Classification Models

Rajat Rasal, et al.: Causal Representation Learning with Observational Grouping for CXR Classification

Regitze Sydendal, et al.: Robustness and sex differences in skin cancer detection: logistic regression vs CNNs

Shengjia Chen, et al.: Predicting Patient Self-reported Race From Skin Histological Images with Deep Learning

Théo Sourget, et al.: Fairness and Robustness of CLIP-Based Models for Chest X-rays

Tiarna Lee, et al.: Does a Rising Tide Lift All Boats? Bias Mitigation for AI-based CMR Segmentation

Utku Ozbulak, et al.: Revisiting the Evaluation Bias Introduced by Frame Sampling Strategies in Surgical Video Segmentation Using SAM2

You-Qi Chang-Liao, et al.: ShortCXR: Benchmarking Self-Supervised Learning Methods for Shortcut Mitigation in Chest X-Ray Interpretation

Proceedings

You can access the proceedings of our FAIMI 2025 workshop at MICCAI here.

MICCAI FAIMI 2025 Awards

Congratulations to Utku Ozbulak from Ghent University Global Campus and Louisa Fay from the University Hospital of Tübingen, winners of the Best Oral Presentation Award. Congratulations to Partha Shah from King’s College London, You-Qi Chang-Liao from National Tsing Hua University, Dishantkumar Sutariya from the Fraunhofer Institute for Digital Medicine MEVIS, Théo Sourget from Université Paris-Saclay, Heejae Lee from Yonsei University Wonju College of Medicine, and Abubakr Shafique from Toronto Metropolitan University, winners of the Best Poster Presentation Award.

FAIMI Horizon Award

The FAIMI Horizon Award provides financial support to, and highlights, one talented early-career researcher attending the conference for the first time, with a special focus on individuals from diverse and underserved backgrounds.

The award offers a full scholarship to attend the FAIMI Workshop, creating a valuable opportunity to present research, engage with the global community, and build lasting professional networks.

Eligibility Criteria:

To be eligible for the FAIMI Horizon Award, the first author of the submitted paper must meet all of the following:

Please note: The MICCAI Registration and Travel Grants are intended to support in-person participation in the conference. If a selected grantee is unable to attend in person, the award will be offered to the next eligible candidate.

Call for Papers

We invite the submission of papers for

FAIMI: The MICCAI 2025 Workshop on Fairness of AI in Medical Imaging.

Over the past several years, research on fairness, equity, and accountability in machine learning has extensively demonstrated the ethical risks of deploying machine learning systems in critical domains such as medical imaging. The FAIMI workshop aims to encourage and emphasize research on, and discussion of, the fairness of AI within the medical imaging domain. We therefore invite the submission of papers, which will be selected for oral or poster presentation at the workshop. Topics include, but are not limited to:

The workshop proceedings will be published in the MICCAI workshop volumes of the Springer Lecture Notes in Computer Science (LNCS) series. Papers should be anonymized and at most 8 pages long, plus at most 2 extra pages of references, in the LNCS format. The review process is double-blind, following MICCAI standards. Submissions are made through CMT.

Following the MICCAI paper submission guidelines, the submission of additional supplementary material is possible.

NEW: Supplementary materials are limited to multimedia content (e.g., videos) as warranted by the technical application (e.g., robotics, surgery, ...). These files must not contain any proofs, analyses, or additional results, and must not show any identification markers. Violation of this guideline will lead to desk rejection. PDF files may not be submitted as supplementary material in 2025 unless the authors are citing a paper that has not yet been published; in that case, authors are required to submit an anonymized version of the cited paper.

All supplementary material must be self-contained and zipped into a single file. Only the following formats are allowed: avi, mp4, wmv. We encourage authors to submit videos using an MP4 codec such as DivX contained in an AVI. A README text file must be included with each video specifying the exact codec used and a URL where the codec can be downloaded.

While the reviewers will have access to such supplementary material, they are under no obligation to review it, and the paper itself must contain all necessary information and illustrations for review purposes.

Dates

All dates are Anywhere on Earth.

Full Paper Deadline: June 30, 2025 (extended from June 25, 2025)

Notification of Acceptance: July 16, 2025

Camera-ready Version: July 30, 2025

Workshop: September 23, 2025

Sponsors

We are thrilled to announce that a prestigious award will be presented at the upcoming FAIMI workshop, made possible by the generous sponsorship of GISTeR. We encourage all attendees to take this opportunity to learn more about GISTeR’s work.

Organizers

Aasa Feragen, DTU Compute, Technical University of Denmark
Andrew King, King’s College London
Ben Glocker, Imperial College London
Enzo Ferrante, CONICET, Universidad Nacional del Litoral
Eike Petersen, Fraunhofer Institute for Digital Medicine MEVIS
Esther Puyol-Antón, HeartFlow and King’s College London
Melanie Ganz-Benjaminsen, University of Copenhagen & Neurobiology Research Unit, Rigshospitalet
Veronika Cheplygina, IT University of Copenhagen
Heisook Lee, President of GISTeR

Contact

Please direct any inquiries related to the workshop or this website to faimi-organizers@googlegroups.com.