This repository contains the official implementation of CLMU-Net. Our paper, "Towards Modality-Agnostic Continual Domain-Incremental Brain Lesion Segmentation," studies continual brain lesion segmentation across shifting MRI domains and modality availability, using modality-flexible inputs, text-guided domain conditioning, and a lesion-aware replay buffer to mitigate forgetting.
Yousef Sadegheih, Dorit Merhof, and Pratibha Kumari
- Abstract
- Updates
- Key Contributions
- Method
- Datasets, Pre-trained Weights
- Results
- Getting Started
- Acknowledgments
- Citation
Brain lesion segmentation from multi-modal MRI often assumes fixed modality sets or predefined pathologies, making existing models difficult to adapt across cohorts and imaging protocols. Continual learning (CL) offers a natural solution but current approaches either impose a maximum modality configuration or suffer from severe forgetting in buffer-free settings. We introduce CLMU-Net, a replay-based CL framework for 3D brain lesion segmentation that supports arbitrary and variable modality combinations without requiring prior knowledge of the maximum set. A conceptually simple yet effective channel-inflation strategy maps any modality subset into a unified multi-channel representation, enabling a single model to operate across diverse datasets. To enrich inherently local 3D patch features, we incorporate lightweight domain-conditioned textual embeddings that provide global modality-disease context for each training case. Forgetting is further reduced through principled replay using a compact buffer composed of both prototypical and challenging samples. Experiments on five heterogeneous MRI brain datasets demonstrate that CLMU-Net consistently outperforms popular CL baselines. Notably, our method yields an average Dice score improvement of
- 😎 First release – January 20, 2026
- Modality-Agnostic Continual Segmentation: Enables domain-incremental brain lesion segmentation under changing and incomplete MRI modality sets via dynamic channel inflation and random modality drop.
- Text-Guided Domain Conditioning: Injects domain/modality context into the U-Net bottleneck using textual guidance with cross-attention for more robust adaptation.
- Lesion-Aware Replay for Forgetting Mitigation: Uses a lesion-aware replay buffer (representative + hard samples) to reduce catastrophic forgetting with small memory budgets.
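As a rough illustration of the buffer-composition idea above (not the repository's actual selection code), the sketch below fills a fixed memory budget with a mix of prototypical samples (closest to the mean feature embedding) and hard samples (highest training loss). The function name, the embedding features, and the 50/50 split are all illustrative assumptions.

```python
import numpy as np

def build_replay_buffer(embeddings, losses, budget, hard_fraction=0.5):
    """Illustrative sketch: select a mix of challenging samples
    (highest loss) and prototypical samples (closest to the mean
    embedding) under a fixed memory budget."""
    n_hard = int(budget * hard_fraction)
    # Hard samples: largest training loss first.
    hard_idx = list(np.argsort(losses)[::-1][:n_hard])
    # Prototypical samples: smallest distance to the episode mean,
    # skipping indices already chosen as hard samples.
    dists = np.linalg.norm(embeddings - embeddings.mean(axis=0), axis=1)
    proto_idx = [i for i in np.argsort(dists) if i not in set(hard_idx)]
    proto_idx = proto_idx[: budget - n_hard]
    return sorted(hard_idx + proto_idx)

rng = np.random.default_rng(0)
emb = rng.normal(size=(20, 4))   # one feature vector per stored case
loss = rng.uniform(size=20)      # per-sample training loss
buffer = build_replay_buffer(emb, loss, budget=6)
print(len(buffer))               # 6 unique sample indices
```

Keeping both ends of the difficulty spectrum is what distinguishes this from plain reservoir sampling: prototypes preserve the typical appearance of each domain, while hard cases anchor the decision boundary.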
CLMU-Net is a replay-based continual learning 3D U-Net for brain lesion segmentation across sequential MRI cohorts with variable modality availability. It combines (1) dynamic channel inflation + Random Modality Drop for modality-flexible inputs, (2) Domain-Conditioned Textual Guidance (BioBERT prompt embeddings) injected at the bottleneck via cross-attention, and (3) a lesion-aware replay buffer that keeps both representative and challenging cases to reduce forgetting.
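The channel-inflation and Random Modality Drop ideas can be sketched as below. For simplicity this sketch fixes a modality superset up front, whereas the paper's dynamic inflation grows the channel set as new modalities appear; the function name and drop probability are likewise illustrative assumptions, not the repository's code.

```python
import numpy as np

# Assumed modality superset, fixed here only for illustration.
MODALITIES = ["T1", "T1ce", "T2", "FLAIR"]

def inflate_channels(volumes, p_drop=0.3, training=True, rng=None):
    """Map an arbitrary modality subset to a fixed multi-channel array:
    available modalities fill their channel, missing ones stay zero.
    During training, present modalities are randomly dropped (keeping
    at least one) so the model learns to cope with incomplete inputs."""
    rng = rng or np.random.default_rng()
    shape = next(iter(volumes.values())).shape
    x = np.zeros((len(MODALITIES), *shape), dtype=np.float32)
    kept = [m for m in volumes if not (training and rng.random() < p_drop)]
    if not kept:                      # never drop every modality
        kept = [next(iter(volumes))]
    for m in kept:
        x[MODALITIES.index(m)] = volumes[m]
    return x

# A case where only T1 and FLAIR were acquired:
vols = {"T1": np.random.randn(8, 16, 16),
        "FLAIR": np.random.randn(8, 16, 16)}
x = inflate_channels(vols, training=False)
print(x.shape)   # (4, 8, 16, 16); the T1ce and T2 channels are all zeros
```

Because every modality subset maps to the same tensor layout, a single network can be trained and evaluated across cohorts with different acquisition protocols.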
For a detailed explanation of each component, please refer to our paper.
Our experiments were conducted on the following datasets:
For preprocessing, we followed the BrainCL repository. For the text embeddings, we used the BioBERT PyTorch implementation. For convenience, the preprocessed data can be downloaded from here, and the BioBERT embeddings can be downloaded here. Please place the embedding file in the preprocessed data folder.
You can download the pre-trained weights for different sequences below:
| Sequence 1 weights (CLMU-Net) | Sequence 2 weights (CLMU-Net) |
|---|---|
| Download | Download |
CLMU-Net consistently outperforms strong continual-learning baselines across five 3D brain MRI datasets, improving overall performance/stability and reducing forgetting, especially with very small replay buffers (e.g., 10 samples).
This section provides instructions on how to run CLMU-Net.
- Operating System: Ubuntu 22.04 or higher
- CUDA: Version 12.x
- Package Manager: Conda
- Hardware:
- GPU with 12GB memory or larger (recommended)
- For our experiments, we used a single GPU (H100-92G)
To install the required packages and set up the environment, simply run the following command:
```bash
conda create -n clmu python==3.12 -y
conda activate clmu
pip install -r requirements.txt
conda install -c nvidia cuda-nvrtc=12.4 -y
```

This will:
- Create a Conda environment named `clmu`
- Install all the necessary dependencies
For training and inference, you can use the provided shell scripts located in the script folder. These scripts are pre-configured for easy execution.
- Path Configuration: Before running the scripts, make sure to update the paths in the shell script files to reflect your setup.
- Metrics: There is a `metrics` folder containing Python scripts that compute, for each sequence: row-wise average values, overall average accuracy, average accuracy at the last episode, the average over the lower triangular part (with diagonal), and the Rodriguez/Gonzalez BWT and Rodriguez FWT metrics.
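Assuming the usual continual-learning accuracy matrix `R`, where `R[i, j]` is performance on episode `j` after training through episode `i`, these quantities can be computed roughly as below; the exact definitions in the `metrics` scripts may differ, and the BWT/FWT here follow one common formulation.

```python
import numpy as np

def cl_metrics(R):
    """Sketch of continual-learning metrics over an accuracy matrix R
    (T x T), with R[i, j] = score on episode j after training on i."""
    T = R.shape[0]
    avg_acc_last = R[-1].mean()               # avg accuracy at last episode
    avg_lower = R[np.tril_indices(T)].mean()  # lower triangular with diagonal
    pairs = T * (T - 1) / 2
    # Backward transfer: change on old episodes after later training.
    bwt = sum(R[i, j] - R[j, j] for i in range(T) for j in range(i)) / pairs
    # Forward transfer: performance on episodes not yet trained on.
    fwt = sum(R[i, j] for j in range(T) for i in range(j)) / pairs
    return avg_acc_last, avg_lower, bwt, fwt

# Toy 3-episode matrix (values are made up for illustration).
R = np.array([[0.80, 0.10, 0.05],
              [0.75, 0.82, 0.12],
              [0.70, 0.78, 0.85]])
acc, low, bwt, fwt = cl_metrics(R)
print(round(bwt, 4))   # a negative BWT indicates forgetting
```

Row-wise averages correspond to `R.mean(axis=1)` and the overall average accuracy to the mean over all evaluated entries.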
This repository builds on MultiUnet, BrainCL, Mammoth, and Avalanche. We thank the authors for making their code available.
If you find this work useful for your research, please cite:
@article{sadegheih2026clmunet,
title={Towards Modality-Agnostic Continual Domain-Incremental Brain Lesion Segmentation},
author={Sadegheih, Yousef and Merhof, Dorit and Kumari, Pratibha},
year={2026},
}
