MDiCo: Multi-modal Disentanglement for Co-learning with Earth Observation data

Public repository of our work on multi-modal co-learning for missing modalities with Earth observation (EO) data


[Figure: overview of the MDiCo framework in a multi-modal setting]

The figure above illustrates our MDiCo framework in a multi-modal setting. We focus on co-learning with multi-sensor Earth observation data, covering classification (binary, multi-class, and multi-label) and regression tasks. The objective is to obtain models that are robust in all-but-one missing-modality scenarios, i.e. multi-modal data is available for training but only single-modality data is available at inference.

Training

We provide example config files showing how to train our model with different settings.

  • To train a method based on the MDiCo framework on the CropHarvest multi dataset, run:
python train.py -s config/cropmulti_full.yaml

For other datasets, see the other config files in the config folder.
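
The exact schema of these YAML files is defined by the repository itself; the sketch below is only a hypothetical illustration of the kind of settings a training config for this setup typically bundles (every key shown is an assumption, not the actual contents of config/cropmulti_full.yaml):

# Hypothetical sketch of a training config -- the real keys are
# defined by the files in the config folder.
experiment:
  name: cropmulti_full          # run identifier (assumed key)
  task: multi-label             # binary | multi-class | multi-label | regression
data:
  dataset: cropharvest-multi    # dataset to train on (assumed key)
  modalities: [optical, radar, weather]   # sensors available during training
training:
  epochs: 100                   # illustrative values only
  batch_size: 128
  lr: 1.0e-3
output:
  folder: ./results             # where checkpoints and logs would be stored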

Note

Read about the data used in the data folder.

Evaluation

[Figure: missing data]

  • To evaluate the predictive performance, run:
python evaluate.py -s config/eval.yaml

All details on folder paths and configurations are inside the YAML files.
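
As with training, the evaluation config schema is repository-specific; the following is only a hedged sketch of what config/eval.yaml might configure, all keys hypothetical:

# Hypothetical sketch of an evaluation config -- consult the actual
# config/eval.yaml for the keys expected by evaluate.py.
model:
  checkpoint_folder: ./results            # trained model(s) to evaluate (assumed key)
evaluation:
  missing_modalities: [radar, weather]    # modalities dropped at inference, to test
                                          # the all-but-one missing-modality scenario
  metrics: [f1_macro, overall_accuracy]   # predictive-performance metrics (illustrative)
output:
  folder: ./results/evaluation            # where metric tables would be written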


🖊️ Citation

...
