This is the official repository for Thalia, a global dataset for volcanic activity monitoring through InSAR imagery. Thalia builds upon Hephaestus (Bountos et al., 2022) and offers several key advantages:
- Machine Learning-ready state
- Georeferenced InSAR imagery (vs. PNG format)
- Enhanced spatial resolution at 100m GSD (vs. 333m GSD)
- Physically interpretable pixel values (vs. RGB composites)
- Zarr file format with spatial and temporal dimensions
- Additional data critical for volcanic monitoring: a Digital Elevation Model (DEM) and confounding atmospheric variables
Annotations offer rich insights into the deformation type (sill, dyke, mogi, spheroid, earthquake), the intensity level (low, medium, high), the presence of non-volcanic fringes (atmospheric, orbital, glacier), as well as the volcanic activity phase (rest, unrest, rebound). Each sample also contains a text description of the observed phenomena, ideal for language-based machine learning modelling.
You can explore a sample minicube from the dataset and investigate its structure and available annotation variables using the interactive Google Colab notebook below:
There are two ways to access the dataset:
- Option 1: Download the full dataset and generate your own webdatasets
- Option 2: Use pre-generated splits via Hugging Face
Download the dataset:
- The latest version is available here: Thalia_v0
Export webdatasets:
- Set the `webdataset` parameter to `true` in the configuration file.
- If a webdataset for the timeseries length specified in the configuration file does not exist, running `main.py` will automatically enter webdataset creation mode.
- Execute the command three times, once for each of the train, validation, and test splits, to generate the full dataset.
- After the process completes, run the renaming script: `./webdataset_renaming.sh`
  Warning: you may need to edit the bash script to set the correct base directory and timeseries length before running it.
The webdatasets used in the paper and for benchmarking are available via Hugging Face:
https://huggingface.co/datasets/orion-ai-lab/Thalia
The code in this repository implements an extensive benchmark on Thalia with a wide range of state-of-the-art Deep Learning models. The benchmark consists of two basic tasks, image classification and semantic segmentation, each with both single-image and time-series input. The models used in our experiments are listed below:
Classification:
- ResNet
- MobileNet v3
- EfficientNet v2
- ConvNeXt
- ViT
Segmentation:
- DeepLab v3
- UNet
- SegFormer
We consider a temporal split, using data from 01/2014-05/2019 for training, 06/2019-12/2019 for validation, and 01/2020-12/2021 for testing.
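Assuming each sample carries an acquisition date, the split assignment can be sketched as follows (the helper and its name are illustrative, not the repository's actual code):

```python
from datetime import date

# Split boundaries from the temporal split above (inclusive).
TRAIN_END = date(2019, 5, 31)   # training:   01/2014 - 05/2019
VAL_END = date(2019, 12, 31)    # validation: 06/2019 - 12/2019
TEST_END = date(2021, 12, 31)   # testing:    01/2020 - 12/2021

def assign_split(d: date) -> str:
    """Map a sample's acquisition date to its split (illustrative helper)."""
    if d <= TRAIN_END:
        return "train"
    if d <= VAL_END:
        return "val"
    if d <= TEST_END:
        return "test"
    raise ValueError(f"{d} falls outside the dataset's date range")

print(assign_split(date(2019, 7, 15)))  # val
```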
To train and evaluate a Deep Learning model, simply run the following (with optional flags):
python main.py
Flags:
- `--wandb`: sync results to a Weights & Biases (wandb) project specified in the configuration file
The model and backbone to use, as well as various training hyperparameters (e.g. batch size, number of epochs, learning rate, etc.) need to be configured in the configuration file (configs/configs.json).
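For orientation, a hypothetical `configs/configs.json` fragment might look like the following. Only the parameters named in this README are grounded; all key names and values are assumptions about the actual schema, so consult the shipped configuration file for the real keys:

```json
{
  "model": "resnet",
  "backbone": "resnet50",
  "batch_size": 16,
  "epochs": 50,
  "learning_rate": 1e-4,
  "timeseries_length": 4,
  "webdataset": false,
  "mask_target": "last",
  "wandb": false
}
```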
We provide a Jupyter notebook showing how to download and explore a minicube.
We group samples per `primary_date` and gather multiple `secondary_dates`: p, [s1, ..., sn].
- If len([s1, ..., sn]) > `timeseries_length`, we choose a random subset every epoch.
- If len([s1, ..., sn]) < `timeseries_length`, we randomly duplicate samples every epoch so that we reach the desired length.
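A minimal sketch of this per-epoch sampling (function name and signature are illustrative, not the repository's actual dataset code):

```python
import random

def sample_secondaries(secondaries, timeseries_length, rng=random):
    """Draw a fixed-length set of secondary acquisitions for one epoch.

    More secondaries than needed -> random subset; fewer -> keep all and
    pad with random duplicates (illustrative sketch of the scheme above).
    """
    if len(secondaries) >= timeseries_length:
        return rng.sample(secondaries, timeseries_length)
    padding = rng.choices(secondaries, k=timeseries_length - len(secondaries))
    return list(secondaries) + padding

print(len(sample_secondaries(["s1", "s2", "s3"], 5)))  # 5
```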
We export all available samples in the WebDataset format, but during training, we sample all positive examples and an equal number of negative examples per epoch.
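The balanced per-epoch sampling could look like this sketch (assuming binary labels; the function and its name are illustrative):

```python
import random

def epoch_indices(labels, rng=random):
    """One epoch's sample indices: every positive example plus an equally
    sized random draw of negatives (sketch; assumes binary labels)."""
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    chosen = pos + rng.sample(neg, min(len(pos), len(neg)))
    rng.shuffle(chosen)  # mix positives and negatives
    return chosen

print(len(epoch_indices([1, 0, 0, 0, 1, 0])))  # 4
```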
We apply a 512x512 crop with a random offset from the frame center, ensuring that the deformation is included in the crop if present.
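A simplified version of such a crop might look like the following. It samples uniformly over window positions that keep the deformation inside, rather than reproducing the repository's exact center-offset logic, and assumes the deformation fits within a single crop:

```python
import numpy as np

def random_crop_with_mask(frame, mask, size=512, rng=None):
    """Crop a size x size window at a random offset, constrained so the
    deformation mask (if any) stays fully inside the window.

    Illustrative sketch only; assumes the mask extent fits in one crop.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = mask.shape
    if mask.any():
        ys, xs = np.nonzero(mask)
        # Valid top-left corners that still contain the whole mask.
        y_lo, y_hi = max(0, int(ys.max()) - size + 1), min(int(ys.min()), h - size)
        x_lo, x_hi = max(0, int(xs.max()) - size + 1), min(int(xs.min()), w - size)
    else:
        y_lo, y_hi, x_lo, x_hi = 0, h - size, 0, w - size
    y0 = int(rng.integers(y_lo, y_hi + 1))
    x0 = int(rng.integers(x_lo, x_hi + 1))
    return (frame[..., y0:y0 + size, x0:x0 + size],
            mask[y0:y0 + size, x0:x0 + size])
```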
There are four main options for creating the target mask (set via `mask_target` in the configuration file):
- Last: use the mask of the last sample in the time-series
- Peak: use a sum over all masks in the time-series
- Union: use a union of all masks in the time-series
- All: return all masks in the time-series (only used for debugging)
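These options can be sketched as follows (illustrative, assuming binary masks stacked as (T, H, W)):

```python
import numpy as np

def build_target(masks, mask_target="last"):
    """Collapse a (T, H, W) stack of binary masks into a training target,
    following the four options above (illustrative sketch)."""
    masks = np.asarray(masks)
    if mask_target == "last":
        return masks[-1]
    if mask_target == "peak":
        return masks.sum(axis=0)   # per-pixel count across the series
    if mask_target == "union":
        return masks.any(axis=0)   # pixel deformed at any time step
    if mask_target == "all":
        return masks               # full stack, debugging only
    raise ValueError(f"unknown mask_target: {mask_target}")
```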