MF-AV-Net Overview

In this project, we present MF-AV-Net, a fully convolutional network (FCN) with multimodal fusion options, and validate it through a comparative assessment of different optical coherence tomography (OCT) and OCT-angiography (OCTA) fusion strategies for improving artery-vein (AV) segmentation performance in OCTA.

This updated repository now features a more modular and flexible training script (main.py) that accepts hyperparameters via command-line arguments, allowing for easier experimentation and deployment. The default loss function for training is now the IoU (Jaccard) loss.
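
For reference, a soft IoU (Jaccard) loss for binary segmentation can be sketched in Keras/TensorFlow as below; this is a generic illustration, and the exact formulation in main.py (e.g., its smoothing constant or handling of the AV classes) may differ.

import tensorflow as tf

def iou_loss(y_true, y_pred, smooth=1.0):
    # Soft IoU (Jaccard) loss: 1 - intersection / union, with a small
    # smoothing term to avoid division by zero on empty masks.
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

Such a loss would be passed to model.compile(loss=iou_loss, ...) alongside the Adam optimizer configured by --learning_rate.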

Dependencies

Ensure you have the following Python packages installed:

tensorflow (compatible with Keras 2.x, typically tensorflow>=2.x)

keras (typically keras>=2.x)

python (>=3.7.1)

numpy

pandas

scikit-image (skimage)

matplotlib

opencv-python (cv2)

You can install them using pip:

pip install tensorflow keras numpy pandas scikit-image matplotlib opencv-python

Usage

To train the AV-Net model, follow these steps:

  1. Prepare Dataset CSVs

First, generate the train.csv and test.csv files using the csv_generator.py script. This script scans your dataset directory to create the necessary image lists. Make sure your dataset directory structure is set up as expected (e.g., dataset/oct, dataset/octa, dataset/gt).

python csv_generator.py
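
The snippet below is a hypothetical sanity check (not part of the repository) that only confirms the three expected folders exist and counts their files before you run csv_generator.py:

import os

base = "dataset"  # matches the default --dataset_base_path
for sub in ("oct", "octa", "gt"):
    folder = os.path.join(base, sub)
    if not os.path.isdir(folder):
        raise FileNotFoundError(f"Expected folder missing: {folder}")
    print(f"{folder}: {len(os.listdir(folder))} files")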

  2. Run Training

Execute the main.py script to start the training process. You can customize various hyperparameters and paths using command-line arguments:

python main.py \
    --learning_rate 0.0001 \
    --epochs 50 \
    --batch_size 32 \
    --image_height 320 \
    --image_width 320 \
    --n_channels 3 \
    --dataset_base_path "dataset" \
    --results_dir "AVNet_Training_Results" \
    --train_csv "train.csv" \
    --train_split_ratio 0.8

Command-line Arguments:

--learning_rate (float, default: 0.0001): Learning rate for the Adam optimizer.

--epochs (int, default: 10): Number of epochs to train the model.

--batch_size (int, default: 16): Batch size for training.

--image_height (int, default: 320): Height of the input images.

--image_width (int, default: 320): Width of the input images.

--n_channels (int, default: 3): Number of input channels for the model (e.g., 3 for combined OCT and OCTA data).

--dataset_base_path (str, default: "dataset"): Base path to your dataset directory (e.g., where oct, octa, gt folders reside).

--results_dir (str, default: "Results"): Directory to save training results and model checkpoints.

--train_csv (str, default: "train.csv"): Path to the CSV file containing training image names, generated by csv_generator.py.

--train_split_ratio (float, default: 0.8): Fraction of the dataset to use for training (e.g., 0.8 for 80% train, 20% validation).
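
For orientation, the argparse sketch below mirrors the argument names and defaults listed above; it illustrates the interface only, and the actual parsing code in main.py may be organized differently.

import argparse

def build_parser():
    # Argument names and defaults mirror the list above.
    p = argparse.ArgumentParser(description="Train MF-AV-Net")
    p.add_argument("--learning_rate", type=float, default=0.0001)
    p.add_argument("--epochs", type=int, default=10)
    p.add_argument("--batch_size", type=int, default=16)
    p.add_argument("--image_height", type=int, default=320)
    p.add_argument("--image_width", type=int, default=320)
    p.add_argument("--n_channels", type=int, default=3)
    p.add_argument("--dataset_base_path", type=str, default="dataset")
    p.add_argument("--results_dir", type=str, default="Results")
    p.add_argument("--train_csv", type=str, default="train.csv")
    p.add_argument("--train_split_ratio", type=float, default=0.8)
    return p

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)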

Citations

Mansour Abtahi, David Le, Jennifer I. Lim, and Xincheng Yao, "MF-AV-Net: an open-source deep learning network with multimodal fusion options for artery-vein segmentation in OCT angiography," Biomed. Opt. Express 13, 4870-4888 (2022) https://doi.org/10.1364/BOE.468483
