If you would like to continue training after a crash, call the `setAllowResume` method before calling `fit`:
```python
cfg = segmentation.parse("./people-1.yaml")
cfg.setAllowResume(True)
ds = SimplePNGMaskDataSet("D:/pics/train", "D:/pics/train_mask")
cfg.fit(ds)
```

One way to reduce memory usage is to lower the augmentation queue limit, which is 50 by default, as in the following example:
```python
segmentation_pipeline.impl.datasets.AUGMENTER_QUEUE_LIMIT = 3
```

How can I run a separate set of augmenters on the initial image/mask when replacing backgrounds with the Background Augmenter?
```yaml
BackgroundReplacer:
  rate: 0.5
  path: D:/bg
  augmenters: # these augmenters will run on the original image before replacing the background
    Affine:
      scale: [0.8, 1.5]
      translate_percent:
        x: [-0.2, 0.2]
        y: [-0.2, 0.2]
      rotate: [-16, 16]
      shear: [-16, 16]
  erosion: [0, 5]
```

You should set `showDataExamples` to `True`, as in the following sample:
```python
cfg = segmentation.parse("./no_erosion_aug_on_masks/people-1.yaml")
cfg.showDataExamples = True
```

This will lead to generation of training image samples, which are stored in the `examples` folder at the end of each epoch.
What can I do if I have some extra training data that should not be included in validation, but should be used during training?
```python
extra_data = NotzeroSimplePNGMaskDataSet("D:/phaces/all", "D:/phaces/masks") # my dataset that should be added to training
segmentation.extra_train["people"] = extra_data
```

and in the config file:

```yaml
extra_train_data: people
```

The following code sample will return primary metric stats over folds/stages:
```python
cfg = segmentation.parse("./no_erosion_aug_on_masks/people-1.yaml")
metrics = cfg.info()
```
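The exact structure returned by `info()` may vary between library versions. If you want to compute similar fold statistics for a primary metric yourself, a minimal numpy sketch with hypothetical per-fold scores (the values below are illustrative, not from the library):

```python
import numpy as np

# Hypothetical per-fold dice scores collected from training logs
fold_scores = np.array([0.91, 0.89, 0.93, 0.90, 0.92])

# Mean and standard deviation across folds
mean, std = fold_scores.mean(), fold_scores.std()
print(f"dice over folds: {mean:.3f} +/- {std:.3f}")
```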
I have some callbacks that are configured globally, but I need some extra callbacks for my last training stage. What can I do?
There are two possible ways to configure callbacks on the stage level:
- override all global callbacks with the `callbacks` setting;
- add your own custom callbacks with the `extra_callbacks` setting.

In the following sample, the `CyclicLR` callback is appended only to the second stage of training:
```yaml
loss: binary_crossentropy
stages:
  - epochs: 20
    negatives: real
  - epochs: 200
    extra_callbacks:
      CyclicLR:
        base_lr: 0.000001
        max_lr: 0.0001
        mode: triangular
        step_size: 800
    negatives: real
```

One option is to store predictions for each file and model in a numpy array, and then sum these predictions, as in the following sample:
```python
cfg.predict_to_directory("D:/pics/test", "D:/pics/arr1", [0, 1, 4, 2], 1, ttflips=True, binaryArray=True)
cfg.predict_to_directory("D:/pics/test", "D:/pics/arr", [0, 1, 4, 2], 2, ttflips=True, binaryArray=True)
segmentation.ansemblePredictions("D:/pics/test", ["D:/pics/arr/", "D:/pics/arr1/"], onPredict, d)
```

To use multiple GPUs, set:

```python
cfg.gpus = 4 # or another number matching the count of GPUs that you have
```
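Conceptually, the `ansemblePredictions` step above amounts to averaging the per-model probability maps and thresholding the result. A minimal numpy sketch of that idea (not the library's actual implementation), using two small hypothetical prediction arrays:

```python
import numpy as np

# Two hypothetical per-pixel probability maps from different models
pred1 = np.array([[0.9, 0.2], [0.4, 0.8]])
pred2 = np.array([[0.7, 0.4], [0.2, 0.6]])

# Average the predictions, then threshold to get the final binary mask
ensemble = (pred1 + pred2) / 2
mask = ensemble > 0.5
print(mask)
```

Averaging tends to cancel out uncorrelated per-model errors, which is why ensembling predictions from differently trained folds usually improves the final mask.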