I apologize for the inconvenience, but I have some questions about reproducing the results of your REFERS paper.
When I ran the command line with the provided pre-trained weight file, I obtained results different from Table 1 in the paper.
For example, on Shenzhen Tuberculosis I obtained 0.93 on the test set, whereas the paper reports 0.98 (a -0.05 performance gap).
Could you tell me where this difference comes from?
(I simply executed the following command as written in README.md:

```shell
python train.py --name caption_100 --stage train --model_type ViT-B_16 --num_classes 1 --pretrained_dir "../checkpoint/refers_checkpoint.pth" --output_dir "./output/" --data_volume '100' --num_steps 100 --eval_batch_size 512 --img_size 224 --learning_rate 3e-2 --warmup_steps 5 --fp16 --fp16_opt_level O2 --train_batch_size 128
```

There were also performance gaps on all the other fine-tuning datasets.)