Make SequentialDict return outputs from all modules as DotDict
#171
Conversation
This reverts commit 01a2066.
Changed the title from "SequentialDict return outputs from all modules" to "SequentialDict return outputs from all modules as DotDict".
mzweilin
left a comment
It looks like there are merged changes in `mart/models/modular.py`.
I will review again after you merge main into this.
Should be good to review.
mzweilin
left a comment
LGTM.
Shall we just delete `ReturnKwargs` in `mart.nn`?
Leave it for now because some modules still use it. Making everything tidier should be a different PR.
What does this PR do?
Right now `SequentialDict` only returns the `output` module. This requires hardcoding a module with that name and often makes configuration more verbose than necessary. This PR removes that requirement and simplifies configuration by returning all outputs in the form of a `DotDict`. This is a breaking change, and it also removes `ReturnKwargs` since it is unnecessary. A behavior sketch follows below.
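To make the change concrete, here is a minimal, hypothetical sketch of the described behavior. The names `ToySequentialDict` and this `DotDict` are illustrative stand-ins, not MART's actual implementation: instead of returning only the module named `output`, the container returns every module's output in a dot-accessible dict.

```python
# Illustrative sketch only; the real mart.nn.SequentialDict API may differ.
from collections import OrderedDict

import torch
from torch import nn


class DotDict(dict):
    """A dict whose entries can also be read as attributes (e.g. out.logits)."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc


class ToySequentialDict(nn.ModuleDict):
    """Run modules in insertion order, feeding each one the previous output."""

    def forward(self, x):
        outputs = DotDict()
        for name, module in self.items():
            x = module(x)
            outputs[name] = x
        # Old behavior: return only outputs["output"], which forces a module
        # literally named "output" to exist.
        # New behavior: return every module's output and let the caller pick.
        return outputs


model = ToySequentialDict(
    OrderedDict(backbone=nn.Linear(8, 4), logits=nn.Linear(4, 2))
)
out = model(torch.randn(1, 8))
print(out.logits.shape)  # torch.Size([1, 2]); out.backbone is also available
```

Returning everything also removes the need for a wrapper like `ReturnKwargs`, since callers can simply index the output they want by module name.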
This PR depends upon the following:

- `LitModular.*_step_log` to be a dictionary #168
- `LitModular` #169
- `*_step_end` from `LitModular` #170

Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- `pytest`
- `CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16` reports 70% (21 sec/epoch).
- `CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2` reports 70% (14 sec/epoch).

Before submitting
- `pre-commit run -a` command without errors

Did you have fun?
Make sure you had fun coding 🙃