Refactor module access to use PyTorch get/set_submodule API #1365
Conversation
xin3he
left a comment
LGTM
yiliu30
left a comment
LGTM, thanks!
Hi @scopophobic, you can check the log here: https://dev.azure.com/lpot-inc/neural-compressor/_build/results?buildId=51719&view=logs&j=44c25250-aab3-5e31-d6d7-8ba2147b1266&t=262f41be-8379-5409-f492-e6c716395db9
I'm now seeing test failures that appear to be related to this change.

Root cause

The new fail-fast behavior breaks this code:

```python
for key in model._tied_weights_keys:
    weight = get_attr(model, key)
    set_attr(model, key, copy.deepcopy(weight))
```

The issue: `model._tied_weights_keys` may contain keys that don't exist in all model configurations. The old helper returned `None` silently for missing keys.

Question: which approach should I take?

Option 1: Keep the silent `None` behavior.

Option 2: Improve the test case. Update it to:

```python
for key in model._tied_weights_keys:
    try:
        weight = get_attr(model, key)
        set_attr(model, key, copy.deepcopy(weight))
    except AttributeError:
        # Skip tied weights that don't exist in this model configuration
        pass
```

My recommendation: Option 2 (improve the test case), since fail-fast with clear errors is better for debugging and prevents silent bugs from propagating. What do you think?
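For context, a minimal sketch of the fail-fast behavior under discussion (using only the native `torch.nn.Module.get_submodule`; the repo's `get_attr`/`set_attr` helpers are not shown here). A missing dotted path raises `AttributeError`, which is exactly what the `try/except` above catches:

```python
import torch.nn as nn

# A toy model; Sequential children are named "0", "1", ...
model = nn.Sequential(nn.Linear(4, 4))

# The native API fails fast: a missing path raises AttributeError
# instead of silently returning None.
try:
    model.get_submodule("decoder")  # "decoder" is a hypothetical missing key
except AttributeError as exc:
    print("fail-fast:", type(exc).__name__)
```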
Discussed with @WeiweiZhang1; we'd better keep the saved config and find a workaround for this case.

Summary
Issue: #1362

This PR refactors module access and replacement to rely on PyTorch's native `get_submodule`/`set_submodule` APIs instead of maintaining custom traversal logic.

What changed

- Removed the custom `get_attr`/`set_attr` helpers
- Added `get_module`/`set_module` (thin wrappers over `get_submodule`/`set_submodule`)

Why

PyTorch has provided a robust module-replacement API since 1.x. Using the native API improves correctness, readability, and long-term maintainability, and avoids subtle issues with manual attribute traversal.
Behavior
No functional changes — this is a pure refactor.