Here is the TE-FL-related configuration for running Qwen3-0.6B. Please check whether it meets your expectations.
```yaml
# FlagOS-related arguments for the unified computation and communication backends
te_fl_prefer: vendor # flagos  # enable flagos:triton in Transformer Engine FL
te_fl_per_op: "rmsnorm_fwd=vendor:ascend|flagos" # per-op backend selection for Transformer Engine FL
te_fl_allow_vendors: "ascend" # vendors allowed for Transformer Engine FL
te_fl_deny_vendors: "nvidia" # vendors denied for Transformer Engine FL
enable_flag_gems: False # True  # enable FlagGems to replace torch ops during distributed training
flag_gems_log_path: xxx/TE-FL-test/FlagScale/log_gems/gems.log # path of the FlagGems log
flag_gems_unused: [to, copy] # ops excluded from FlagGems replacement
distributed_backend: nccl # flagcx  # enable FlagCX for distributed training
```
Here is the TE-FL-related information from the logs printed during model training:
```
[default0]:training ...
[default0]:Setting rerun_state_machine.current_iteration to 0...
[default0]:[before the start of training step] datetime: 2026-02-27 09:26:42
[default0]:ninja: no work to do.
[default0]:[2026-02-27 09:27:15,708 TE-FL manager.py:417 INFO] Op 'multi_tensor_adam' using 'default.flagos' (kind=flagos, vendor=None)
[default0]:[WARNING] Please DO NOT tune args ['num_warps']!
[default0]:[WARNING] Please DO NOT tune args ['num_warps']!
```
Could you provide a complete training log so we can verify whether other TE-related ops were executed, such as generic_temm?
Why is te_fl_prefer: vendor configured instead of flagos?
If te_fl_prefer: vendor, why did multi_tensor_adam execute the FlagOS implementation?
Why is enable_flag_gems: false? This will prevent some torch ops from being replaced with FlagGems.
Please set transformer_impl: transformer_engine to use TransformerEngine-FL, and additionally adapt more FlagOS ops in TE-FL.
The recommended config is as follows:
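A minimal sketch of the recommended settings, assuming the same FlagScale argument names as in the config above; the flagos, True, and flagcx values are the commented-out alternatives from that config, and transformer_impl comes from the comment above:

```yaml
# Sketch only: values reconstructed from the review comments above;
# verify against your FlagScale version before use.
transformer_impl: transformer_engine  # use TransformerEngine-FL
te_fl_prefer: flagos                  # prefer flagos:triton implementations in TE-FL
te_fl_per_op: "rmsnorm_fwd=vendor:ascend|flagos"  # per-op backend selection
te_fl_allow_vendors: "ascend"
te_fl_deny_vendors: "nvidia"
enable_flag_gems: True                # replace supported torch ops with FlagGems
flag_gems_unused: [to, copy]          # ops excluded from FlagGems replacement
distributed_backend: flagcx           # use FlagCX for distributed communication
```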
Description
Add the vendor:ascend backend. The latest code has been synchronized, and pre-commit has been run locally.
Fixes # (issue)
Type of change
Changes
Please list the changes introduced in this PR:
Checklist: