TP-invariant Training: bitwise-identical training across TP degrees and GPU Architecture #4740
Draft
jinzex wants to merge 2 commits
Conversation
When set, NullTokenizer treats --vocab-size N as the total vocab including eod, so eod_id=N-1 and tokenizer.vocab_size=N. Default behavior is unchanged (eod_id=N, tokenizer.vocab_size=N+1). Matches Megatron-Bridge's NullTokenizer convention for mock-data benchmarks.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
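For reference, here is a minimal sketch of the two vocab conventions described above. The `inclusive_eod` parameter is a hypothetical stand-in for the actual flag; only the eod_id / vocab_size arithmetic comes from the commit message.

```python
# Hypothetical sketch of the two NullTokenizer vocab conventions;
# the `inclusive_eod` name is illustrative, not the real flag.
class NullTokenizerSketch:
    def __init__(self, vocab_size: int, inclusive_eod: bool = False):
        if inclusive_eod:
            # New convention: --vocab-size N is the total vocab including eod.
            self.eod_id = vocab_size - 1
            self.vocab_size = vocab_size
        else:
            # Default (unchanged): eod is appended after the N real tokens.
            self.eod_id = vocab_size
            self.vocab_size = vocab_size + 1

assert NullTokenizerSketch(1000, inclusive_eod=True).eod_id == 999
assert NullTokenizerSketch(1000).vocab_size == 1001
```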
Bitwise-identical forward, backward, and end-to-end training across TP=1,
2, 4, 8 on Megatron-Core TransformerBlocks. Gated by NVTE_TP_INVARIANT_MODE
environment variable.
Components:
- TP-invariant GEMM (all-gather sharded weight, full-K GEMM) for both
column and row parallel linear in the TE patches under
examples/tp-numerics/patches/.
- Gradient clipping pow2-rounding to absorb 1-ULP cross-TP norm jitter.
- RMSNorm dgamma all-gather with rank-0-only reduction.
- Batch-invariant Triton kernels (BIK) for M-invariant matmul.
- Cross-entropy all-gather over exp_logits.
Validation:
- tests/unit_tests/transformer/test_tp_invariant.py — TP=1≡2≡4 bitwise
on a small TransformerBlock (fp32+bf16).
- examples/tp-numerics/submit_qwen3_{0.6b,8b,moe_toy}_tp_invariant.sh —
end-to-end raw-MLM training scripts; Qwen3-0.6B (TP=1) and Qwen3-8B
(TP=4) bitwise across 100 iters and across B300 ≡ H100 hardware.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
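To illustrate the TP-invariant GEMM component above: stock row-parallel linear computes a partial GEMM per K-shard and all-reduces the partials, whose summation order differs from TP=1. A minimal sketch of the full-K alternative, assuming torch.distributed and illustrative names (this is not the actual TE patch):

```python
import torch
import torch.distributed as dist

def tp_invariant_row_parallel_forward(x_shard, w_shard, tp_group):
    # Row-parallel linear shards the K dim:
    # x_shard: [M, K/tp], w_shard: [N, K/tp].
    tp = dist.get_world_size(tp_group)
    x_parts = [torch.empty_like(x_shard) for _ in range(tp)]
    w_parts = [torch.empty_like(w_shard) for _ in range(tp)]
    dist.all_gather(x_parts, x_shard.contiguous(), group=tp_group)
    dist.all_gather(w_parts, w_shard.contiguous(), group=tp_group)
    x_full = torch.cat(x_parts, dim=-1)  # [M, K]
    w_full = torch.cat(w_parts, dim=-1)  # [N, K]
    # One full-K GEMM: the same reduction order on every rank and at TP=1,
    # instead of per-shard partial GEMMs summed by an all-reduce.
    return x_full @ w_full.t()           # [M, N], replicated across ranks
```

The trade-off is extra communication and redundant compute in exchange for a TP-independent accumulation order.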
jinzex added a commit to jinzex/TransformerEngine that referenced this pull request on May 13, 2026
Gated on NVTE_TP_INVARIANT_MODE=1 (default off; stock paths unchanged).
- module/linear.py: row-parallel FWD + BWD full GEMM matching TP=1 K-dim accumulation.
- module/layernorm_linear.py: column-parallel BWD dgrad full GEMM with gated deinterleave for SwiGLU FC1 (partition_stride > 1).

Companion Megatron-LM PR (gates this code path via env var): NVIDIA/Megatron-LM#4740.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: Jinze Xue <jinzex@nvidia.com>
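The column-parallel dgrad change can be sketched the same way: stock TP multiplies the local output-grad shard by the local weight shard and all-reduces the partial dgrads, while the full GEMM reduces over the whole N dim at once. A minimal sketch with illustrative names, assuming Megatron's [out, in] weight layout (not the TE patch itself):

```python
import torch
import torch.distributed as dist

def tp_invariant_column_parallel_dgrad(grad_out_shard, w_shard, tp_group):
    # Column-parallel linear shards the output dim N:
    # grad_out_shard: [M, N/tp], w_shard: [N/tp, K].
    tp = dist.get_world_size(tp_group)
    go_parts = [torch.empty_like(grad_out_shard) for _ in range(tp)]
    w_parts = [torch.empty_like(w_shard) for _ in range(tp)]
    dist.all_gather(go_parts, grad_out_shard.contiguous(), group=tp_group)
    dist.all_gather(w_parts, w_shard.contiguous(), group=tp_group)
    grad_out = torch.cat(go_parts, dim=-1)  # [M, N]
    w_full = torch.cat(w_parts, dim=0)      # [N, K]
    # One GEMM reducing over the full N dim, matching TP=1 bitwise,
    # instead of partial dgrads summed by an all-reduce.
    return grad_out @ w_full                # [M, K]
```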
What does this PR do?
Bitwise-identical fwd/bwd/E2E training on Megatron-Core TransformerBlocks across TP=1/2/4/8, gated by NVTE_TP_INVARIANT_MODE=1 (default off). Fixes span TP-invariant GEMM (fwd+bwd, column + row parallel), gated deinterleave, cross-entropy all-gather, output-projection all-gather, float64 + pow2 gradient clipping, RMSNorm dgamma rank-0 reduction, and batch-invariant Triton kernels.
Companion TE PR: NVIDIA/TransformerEngine#2977.
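To illustrate the float64 + pow2 gradient-clipping item: the global grad norm is accumulated in float64 and snapped to a power of two before forming the clip coefficient, so a 1-ULP cross-TP difference in the norm cannot change the scaling. A minimal sketch with hypothetical helper names, not the actual patch:

```python
import math
import torch

def pow2_round(x: float) -> float:
    # Snap to the nearest power of two; powers of two are exact in binary
    # floating point, so a 1-ULP jitter in x yields the same result.
    return math.ldexp(1.0, round(math.log2(x))) if x > 0.0 else 0.0

def clip_grads_pow2(grads, max_norm: float):
    # float64 accumulation of the squared norm.
    total_sq = sum(g.double().pow(2).sum() for g in grads)
    norm = pow2_round(float(total_sq.sqrt()))
    if norm > max_norm:
        coef = max_norm / norm  # identical across TP degrees once norm is snapped
        for g in grads:
            g.mul_(coef)
    return norm
```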
Contribution process
Tests
Unit test added: tests/unit_tests/transformer/test_tp_invariant.py.

Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Reviewers are assigned based on .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.