Restore clearing collision normal cache in broad_phase #50
Open
chien-an-chen wants to merge 2 commits into amd-integration from
Conversation
Pull request overview
Switches the rigid-body broadphase SAP implementation back to an LDS/shared-memory fast path for small scenes, with additional packing/overlap-check optimizations.
Changes:
- Introduces func_broad_phase_lds using shared arrays (LDS) for sorting and active-list management when n_geoms <= MAX_GEOMS_IN_LDS.
- Packs (i_g, is_max) into a single shared int buffer to reduce LDS traffic.
- Adjusts AABB overlap checks and removes a redundant axis-specific early-out.
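The packing and overlap-check changes above can be sketched as follows. This is a minimal Python model, not the kernel code: the bit layout (low bit for is_max) and the helper names are assumptions for illustration, and the real layout inside func_broad_phase_lds may differ.

```python
# Hypothetical sketch of packing (i_g, is_max) into one int, as the
# PR overview describes. Assumes the low bit carries the is_max flag.

def pack_event(i_g: int, is_max: bool) -> int:
    """Pack a geom index and a min/max endpoint flag into one int."""
    return (i_g << 1) | int(is_max)

def unpack_event(packed: int) -> tuple[int, bool]:
    """Recover (i_g, is_max) from a packed event."""
    return packed >> 1, bool(packed & 1)

def aabb_overlap(min_a, max_a, min_b, max_b) -> bool:
    """Standard AABB overlap test: intervals must overlap on all three axes."""
    return all(min_a[k] <= max_b[k] and min_b[k] <= max_a[k] for k in range(3))
```

Packing halves the number of shared-memory words per sort event, which is where the LDS-traffic reduction comes from.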
Collaborator
/run-ci
2 similar comments
Author
/run-ci
Author
/run-ci
gpinkert added a commit that referenced this pull request Apr 30, 2026

…bgroup

Activate THREADS_PER_ENV=4 (BLOCK_DIM=64, ENVS_PER_BLOCK=16) and use the 4 lanes per env to cooperatively execute the warm-start AABB re-fill phase, partitioning the 2*n_geoms events 4 ways. Sort and sweep stay single-threaded on lane 0 due to sequential dependencies through n_active and the in-place sort buffer. Communication is via global memory + a single qd.simt.block.sync() barrier; no LDS is allocated. This intentionally keeps the design clear of the §6 LDS-occupancy regression that PR #50 hit at 14-18 KiB/WG.

Why:
* The prior analysis (Pattern P1) showed the JIT was launching 152 lanes per env but only 1 was doing useful work (99% lane gating). T2.1 puts the wasted lanes to work on the warm-start re-fill, the only embarrassingly parallel sub-phase in the SAP loop.
* Available subgroup primitives in quadrants/lang/simt/subgroup.py do not include shuffle_xor or any_true, which are needed for a full cooperative bitonic sort. Within those constraints, parallelizing the warm-start re-fill is the largest no-LDS subgroup-cooperative win available.

Hibernation path is intentionally not parallelized in T2.1.

Risk: medium. Restructures the per-env loop body to use threaded indexing (i_thread, i_b, i_t). Adds an explicit qd.simt.block.sync() barrier between phases. Pytest gate validation required.

Co-Authored-By: Grant Pinkert <gpinkert@amd.com>
gpinkert added a commit that referenced this pull request Apr 30, 2026

…bgroup

Activate THREADS_PER_ENV=4 (BLOCK_DIM=64, ENVS_PER_BLOCK=16) and use the 4 lanes per env to cooperatively execute the warm-start AABB re-fill phase, partitioning the 2*n_geoms events 4 ways. Sort and sweep stay single-threaded on lane 0 due to sequential dependencies through n_active and the in-place sort buffer. Communication is via global memory + a single qd.simt.block.sync() barrier; no LDS is allocated. This intentionally keeps the design clear of the LDS-occupancy regression that PR #50 hit at 14-18 KiB/WG.

Why:
* The prior analysis (Pattern P1) showed the JIT was launching 152 lanes per env but only 1 was doing useful work (99% lane gating). T2.1 puts the wasted lanes to work on the warm-start re-fill, the only embarrassingly parallel sub-phase in the SAP loop.
* Available subgroup primitives in quadrants/lang/simt/subgroup.py do not include shuffle_xor or any_true, which are needed for a full cooperative bitonic sort. Within those constraints, parallelizing the warm-start re-fill is the largest no-LDS subgroup-cooperative win available.

Hibernation path is intentionally not parallelized.

This commit also folds in the vec3 AABB-load pattern from a previous attempted commit (was: T1.4 vectorize AABB component loads). The vec3 reads were a stand-alone no-op at the bench level (the JIT was already coalescing the 6 scalar reads), but they make the source cleaner and came along for free with the T2.1 restructure.

Measured (cx63, 3-run mean FPS @ 8192 envs, vs amd-integration baseline):
  baseline:           488.3 us k_main, 262.5 ms k_total, 138.2 FPS
  this commit (full): 413.8 us k_main, 213.7 ms k_total, 140.0 FPS
  delta:              -15.3% k_main, -18.6% k_total, +1.3% FPS

Risk: medium. Restructures the per-env loop body to use threaded indexing (i_thread, i_b, i_t). Adds an explicit qd.simt.block.sync() barrier between phases. Pytest gate on collision tests passed clean on the prior Tier 1 stack; T2.1 itself awaiting full pytest run.

Co-Authored-By: Grant Pinkert <gpinkert@amd.com>
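The 4-way partition of the 2*n_geoms warm-start events described in the commit message can be modeled with a short sketch. This is illustrative only: a strided split is assumed, and the commit does not show the kernel's actual indexing, so the real split may instead be blocked.

```python
# Hypothetical model of partitioning 2*n_geoms warm-start events across
# THREADS_PER_ENV lanes, per the commit message. A strided split is
# assumed; the real kernel may partition differently.

THREADS_PER_ENV = 4

def events_for_lane(i_thread: int, n_geoms: int) -> list[int]:
    """Events (2 per geom: min and max endpoints) handled by one lane."""
    n_events = 2 * n_geoms
    return list(range(i_thread, n_events, THREADS_PER_ENV))
```

Each event lands on exactly one lane, so the re-fill needs no atomics; the single block-wide barrier after this phase is what hands the refreshed data back to lane 0 for the sequential sort and sweep.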
cb3421d to 310c882 (Compare)
Collaborator
/run-ci
yaoliu13 requested changes May 3, 2026
Collaborator
yaoliu13 left a comment
This PR has conflicts if we want to merge it to amd-integration. Please read the Confluence page "Genesis PR Review Process".
310c882 to 53d345f (Compare)
Author
/run-ci
53d345f to fd421e6 (Compare)
Collaborator
/run-ci
1 similar comment
Collaborator
/run-ci
Collaborator
pre-submit is not good.
Summary
Restore the clearing of the collision normal cache to fix a potential accuracy issue.