SC'25 UltraAttn: Efficiently Parallelizing Attention through Hierarchical Context-Tiling
Updated Aug 14, 2025 - Python
A high-performance Block Sparse Attention kernel in Triton.
Pre-compiled custom CUDA extension for Block Sparse Attention (Python 3.11 / PyTorch 2.6.0+cu124).
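The repositories above ship optimized Triton/CUDA kernels for block-sparse attention. As a rough illustration of what such kernels compute, here is a dense NumPy reference sketch (my own illustration, not code from any of these repositories): attention scores are only kept inside the query/key blocks selected by a coarse block mask, and everything else is excluded from the softmax. The function name `block_sparse_attention` and the `block_mask` layout are assumptions for this sketch.

```python
import numpy as np

def block_sparse_attention(q, k, v, block_mask, block_size):
    """Dense reference for block-sparse attention.

    q, k, v:     (n, d) arrays, with n divisible by block_size.
    block_mask:  (n//block_size, n//block_size) boolean array; True means
                 the corresponding query-block attends to that key-block.
    Assumes every query row has at least one allowed key-block.
    """
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Expand the coarse block mask to a full (n, n) element-wise mask.
    full_mask = np.kron(block_mask, np.ones((block_size, block_size), dtype=bool))
    # Masked-out positions get -inf so they contribute zero after softmax.
    scores = np.where(full_mask, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

A real kernel never materializes the full score matrix: it iterates only over the blocks where `block_mask` is True, which is where the speedup over dense attention comes from. With an all-True mask, this sketch reduces to ordinary scaled dot-product attention.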