Add TurboQuant3/4 modes to quantized_scaled_dot_product_attention #3

Open
dedalien wants to merge 2 commits into CC-Yeh:quantized_sdpa from dedalien:turboq/generic-quant-sdpa

Conversation

@dedalien

Integrates TurboQuant (arXiv 2504.19874) into the generic quant-SDPA infrastructure rather than as a standalone kernel.

New modes "turbo3" (3-bit) and "turbo4" (4-bit) use Lloyd-Max codebooks for the N(0,1) distribution of WHT-rotated, norm-normalised key coordinates. Codebooks are compile-time Metal constants; no runtime buffer needed.

Memory layout

  • K/V: uint32-packed bit indices, same PackReader<N> path as affine
  • k_scales/v_scales: per-vector float16 norms / √D
  • group_size == head_dim (one scale per full D-element vector)
  • Queries pre-rotated by caller via WHT (the full write path is sketched below)
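
To make the layout concrete, here is a hedged numpy model of the per-vector write path (the rotation, codebook, and little-endian packing order are illustrative assumptions; the kernel's actual bit order may differ):

```python
import numpy as np
from scipy.linalg import hadamard

def wht_rotate(x: np.ndarray) -> np.ndarray:
    """Orthonormal Walsh-Hadamard rotation (dense form; fast version further below)."""
    D = x.shape[-1]
    return (hadamard(D) @ x) / np.sqrt(D)

def quantize_vector(x_rot: np.ndarray, codebook: np.ndarray, bits: int):
    """Per-vector path for an already-rotated K/V row: scale, code, pack."""
    D = x_rot.shape[-1]
    scale = np.linalg.norm(x_rot) / np.sqrt(D)       # per-vector norm / sqrt(D)
    # Nearest codebook entry per coordinate of the normalised vector.
    idx = np.abs(x_rot[:, None] / scale - codebook[None, :]).argmin(axis=1)
    # Pack bits-wide indices into uint32 words, PackReader<N>-style.
    packed = np.zeros((D * bits + 31) // 32, dtype=np.uint32)
    for i, v in enumerate(idx):
        word, off = divmod(i * bits, 32)
        packed[word] |= np.uint32((int(v) << off) & 0xFFFFFFFF)
        if off + bits > 32:                          # a 3-bit index can straddle a word
            packed[word + 1] |= np.uint32(int(v) >> (32 - off))
    return packed, np.float16(scale)
```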

Changes

  • primitives.h/cpp: TurboQuant3/TurboQuant4 in QuantizationMode enum, string parsing, quantization_mode_to_string explicit cases
  • fast.cpp: TurboQuant validation branch (no biases, float scales, group_size == head_dim, head_dim ∈ {64,128,256}, GPU-only fallback)
  • quantized_utils.h: QuantMode enum extension, QuantConfig/Dequant specialisations, Lloyd-Max codebook constants (the read path is modelled after this list)
  • sdpa_vector.h: static_assert extended for bits=3; QUANT_SDPA_DISPATCH entries for D ∈ {64,128,256} gated on if constexpr (D >= group_size)
  • scaled_dot_product_attention.cpp: quant_mode_to_int (TurboQuant3=4, TurboQuant4=5)
  • python/src/fast.cpp: turbo3/turbo4 in mode docstring
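
On the read side, the Dequant specialisation essentially reduces, per coordinate, to: unpack the index, look up the compile-time codebook constant, multiply by the per-vector scale. A numpy model mirroring quantize_vector above (again not the Metal code):

```python
import numpy as np

def dequantize_vector(packed, scale, codebook, D: int, bits: int) -> np.ndarray:
    """Inverse of quantize_vector: unpack indices, look up, rescale."""
    out = np.empty(D, dtype=np.float32)
    mask = (1 << bits) - 1
    for i in range(D):
        word, off = divmod(i * bits, 32)
        v = int(packed[word]) >> off
        if off + bits > 32:                          # index straddles a word boundary
            v |= int(packed[word + 1]) << (32 - off)
        out[i] = codebook[v & mask] * float(scale)
    return out
```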

Tests
12 sub-cases: D ∈ {64,128,256} × bits ∈ {3,4} × B ∈ {1,2}, each targeting a distinct Metal template instantiation, plus explicit bfloat16 coverage and error-path tests.
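
As a smoke check of the scheme itself (not of the Metal kernels), the same grid round-trips through the numpy sketches above; for Gaussian inputs the per-vector relative RMS error should land near the classic Lloyd-Max distortions for N(0,1), roughly 19% at 3 bits and 10% at 4 bits:

```python
import numpy as np

rng = np.random.default_rng(0)
for D in (64, 128, 256):
    for bits in (3, 4):
        cb = lloyd_max_gaussian(bits)
        k = rng.standard_normal(D).astype(np.float32)
        k_rot = wht_rotate(k)                        # caller-side rotation
        packed, scale = quantize_vector(k_rot, cb, bits)
        k_hat = dequantize_vector(packed, scale, cb, D, bits)
        err = np.linalg.norm(k_hat - k_rot) / np.linalg.norm(k_rot)
        print(f"D={D} bits={bits} rel_err={err:.3f}")
```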

Implementation notes

This implements TurboQuant_mse (Algorithm 1). TurboQuant_prod (Algorithm 2,
which adds a 1-bit QJL residual correction on the MSE residual) is not
implemented; at 3–4 bits the inner product bias of TurboQuant_mse is
empirically negligible (paper Section 4.1, Fig. 1: bias converges to zero
above 2 bits). This matches existing community implementations (arozanov,
rachittshah, sharpner).

The random rotation uses a Walsh-Hadamard Transform (WHT) rather than the
full random Gaussian rotation from the paper. WHT runs in O(D log D) vs
O(D²), is fully vectorizable on GPU, and gives equivalent practical quality
on real LLM key vectors.
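
For reference, a minimal numpy sketch of the O(D log D) butterfly (the in-kernel/caller-side implementation details are not modelled here):

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Fast Walsh-Hadamard transform; normalised by 1/sqrt(D) so the
    rotation is orthonormal (norm-preserving). Requires D a power of two."""
    D = x.shape[-1]
    assert D & (D - 1) == 0, "D must be a power of two"
    y = np.array(x, dtype=np.float32).reshape(-1, D)  # batch of rows
    h = 1
    while h < D:
        for i in range(0, D, 2 * h):
            a = y[:, i:i + h].copy()
            b = y[:, i + h:i + 2 * h]
            y[:, i:i + h] = a + b                     # butterfly: sums...
            y[:, i + h:i + 2 * h] = a - b             # ...and differences
        h *= 2
    return (y / np.sqrt(D)).reshape(x.shape)
```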

Tested on Qwen3.6-27B (head_dim=256, 24Q/4KV, GQA=6) on a 24 GB unified-memory Mac. Enables longer-context generation on memory-constrained hardware by compressing the KV cache ~5×.
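
(The ~5× is just bit arithmetic against a bfloat16 baseline at D=256: turbo3 stores 3 bits per element plus one float16 scale amortised over the vector, 3 + 16/256 ≈ 3.06 bits/element, i.e. 16/3.06 ≈ 5.2×; turbo4 gives 16/(4 + 16/256) ≈ 3.9×.)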

Note: This PR builds on ml-explore#3026.
