
Conversation


Sid-V5 commented Feb 12, 2026

Implemented the converter for aten::_grouped_mm.default to address #2795.

Changes

  • Added aten_grouped_mm function in onnxscript/function_libs/torch_lib/ops/core.py

Implementation Details

The aten::_grouped_mm operator performs grouped matrix multiplication. This implementation handles the batch/dense mode (when offs is None), where groups are implicit in the batch dimension:

  • self: (G, M, K), mat2: (G, K, N) → result: (G, M, N)
  • Uses op.MatMul for the core computation
  • Supports optional bias addition via op.Add
  • Supports optional out_dtype casting via op.Cast

The offset-based mode (when offs is provided) raises NotImplementedError, as it requires segment-level matrix multiplications that are not directly expressible with standard ONNX operators.
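For concreteness, here is a minimal sketch of the dense-mode converter described above. It assumes the module-level names already imported in core.py (the op opset alias, torch_op, Optional, and the TFloat/INT64 tensor type aliases), the trace_only registration style used by other converters that need Python-level control flow, and illustrative parameter names taken from the description (offs, bias, out_dtype); the exact signature in the PR's diff may differ.

```python
# Minimal sketch, not the exact PR diff. Assumes op (ONNX opset alias),
# torch_op, Optional, TFloat and INT64 are already in scope in core.py.
@torch_op("aten::_grouped_mm", trace_only=True)
def aten_grouped_mm(
    self: TFloat,
    mat2: TFloat,
    offs: Optional[INT64] = None,
    bias: Optional[TFloat] = None,
    out_dtype: Optional[int] = None,
) -> TFloat:
    """Grouped matrix multiplication (batch/dense mode only)."""
    if offs is not None:
        # Offset-based (jagged) grouped matmul cannot be expressed with
        # standard ONNX operators, so it is rejected at conversion time.
        raise NotImplementedError("aten::_grouped_mm with offs is not supported")

    # Batch/dense mode: self (G, M, K) @ mat2 (G, K, N) -> (G, M, N).
    result = op.MatMul(self, mat2)
    if bias is not None:
        result = op.Add(result, bias)
    if out_dtype is not None:
        # Assumes out_dtype has been mapped to the matching ONNX element type.
        result = op.Cast(result, to=out_dtype)
    return result
```

Using the trace_only style lets the optional-argument branches be resolved in Python at conversion time instead of being emitted as ONNX If nodes.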

Testing

The function follows the same patterns as other converters in core.py (e.g., aten_bmm, aten_mm) and uses the @torch_op decorator for automatic registration.

Fixes #2795


Sid-V5 commented Feb 12, 2026

@microsoft-github-policy-service agree



Successfully merging this pull request may close these issues.

Missing converter for OpOverload(op='aten._grouped_mm', overload='default')
