
Conversation


@dependabot dependabot bot commented on behalf of github Feb 5, 2026

Bumps protobuf from 4.25.8 to 5.29.6.

Release notes

Sourced from protobuf's releases.

Protocol Buffers v34.0-rc1

Announcements

Bazel

Compiler

C++

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the Security Alerts page.

hariharans29 and others added 7 commits February 3, 2026 11:23
### Description

Refer to V1 of the fix here:
microsoft#27214

This PR includes all fixes from the V1 PR, plus logic to invalidate the lhs
cache pointers when the pad buffer's underlying buffer has changed due to a
resize. The ARM team will look at potentially enhancing this logic after the
1.24.0 release.

### Motivation and Context
Fix microsoft#26669
### Description

According to
https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#commicrosoftmoe,

> swiglu_limit : float
> The limit used to clamp in SwiGLU. No clamp when limit is not provided.

However, currently, the default is set to 0, meaning we clamp to 0 if no
limit is provided.
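A minimal numpy sketch of the intended behavior, assuming a simplified SwiGLU formulation (the exact activation and clamping used by the MoE contrib op may differ): the clamp is applied only when a limit is actually provided, whereas a default of 0 would clamp every gate value to 0.

```python
import numpy as np

def swiglu(gate, linear, limit=None):
    """Hypothetical sketch: clamp only when a limit is provided.

    With the buggy default described above (limit = 0), every gate value
    would be clamped to 0 and the output would vanish.
    """
    if limit is not None:  # no clamp when limit is not provided
        gate = np.minimum(gate, limit)
        linear = np.clip(linear, -limit, limit)
    return (gate * (1.0 / (1.0 + np.exp(-gate)))) * linear  # SiLU(gate) * linear

x_gate = np.array([3.0, -1.0, 8.0])
x_lin = np.array([1.0, 2.0, 3.0])
print(swiglu(x_gate, x_lin))             # no clamping
print(swiglu(x_gate, x_lin, limit=7.0))  # clamps the 8.0 gate value to 7.0
```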


### Motivation and Context

Fixes microsoft#27220. See there for bug description and reproduction.

Hoping to get this in before 1.24.0 releases. cc @guschmue
### Description
Fixed a bug in the fallback provider logic when creating an inference session
that could lead to losing GPU acceleration.
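For context, the provider fallback idea is visible in the Python API: execution providers are tried in order, with `CPUExecutionProvider` as the final fallback. A minimal sketch (the `model.onnx` path is a placeholder), not the code touched by this commit:

```python
import onnxruntime as ort

# Providers are tried in order; CPUExecutionProvider is the final fallback.
# If the fallback logic misbehaves, the session can silently end up CPU-only
# even though CUDA is available, losing GPU acceleration.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)
print(session.get_providers())  # shows which providers were actually applied
```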



### Motivation and Context
Fixing this for the PR here
[microsoft#25145](microsoft#25145)
…soft#27217)

This pull request updates the attention kernel selection logic and
clarifies support for unidirectional (causal) attention in the CUDA
attention implementation. The main changes focus on improving
documentation, removing outdated comments, and explicitly setting the
kernel type for better maintainability and clarity.

Kernel selection and configuration improvements:

* Explicitly set the `kernel_type` field to `AttentionKernel_Unfused` in
the `AttentionData` structure to clarify which kernel is being used and
improve future extensibility.

Documentation and code clarity:

* Added comments to clarify that unidirectional (causal) attention is
supported by several attention kernel implementations, and that the TRT
fused runner is only used for non-unidirectional cases, as enforced
elsewhere.
* Removed outdated TODO comments regarding parameter continuation and
kernel selection, as these are now handled more explicitly in the code.
[[1]](diffhunk://#diff-0701e4cc6d4951894ae1a60f35c1e6c0f69ba7595f896a23c8f5ed7265eab4ffL194)
[[2]](diffhunk://#diff-0701e4cc6d4951894ae1a60f35c1e6c0f69ba7595f896a23c8f5ed7265eab4ffL223-R227)
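For illustration, a small Python sketch of the selection pattern described above: the kernel type defaults to an explicit unfused value and is upgraded to the TRT fused runner only when the attention is not unidirectional. The enum values and predicates here are hypothetical and only mirror the structure of the change, not the actual onnxruntime code.

```python
from enum import Enum, auto

class AttentionKernel(Enum):
    UNFUSED = auto()
    TRT_FUSED = auto()
    FLASH = auto()

def select_kernel(is_unidirectional: bool,
                  can_use_trt_fused_runner: bool,
                  can_use_flash: bool) -> AttentionKernel:
    # The default is set explicitly so the chosen kernel is always well defined.
    kernel = AttentionKernel.UNFUSED
    if can_use_flash:
        kernel = AttentionKernel.FLASH      # flash attention supports causal masks
    elif can_use_trt_fused_runner and not is_unidirectional:
        kernel = AttentionKernel.TRT_FUSED  # TRT fused runner: non-causal only
    return kernel

print(select_kernel(is_unidirectional=True,
                    can_use_trt_fused_runner=True,
                    can_use_flash=False))   # -> AttentionKernel.UNFUSED
```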
… available (microsoft#27206)

### Description
As title.

The check for whether FlashAttention is available also verifies that torch has
CUDA support, that the system has the right device to run FlashAttention, etc.
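A minimal sketch of that kind of availability check using standard torch APIs; the `flash_attn` package name and the compute-capability threshold are assumptions here, and the actual test helper may check more.

```python
import importlib.util
import torch

def flash_attention_available() -> bool:
    """Hypothetical sketch: FlashAttention is only usable if the package is
    installed, torch was built with CUDA, and the GPU is recent enough."""
    if importlib.util.find_spec("flash_attn") is None:
        return False
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major >= 8  # assumption: Ampere (SM80) or newer

print(flash_attention_available())
```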

### Motivation and Context
Fix Windows CUDA CI failures

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
)

### Description
Takeaway from microsoft#26144

Resolve microsoft#26144

### Motivation and Context
Improve kernel input validation for ConvTranspose
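The commit message does not spell out which checks were added, so the following is only a generic Python sketch of the kind of shape and attribute validation a ConvTranspose kernel typically performs; every name here is hypothetical.

```python
def validate_conv_transpose_inputs(x_shape, w_shape, kernel_shape=None,
                                   strides=None, pads=None):
    """Hypothetical sketch of ConvTranspose input validation."""
    if len(x_shape) < 3:
        raise ValueError("input must have at least 3 dims (N, C, spatial...)")
    if len(w_shape) != len(x_shape):
        raise ValueError("weight rank must match input rank")
    if x_shape[1] != w_shape[0]:
        raise ValueError("input channels must match weight dim 0")
    spatial = len(x_shape) - 2
    if kernel_shape is not None and len(kernel_shape) != spatial:
        raise ValueError("kernel_shape length must equal the number of spatial dims")
    if strides is not None and len(strides) != spatial:
        raise ValueError("strides length must equal the number of spatial dims")
    if pads is not None and len(pads) != 2 * spatial:
        raise ValueError("pads must have begin/end values for each spatial dim")

# Example: NCHW input with an ONNX ConvTranspose weight of shape (C_in, C_out/group, kH, kW)
validate_conv_transpose_inputs((1, 8, 32, 32), (8, 16, 3, 3),
                               kernel_shape=(3, 3), strides=(2, 2), pads=(1, 1, 1, 1))
```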

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 4.25.8 to 5.29.6.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)

---
updated-dependencies:
- dependency-name: protobuf
  dependency-version: 5.29.6
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies and python labels Feb 5, 2026

dependabot bot commented on behalf of github Feb 7, 2026

A newer version of protobuf exists, but since this PR has been edited by someone other than Dependabot I haven't updated it. You'll get a PR for the updated version as normal once this PR is merged.
