Dynamic upstream keepalive cache size in Nginx #17

@sambhav2411

Description

Problem

The keepalive N directive allocates a fixed-size connection cache per upstream block at config-parse time. Under high concurrency — particularly HTTP/2 → HTTP/1.1 fanout where a single H2 connection can produce many parallel upstream requests — the cache fills and begins evicting live connections. This causes repeated TCP/TLS churn on every burst, increasing latency and backend load. A static value that is safe for lightly loaded upstreams is insufficient for heavily loaded ones, and blindly increasing it for all upstreams wastes file descriptors and RAM.
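For context, this is the status quo being described: a single fixed value chosen at config time. A minimal sketch (the address and the value 16 are illustrative, not from the issue):

```nginx
upstream backend {
    server 10.0.0.1:8080;

    # Fixed at config-parse time. Under an HTTP/2 fanout burst of, say,
    # 100 parallel upstream requests, connections beyond these 16 slots
    # are closed and reopened on every burst (TCP/TLS churn).
    keepalive 16;
}
```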

Proposed Solution

Detect cache saturation via eviction rate and expand the cache at runtime up to a configurable ceiling. Three new directives:

keepalive_cache_max_threshold 256;  # hard ceiling
keepalive_cache_churn 32;           # eviction count to trigger expansion
keepalive_cache_churn_duration 1s;  # rolling window

When evictions within the window exceed keepalive_cache_churn, new slots are appended to the free queue using the upstream's ngx_pool_t — no heap allocation in the hot path. The cache grows toward keepalive_cache_max_threshold by doubling each time it expands.
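The expansion policy above can be modeled in isolation. The sketch below is a standalone illustration of the rolling-window trigger and capped doubling, not actual nginx code; all names (kc_state_t, kc_on_eviction) are hypothetical, and the real implementation would hang this state off the upstream's shared context and allocate slots from its ngx_pool_t.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the proposed expansion policy. Field names map
   onto the three proposed directives. Timestamps are in milliseconds. */
typedef struct {
    size_t cache_size;      /* current number of keepalive slots            */
    size_t max_threshold;   /* keepalive_cache_max_threshold (hard ceiling) */
    size_t churn;           /* keepalive_cache_churn (evictions to trip)    */
    long   churn_duration;  /* keepalive_cache_churn_duration, in ms        */
    size_t evictions;       /* evictions counted in the current window      */
    long   window_start;    /* time (ms) at which the window opened         */
} kc_state_t;

/* Record one eviction observed at time `now`; return the (possibly
   expanded) cache size. The cache doubles whenever the eviction count
   inside one rolling window exceeds the churn threshold, clamped to
   max_threshold. */
static size_t
kc_on_eviction(kc_state_t *s, long now)
{
    if (now - s->window_start > s->churn_duration) {
        /* rolling window expired: restart the eviction count */
        s->window_start = now;
        s->evictions = 0;
    }

    s->evictions++;

    if (s->evictions > s->churn && s->cache_size < s->max_threshold) {
        s->cache_size *= 2;
        if (s->cache_size > s->max_threshold) {
            s->cache_size = s->max_threshold;
        }

        /* count afresh after growing, so one burst triggers one doubling */
        s->evictions = 0;
        s->window_start = now;
    }

    return s->cache_size;
}
```

With the example values from the issue (churn 32, ceiling 256) and a starting size of 32, a sustained burst doubles the cache 32 → 64 → 128 → 256 and then stops growing; an upstream whose eviction rate never trips the window keeps its original footprint.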

Please share feedback on whether this problem and the proposed solution are reasonable for CDNs using Nginx.
