Zero-loss cross-region packet relay network.
Adaptive FEC, encrypted UDP tunnels, real-time latency-mesh routing, and QUIC 0-RTT edge termination. Written in Rust.
Use case: Deploy relay nodes at global PoPs to eliminate packet loss and reduce tail latency between regions. Drop-in improvement for any TCP workload crossing unreliable or high-latency links.
- Gaming networks - eliminate tail latency spikes from packet loss on intercontinental player routes
- Financial trading - consistent sub-millisecond cross-region transport despite lossy links
- Live video/VoIP - FEC silently recovers loss so TCP retransmits never fire
- CDN and proxy infrastructure - reliable backhaul between PoPs, especially over degraded or unreliable last-mile ISP links
- Cross-ocean backbone - long-haul routes with 5-10% loss become invisible to your application
- Any TCP workload crossing unreliable links - the relay is transparent to your application; just point TCP at the edge
| Feature | Detail |
|---|---|
| Adaptive FEC | Reed-Solomon forward error correction auto-tunes parity overhead from 5% to 67% based on measured loss rate |
| Multi-hop mesh routing | Real-time Dijkstra shortest-path over live latency data, not static BGP. Traffic traverses multiple PoPs on the optimal path. |
| TCP splitting | Edge nodes locally ACK client connections and relay through the mesh, masking round-trip latency from the client |
| QUIC 0-RTT edge | Clients connect via QUIC with zero round-trip handshakes for returning connections |
| ChaCha20-Poly1305 tunnels | All inter-node traffic encrypted with AEAD. Pre-shared keys, no PKI required. |
| Optional TLS at edge | TCP edge connections can be TLS-wrapped with your own certs |
| Live latency matrix | Continuous PING/PONG probing (default 500ms) with EWMA smoothing, feeds the routing engine in real time |
| Admin API | HTTP health check and full status endpoint (peer count, active flows, latency matrix, routes) with optional bearer token auth |
| Zero added loss | Encryption, FEC, headers, and forwarding added no measurable packet loss at any tested loss level |
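The latency-matrix row above mentions EWMA smoothing of probe RTTs before they feed the routing engine. A minimal sketch of that smoothing step (the struct name and the α value are illustrative, not taken from this codebase):

```rust
/// Exponentially weighted moving average over RTT probe samples.
/// An alpha near 1.0 reacts quickly to change; near 0.0 smooths heavily.
struct EwmaRtt {
    alpha: f64,
    value: Option<f64>,
}

impl EwmaRtt {
    fn new(alpha: f64) -> Self {
        Self { alpha, value: None }
    }

    /// Fold one PING/PONG sample (milliseconds) into the estimate.
    fn update(&mut self, sample_ms: f64) -> f64 {
        let v = match self.value {
            None => sample_ms, // first sample seeds the estimate
            Some(prev) => self.alpha * sample_ms + (1.0 - self.alpha) * prev,
        };
        self.value = Some(v);
        v
    }
}

fn main() {
    let mut rtt = EwmaRtt::new(0.2);
    for s in [270.0, 272.0, 310.0, 271.0] {
        println!("smoothed: {:.1} ms", rtt.update(s));
    }
}
```

Smoothing matters here because a single delayed probe would otherwise flip routes; the EWMA damps transient spikes while still tracking sustained latency shifts.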
- Not a VPN - no IP encapsulation, no TUN/TAP, no per-user isolation
- Not a SOCKS or HTTP proxy - no proxy protocol; raw TCP/QUIC edge only
- Not a CDN - no content caching; pure packet relay with loss recovery
- Not UDP passthrough - edge accepts TCP and QUIC only (the relay layer uses UDP internally but does not expose it to clients)
If you are looking for a transparent relay fabric that sits between your existing TCP services and makes lossy links invisible, this is it.
Relays packets between globally distributed PoP (Point of Presence) nodes with:

- Zero packet loss up to 10% link loss - adaptive Reed-Solomon FEC absorbs loss with negligible throughput impact, auto-tuning based on observed loss:

  | Measured Loss | Data Shards | Parity Shards | Overhead |
  |---|---|---|---|
  | < 0.5% | 20 | 1 | ~5% |
  | 0.5 - 1% | 10 | 2 | ~20% |
  | 1 - 3% | 10 | 4 | ~40% |
  | 3 - 5% | 8 | 4 | ~50% |
  | 5%+ | 6 | 4 | ~67% |

- Zero relay overhead - measured loss exactly matches simulated network loss; the relay adds nothing
- Optimal routing via real-time latency mesh with Dijkstra shortest-path (not BGP)
- Instant connections via QUIC 0-RTT + TCP splitting at edge
- Always-encrypted tunnels with ChaCha20-Poly1305
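The FEC tier table above maps measured loss to shard counts. A sketch of that selection as a pure function (the function name and exact threshold handling are assumptions; only the tiers themselves come from the table):

```rust
/// Pick (data_shards, parity_shards) for Reed-Solomon FEC from the
/// measured loss rate, following the tier table above. The real
/// implementation may apply hysteresis when switching tiers.
fn fec_shards(loss_rate: f64) -> (usize, usize) {
    match loss_rate {
        l if l < 0.005 => (20, 1), // ~5% overhead
        l if l < 0.01  => (10, 2), // ~20% overhead
        l if l < 0.03  => (10, 4), // ~40% overhead
        l if l < 0.05  => (8, 4),  // ~50% overhead
        _              => (6, 4),  // ~67% overhead
    }
}

fn main() {
    let (d, p) = fec_shards(0.02);
    println!(
        "loss 2% -> {d} data + {p} parity (~{:.0}% overhead)",
        100.0 * p as f64 / d as f64
    );
}
```

The shape of the table is the usual FEC trade-off: at low loss, one parity shard over a wide stripe keeps overhead near 5%; as loss climbs, stripes shrink and parity grows so that any k-of-n subset of shards still reconstructs the data.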
Tested on London ↔ Sydney (~271ms RTT) over Vultr shared VPS - one of the longest internet routes on Earth, on budget infrastructure.
| Link Loss | Throughput vs Baseline | Status |
|---|---|---|
| 0% | 100% | Baseline |
| 5% | 100% | Perfect recovery |
| 10% | 99% | Near-perfect recovery |
| 20% | 87% | Matches theoretical prediction within 1% |
| 22% | 83% | Graceful degradation |
| 25%+ | FAIL | QUIC control plane limit (see below) |
| Link Loss | Relay Added Loss |
|---|---|
| 1% | 0% |
| 5% | 0% |
| 10% | 0% |
| 20% | 0% |
The relay introduces zero additional packet loss at any tested loss level. Encryption, header routing, and tunnel forwarding add no measurable overhead.
Same two nodes, same link, same loss - relay tunnel vs raw TCP:
| Loss | Relay p95 | Direct TCP p95 | Winner |
|---|---|---|---|
| 0% | 280ms | 271ms | TCP by 9ms |
| 1% | 280ms | 758ms | Relay by 478ms |
| 3% | 280ms | 817ms | Relay by 537ms |
| 5% | 280ms | 1089ms | Relay by 809ms |
At baseline, the relay adds ~9ms (3.5%) for encryption + FEC + UDP tunnelling. But at any non-zero packet loss, the relay delivers dramatically lower tail latency because FEC absorbs loss silently - no TCP retransmit delays.
Relay p95 latency stays flat at ~280ms whether the link shows 0% or 5% loss, while direct TCP p95 climbs steeply as loss increases.
- ~140 Mbps throughput cap: Vultr VPS NIC/bandwidth allocation, not the relay. On bare metal or higher-tier VPS, throughput scales with the NIC.
- 25%+ loss failure: At 25% unidirectional loss over a 273ms RTT, each QUIC round-trip faces ~44% compound loss (1 − (1 − 0.25)² ≈ 44%). No QUIC implementation (Quinn, quiche, msquic) survives this. This is a physical link constraint, not a relay limitation. On shorter routes or better infrastructure, the operational ceiling is higher.
- Real-world context: Internet backbone loss between major cities is typically 0.01–2%. This relay handles that range with zero visible loss. Even 10–20% loss (damaged undersea cable territory) still delivers 87%+ throughput.
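The ~44% figure above is the probability that at least one direction of a round trip drops a packet when each direction loses independently. A quick check of that arithmetic:

```rust
/// Compound probability that a round trip is hit by loss when each
/// direction independently drops packets with probability `p`.
fn round_trip_loss(p: f64) -> f64 {
    1.0 - (1.0 - p) * (1.0 - p)
}

fn main() {
    // 25% unidirectional loss -> ~44% per round trip, the ceiling noted above.
    println!("{:.2}%", 100.0 * round_trip_loss(0.25));
}
```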
Full benchmark methodology and raw data: BENCHMARK-RESULTS.md
- Rust 1.87+ (edition 2024)
- Linux recommended for production (UDP socket optimizations via socket2)
- Builds and tests on Windows, macOS, and Linux
Download from GitHub Releases - available for Linux (amd64/arm64), macOS (amd64/arm64), and Windows.
Install with cargo:

```sh
cargo install entrouter-line
```

Run with Docker:

```sh
docker build -t entrouter-line .
docker run -v ./config.toml:/etc/entrouter/config.toml \
  -p 4433:4433/udp -p 8443:8443 -p 4434:4434/udp -p 9090:9090 \
  entrouter-line
```

Or build from source:

```sh
cargo build --release
```

Copy the example config and edit for your nodes:
```sh
cp config.example.toml config.toml
```

```toml
node_id = "us-east-01"
region = "us-east"

[listen]
relay_addr = "0.0.0.0:4433"
tcp_addr = "0.0.0.0:8443"
quic_addr = "0.0.0.0:4434"
admin_addr = "127.0.0.1:9090"

[[peers]]
node_id = "eu-west-01"
region = "eu-west"
addr = "1.2.3.4:4433"
shared_key = "base64-encoded-32-byte-key"
```

Generate a shared key:
```sh
openssl rand -base64 32
```

Run the node:

```sh
entrouter-line --config config.toml
```

The admin API is available at http://127.0.0.1:9090:

- `GET /health` - liveness check (no auth required)
- `GET /status` - JSON with node ID, region, peer count, active TCP/QUIC flow counts, full latency matrix, and routing paths

Optional bearer token auth for `/status`:

```toml
admin_token = "your-secret-token"
```

```sh
curl -H "Authorization: Bearer your-secret-token" http://127.0.0.1:9090/status
```

```mermaid
flowchart LR
    Client([Client])
    subgraph edge_a["Edge PoP A"]
        QA[QUIC 0-RTT\nTermination]
        TCP_A[TCP Split]
    end
    subgraph relay["Encrypted Relay Mesh"]
        FEC_E[FEC Encode\nReed-Solomon]
        ENC[ChaCha20-Poly1305\nEncrypt]
        UDP((UDP Tunnel))
        DEC[Decrypt]
        FEC_D[FEC Decode\n+ Loss Recovery]
    end
    subgraph edge_b["Edge PoP B"]
        TCP_B[TCP Split]
        QB[QUIC 0-RTT\nTermination]
    end
    Origin([Origin])
    Client --> QA --> TCP_A --> FEC_E --> ENC --> UDP --> DEC --> FEC_D --> TCP_B --> QB --> Origin
    style relay fill:#1a1a2e,stroke:#e94560,stroke-width:2px,color:#eee
    style edge_a fill:#0f3460,stroke:#533483,stroke-width:2px,color:#eee
    style edge_b fill:#0f3460,stroke:#533483,stroke-width:2px,color:#eee
    style UDP fill:#e94560,stroke:#e94560,color:#fff
```
Latency-mesh routing (Dijkstra on EWMA probe data) dynamically selects the fastest path between PoPs - not the BGP default.
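The routing step above is plain Dijkstra over the measured latency matrix. An illustrative std-only sketch (latencies stored as integer microseconds so the priority queue has a total order; node indices and the example matrix are hypothetical, not this project's router):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Dijkstra shortest-path over a latency matrix. `latency_us[u][v]` is the
/// smoothed one-way latency from node u to node v in microseconds, or None
/// if no tunnel exists. Returns (total latency, path of node indices).
fn shortest_path(
    latency_us: &[Vec<Option<u64>>],
    src: usize,
    dst: usize,
) -> Option<(u64, Vec<usize>)> {
    let n = latency_us.len();
    let mut dist = vec![u64::MAX; n];
    let mut prev = vec![usize::MAX; n];
    let mut heap = BinaryHeap::new(); // Reverse(...) makes it a min-heap
    dist[src] = 0;
    heap.push(Reverse((0u64, src)));
    while let Some(Reverse((d, u))) = heap.pop() {
        if d > dist[u] {
            continue; // stale entry
        }
        if u == dst {
            break;
        }
        for (v, w) in latency_us[u].iter().enumerate() {
            if let Some(w) = w {
                let nd = d + w;
                if nd < dist[v] {
                    dist[v] = nd;
                    prev[v] = u;
                    heap.push(Reverse((nd, v)));
                }
            }
        }
    }
    if dist[dst] == u64::MAX {
        return None;
    }
    // Walk predecessors back from dst to src to recover the path.
    let mut path = vec![dst];
    let mut cur = dst;
    while cur != src {
        cur = prev[cur];
        path.push(cur);
    }
    path.reverse();
    Some((dist[dst], path))
}

fn main() {
    // 0 = us-east, 1 = eu-west, 2 = ap-southeast: the direct 0 -> 2 edge
    // (280ms) loses to relaying through 1 (75ms + 160ms = 235ms).
    let m = vec![
        vec![None, Some(75_000), Some(280_000)],
        vec![Some(75_000), None, Some(160_000)],
        vec![Some(280_000), Some(160_000), None],
    ];
    let (cost, path) = shortest_path(&m, 0, 2).unwrap();
    println!("best path {:?} at {} ms", path, cost / 1000);
}
```

Because the weights come from the live EWMA-smoothed probe data rather than BGP policy, a congested direct edge is bypassed automatically once an indirect path measures faster.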
```
src/
├── main.rs              # Entry point
├── config.rs            # Node & peer configuration
├── relay/               # Core relay engine
│   ├── tunnel.rs        # Encrypted UDP tunnels
│   ├── fec.rs           # Adaptive Forward Error Correction
│   ├── forwarder.rs     # Packet forwarding & routing
│   ├── crypto.rs        # ChaCha20-Poly1305 encryption
│   └── wire.rs          # Binary wire protocol
├── mesh/                # Routing mesh
│   ├── probe.rs         # Latency probing
│   ├── router.rs        # Dijkstra shortest-path routing
│   └── latency_matrix.rs
└── edge/                # Edge termination
    ├── tcp_split.rs     # TCP connection splitting
    └── quic_acceptor.rs # QUIC 0-RTT acceptor
```
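tcp_split.rs implements the TCP splitting described earlier. A minimal std-only sketch of the pattern — terminate the client connection at the edge, dial the next hop, and copy bytes both ways. This is an illustration only; the real edge relays through the FEC tunnel rather than a plain TCP upstream:

```rust
use std::io;
use std::net::{TcpListener, TcpStream};
use std::thread;

/// Copy bytes in both directions between a locally terminated client
/// connection and an upstream connection. Local termination means the
/// edge ACKs client data at edge RTT, masking the long-haul RTT.
fn splice(client: TcpStream, upstream: TcpStream) -> io::Result<()> {
    let (mut c_read, mut u_write) = (client.try_clone()?, upstream.try_clone()?);
    let (mut u_read, mut c_write) = (upstream, client);
    let up = thread::spawn(move || {
        let _ = io::copy(&mut c_read, &mut u_write); // client -> upstream
    });
    let _ = io::copy(&mut u_read, &mut c_write); // upstream -> client
    let _ = up.join();
    Ok(())
}

fn main() -> io::Result<()> {
    // Self-contained loopback demo: an echo "origin", a splitting "edge",
    // and one client doing a round trip.
    use std::io::{Read, Write};
    let origin = TcpListener::bind("127.0.0.1:0")?;
    let origin_addr = origin.local_addr()?;
    thread::spawn(move || {
        let (mut s, _) = origin.accept().unwrap();
        let mut buf = [0u8; 4];
        s.read_exact(&mut buf).unwrap();
        s.write_all(&buf).unwrap();
    });
    let edge = TcpListener::bind("127.0.0.1:0")?;
    let edge_addr = edge.local_addr()?;
    thread::spawn(move || {
        let (client, _) = edge.accept().unwrap();
        let upstream = TcpStream::connect(origin_addr).unwrap();
        let _ = splice(client, upstream);
    });
    let mut c = TcpStream::connect(edge_addr)?;
    c.write_all(b"ping")?;
    let mut buf = [0u8; 4];
    c.read_exact(&mut buf)?;
    println!("echoed: {}", String::from_utf8_lossy(&buf));
    Ok(())
}
```

A production edge would use async I/O (and the relay's wire protocol) instead of a thread pair per connection, but the split-and-splice structure is the same.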
The deploy/ directory contains benchmark and test scripts:
| Script | Description |
|---|---|
| `bench_relay_vs_direct.py` | A/B comparison: relay tunnel vs raw TCP (latency + throughput at multiple loss levels). Requires two remote nodes and paramiko. |
| `bench_throughput.py` | Bulk throughput benchmark (localhost) |
| `bench_mtu.py` | MTU discovery benchmark (localhost) |
| `bench_ratelimit.py` | Rate-limited throughput test (localhost) |
| `simple_test.py` | Basic relay connectivity test (localhost) |
| `coord_bench.py` | Coordinated two-node benchmark launcher |
| `netem_bench.py` | Netem-based loss simulation benchmark |
| `sync_bench.py` | Synchronized bidirectional benchmark |
```sh
cargo test
```

Run the full benchmark suite (requires Criterion):

```sh
cargo bench
```

All inter-node traffic is encrypted with ChaCha20-Poly1305. Shared keys are pre-configured per peer - no PKI required for the relay mesh. Optional TLS termination is available at the edge.
See SECURITY.md for vulnerability reporting.
Pull requests welcome. Please run `cargo test` and `cargo clippy` before submitting.
Apache 2.0 - Copyright (c) 2026 Entrouter