## Summary
Extend OpenShell's L7 proxy to inject API credentials at the network layer for arbitrary REST endpoints — the same pattern `inference.local` uses for LLM providers, but generalized to any service in `network_policies`.

Currently, non-inference credentials (Exa AI, Perplexity, YouTube, GitHub, etc.) are injected as environment variables into the sandbox via providers. This means the agent process can read raw API keys from `process.env`, leaving them exposed to prompt injection and malicious skills that could exfiltrate them.

The `inference.local` proxy already proves the architecture works — it strips caller credentials and injects backend credentials at the proxy layer. This proposal extends that pattern to all provider-managed credentials.
## Problem
When running an autonomous agent (e.g., OpenClaw) inside an OpenShell sandbox:
- Agent needs API keys for external services (Exa AI, Perplexity, YouTube Data API, GitHub, etc.)
- Today, these are injected via `openshell provider create` as environment variables
- The agent process has full read access to `process.env.*_API_KEY`
- A prompt injection attack, malicious skill, or compromised dependency can trivially read these values
- The sandbox network policy limits where a leaked key can be sent, but doesn't prevent the agent from reading it

The `inference.local` endpoint already solves this for LLM providers — the agent calls a local proxy, and OpenShell injects the real credentials at the network layer. The agent never sees the key. But this only works for inference, not for arbitrary REST APIs.
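To make the exposure concrete, here is a minimal sketch of the threat. The env var is set inline so the snippet is self-contained (a stand-in value, not a real key); in a real sandbox it would already be present:

```javascript
// Stand-in for a key injected via `openshell provider create`; in a real
// sandbox this value is already in the environment before the agent starts.
process.env.EXA_API_KEY = "stand-in-value";

// Any skill, dependency, or prompt-injected code path can enumerate keys:
const leakedKeys = Object.keys(process.env).filter((name) =>
  name.endsWith("_API_KEY")
);
console.log(leakedKeys); // e.g. [ 'EXA_API_KEY' ], one POST away from exfiltration
```

Nothing here requires elevated privileges; it is ordinary code running in the agent process, which is exactly the problem.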
## Proposed Solution

### Credential injection in network policy endpoints

Add an optional `credential_injection` field to endpoint definitions in `network_policies`:
```yaml
network_policies:
  exa_api:
    name: exa-search-api
    endpoints:
      - host: api.exa.ai
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        credential_injection:
          header: x-api-key
          provider: exa
          credential: EXA_API_KEY
        rules:
          - allow:
              method: POST
              path: /search
          - allow:
              method: POST
              path: /findSimilar
    binaries:
      - path: /usr/bin/node

  perplexity_api:
    name: perplexity-api
    endpoints:
      - host: api.perplexity.ai
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        credential_injection:
          header: Authorization
          value_prefix: "Bearer "
          provider: perplexity
          credential: PERPLEXITY_API_KEY
        rules:
          - allow:
              method: POST
              path: /chat/completions
    binaries:
      - path: /usr/bin/node

  youtube_api:
    name: youtube-data-api
    endpoints:
      - host: www.googleapis.com
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        credential_injection:
          query_param: key
          provider: youtube
          credential: YOUTUBE_API_KEY
        access: read-only
    binaries:
      - path: /usr/bin/node
```

## How it works
- Provider credentials are stored in the OpenShell gateway (outside the sandbox), not injected as env vars
- When the L7 proxy intercepts an outbound request matching an endpoint with `credential_injection`:
  - Strip any existing auth header from the agent's request (prevent spoofing)
  - Look up the credential from the referenced provider
  - Inject it as the specified header (or query parameter)
  - Forward the request to the destination
- The agent makes requests without credentials — they're added transparently by the proxy
- If the agent tries to read `process.env.EXA_API_KEY`, it's not there
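The per-request rewrite can be sketched as a pure function. Names and the request shape are illustrative, not OpenShell's actual API; the `credential_injection` fields mirror the YAML above, and header names are assumed to be pre-normalized to lowercase, as L7 proxies typically do:

```javascript
// Hypothetical proxy-side rewrite: strip agent-supplied auth, then inject the
// real credential according to the endpoint's `credential_injection` config.
function injectCredential(request, injection, secret) {
  const headers = { ...request.headers };
  // Strip anything the agent sent, so it cannot spoof or probe auth.
  delete headers["authorization"];
  delete headers["x-api-key"];

  const query = { ...request.query };
  if (injection.header) {
    // e.g. `x-api-key`, or `Authorization` with value_prefix "Bearer "
    headers[injection.header.toLowerCase()] =
      (injection.value_prefix || "") + secret;
  } else if (injection.query_param) {
    // e.g. YouTube's `?key=...`
    query[injection.query_param] = secret;
  }
  return { ...request, headers, query };
}

const out = injectCredential(
  { method: "POST", path: "/search", headers: { "x-api-key": "agent-junk" }, query: {} },
  { header: "x-api-key" },
  "real-secret"
);
console.log(out.headers["x-api-key"]); // "real-secret"
```

Whatever the agent put in the auth header is discarded before injection, so the sandbox side of the connection never influences which credential is used.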
## Provider creation stays the same

```shell
# Create providers with credentials (stored in gateway, NOT in sandbox env vars)
openshell provider create --name exa --type generic --credential EXA_API_KEY=2c8241f5-...
openshell provider create --name perplexity --type generic --credential PERPLEXITY_API_KEY=pplx-...
openshell provider create --name youtube --type generic --credential YOUTUBE_API_KEY=AIza...

# Create sandbox — providers are attached but credentials are NOT injected as env vars
# when credential_injection is configured in the policy
openshell sandbox create \
  --provider exa \
  --provider perplexity \
  --provider youtube \
  --policy ./my-policy.yaml \
  --from openclaw
```

## Backward compatibility

- If `credential_injection` is not set on an endpoint, behavior is unchanged (env var injection)
- If `credential_injection` is set, the referenced credential is withheld from the sandbox env and only injected at the proxy layer
- Existing providers and policies work without modification
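The withholding rule amounts to a simple set difference, sketched here as a hypothetical gateway-side helper (not actual OpenShell code): a credential becomes a sandbox env var only if no endpoint in the policy claims it for injection.

```javascript
// Hypothetical: compute which provider credentials still become env vars,
// given the backward-compatibility rules above.
function envVarsFor(providerCredentials, policy) {
  const injected = new Set();
  for (const entry of Object.values(policy.network_policies || {})) {
    for (const endpoint of entry.endpoints || []) {
      if (endpoint.credential_injection) {
        injected.add(endpoint.credential_injection.credential);
      }
    }
  }
  // Credentials claimed by the proxy never reach the sandbox environment.
  return Object.fromEntries(
    Object.entries(providerCredentials).filter(([name]) => !injected.has(name))
  );
}

const env = envVarsFor(
  { EXA_API_KEY: "secret-a", LEGACY_API_KEY: "secret-b" },
  { network_policies: { exa_api: { endpoints: [
      { host: "api.exa.ai", credential_injection: { credential: "EXA_API_KEY" } },
    ] } } }
);
console.log(Object.keys(env)); // [ 'LEGACY_API_KEY' ]
```

A provider without any matching `credential_injection` endpoint (like `LEGACY_API_KEY` here) keeps today's env var behavior, which is what makes the change opt-in per endpoint.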
## Injection Types

| Type | Field | Example Use |
|---|---|---|
| Header | `header: x-api-key` | Exa AI, most REST APIs |
| Header with prefix | `header: Authorization` + `value_prefix: "Bearer "` | Perplexity, OpenAI-compatible APIs |
| Query parameter | `query_param: key` | YouTube Data API, Google APIs |
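The three rows reduce to one small rule, shown here as an illustrative helper (field names mirror the table; `authFor` is not a proposed API):

```javascript
// Illustrative: map a `credential_injection` config to the concrete auth
// the proxy would attach in transit.
function authFor(injection, secret) {
  if (injection.header) {
    return {
      carrier: "header",
      name: injection.header,
      value: (injection.value_prefix || "") + secret,
    };
  }
  return { carrier: "query", name: injection.query_param, value: secret };
}

console.log(authFor({ header: "x-api-key" }, "S"));
console.log(authFor({ header: "Authorization", value_prefix: "Bearer " }, "S"));
console.log(authFor({ query_param: "key" }, "S"));
```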
## Security Benefits

- Agent can't read credentials — they don't exist in `process.env` or anywhere on the sandbox filesystem
- Prompt injection resistant — even if an attacker controls the agent's output, they can't extract keys that aren't in the process
- Skills can't leak credentials — malicious ClawHub/community skills have no access to the raw key values
- Audit trail — the proxy logs every credential-injected request, so you know exactly which calls used which keys
- Complements network policy — the policy already controls where the agent can connect; credential injection controls what auth is used, without trusting the agent
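For the audit trail, one possible per-request record looks like this. The shape is purely hypothetical (the proposal does not specify a log format); the key property is that it carries the credential name, never the secret value:

```javascript
// Hypothetical audit record for a credential-injected request.
function auditRecord(request, injection) {
  return {
    ts: new Date().toISOString(),
    host: request.host,
    method: request.method,
    path: request.path,
    provider: injection.provider,
    credential: injection.credential, // name only; the secret is never logged
  };
}

const rec = auditRecord(
  { host: "api.exa.ai", method: "POST", path: "/search" },
  { provider: "exa", credential: "EXA_API_KEY" }
);
console.log(rec.provider, rec.credential); // exa EXA_API_KEY
```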
## Prior Art

- OpenShell `inference.local` — already does this for LLM inference endpoints
- Envoy Gateway credential injection — `HTTPRouteFilter` with `credentialInjection` for header-based auth
- mitmproxy addon pattern — community members already use mitmproxy scripts to intercept and inject credentials for OpenClaw sandboxes
- API Stronghold — commercial tool that injects scoped secrets at runtime outside the agent context
## Impact
This would make OpenShell the first open-source agent sandbox that fully isolates credentials from the agent process for arbitrary APIs — not just inference. Combined with the existing network policy (default deny + L7 enforcement), it creates a complete zero-trust boundary: the agent can only call approved APIs, with approved methods/paths, and never sees the credentials used to authenticate those calls.
## Environment

- OpenShell v0.0.13
- Relevant code: `crates/openshell-sandbox/src/l7/relay.rs` (L7 proxy), `crates/openshell-providers/` (provider system)