Use Anthropic clients (like Claude Code) with Gemini, OpenAI, or direct Anthropic backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini, OpenAI, or Anthropic models themselves (a transparent proxy of sorts), all via LiteLLM. 🌉
- OpenAI API key — for default OpenAI mapping or fallback 🔑
- Google AI Studio (Gemini) API key — only if using Google provider without Vertex auth 🔑
- Google Cloud + Vertex AI — if using Vertex auth (`USE_VERTEX_AUTH=true`): a project with the Vertex AI API enabled, and (for Claude on Vertex) Claude models enabled in Vertex AI Model Garden ☁️
- Python 3.10+ and uv (or use `./setup_env.sh` for a venv)
- Clone this repository:

  ```bash
  git clone https://github.com/1rgs/claude-code-proxy.git
  cd claude-code-proxy
  ```

- Install uv (if you haven't already):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

  (`uv` will handle dependencies based on `pyproject.toml` when you run the server.)
- Configure environment:

  One universal template covers all provider modes (OpenAI, Google Gemini, Google Vertex, Anthropic):

  ```bash
  cp .env.example .env
  ```

  Edit `.env`: set API keys and choose a preset (or set variables manually). Key variables:

  - Provider: `PREFERRED_PROVIDER` — `openai` (default), `google`, or `anthropic`.
  - OpenAI: `OPENAI_API_KEY` (required for default or fallback).
  - Google (Gemini API): `GEMINI_API_KEY` when `PREFERRED_PROVIDER=google` and not using Vertex.
  - Google Vertex: `USE_VERTEX_AUTH=true`, `VERTEX_PROJECT`, `VERTEX_LOCATION`. Authenticate via gcloud (`gcloud auth application-default login`) and leave `VERTEX_CREDENTIALS_PATH` unset, or set `VERTEX_CREDENTIALS_PATH` to a service account JSON key. Use for Gemini or Claude models on Vertex (see Vertex AI setup below).
  - Models: `BIG_MODEL` / `SMALL_MODEL` map `sonnet` / `haiku`; ignored when `PREFERRED_PROVIDER=anthropic`.
  - Anthropic: `ANTHROPIC_API_KEY`, only when proxying directly to Anthropic.

  Mapping: with `openai`, models get the `openai/` prefix; with `google` plus Vertex auth, `vertex_ai/` (Gemini or Claude from Model Garden); with `google` and no Vertex, `gemini/` when using a Gemini API key. See Model mapping and the presets in `.env.example`.
- Run the server:

  From the repo root (uv uses `.venv` by default):

  ```bash
  uv run uvicorn server:app --host 127.0.0.1 --port 8082 --reload
  ```

  (`--reload` is optional, for development.)

  If you see a warning about `VIRTUAL_ENV` not matching `.venv`, you have an old virtualenv activated: run `deactivate`, then run the `uv run` command again.

  If you used `./setup_env.sh` (which creates `.venv`), run `uv run uvicorn ...` from the repo root and uv will use `.venv` with no warning:

  ```bash
  ./setup_env.sh
  uv run uvicorn server:app --host 127.0.0.1 --port 8082 --reload
  ```

  Or activate and run: `source .venv/bin/activate`, then `uvicorn server:app --host 127.0.0.1 --port 8082`.
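As a concrete sketch, a filled-in `.env` for the Google Vertex preset with Claude models (Option A gcloud auth) might look like this — the project ID and model IDs below are placeholders, not values from this repo:

```bash
# Hypothetical .env preset: Google Vertex with Claude models, gcloud ADC auth.
PREFERRED_PROVIDER=google
USE_VERTEX_AUTH=true
VERTEX_PROJECT=my-gcp-project          # placeholder: your project ID
VERTEX_LOCATION=us-central1
BIG_MODEL=claude-sonnet-4-5@20250929
SMALL_MODEL=claude-haiku-4-5@20251001
# VERTEX_CREDENTIALS_PATH left unset: the proxy uses your gcloud identity.
```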
When using `PREFERRED_PROVIDER=google` and `USE_VERTEX_AUTH=true`, you can use Gemini or Claude models on Vertex.
- Google Cloud: Create or select a project and ensure billing is enabled. Enable the Vertex AI API:

  ```bash
  gcloud services enable aiplatform.googleapis.com --project PROJECT_ID
  ```

- Claude on Vertex: In Vertex AI Model Garden, open the Claude model(s) you need and click Enable.
- Authentication — use one of these; you do not need both:
  - Option A — gcloud SDK (no key file): If your Google account has Vertex AI access on the project, log in with Application Default Credentials and set your project:

    ```bash
    gcloud auth application-default login
    gcloud config set project PROJECT_ID
    ```

    In `.env` set only `VERTEX_PROJECT` and `VERTEX_LOCATION`; leave `VERTEX_CREDENTIALS_PATH` unset (and do not set `GOOGLE_APPLICATION_CREDENTIALS`). The proxy will use your gcloud identity.
  - Option B — Service account JSON key: Create a service account with at least `roles/aiplatform.user`, create a JSON key, and set `VERTEX_CREDENTIALS_PATH` in `.env` to that file path (or set `GOOGLE_APPLICATION_CREDENTIALS` externally). Use this for automation or when the machine has no interactive gcloud login.
- Leave `GEMINI_API_KEY` unset when using Vertex.
- Scripts (optional): `./setup_vertex_claude.sh -p PROJECT_ID --create-sa -y` enables the API, creates a service account and key, and writes `.env` (Option B). `./fill_env_from_gcloud.sh` fills `VERTEX_PROJECT` (and `VERTEX_CREDENTIALS_PATH` if a key file exists in the repo).
Vertex troubleshooting:

- 404 model not found — Confirm the exact model ID and that the model is enabled in Model Garden for your project and region.
- Permission denied — With gcloud: ensure your account has Vertex AI access on the project (e.g. Vertex AI User). With a key file: ensure the service account has `roles/aiplatform.user`.
- Location/region error — Set `VERTEX_LOCATION` to a region supported by the model (e.g. `us-central1`).
- Auth error — With gcloud: run `gcloud auth application-default login` and do not set `VERTEX_CREDENTIALS_PATH`. With a key file: ensure `VERTEX_CREDENTIALS_PATH` points to a readable JSON key file.
If using Docker, copy the universal env template into `.env` and edit as above:

```bash
curl -o .env https://raw.githubusercontent.com/1rgs/claude-code-proxy/refs/heads/main/.env.example
```

Then you can either start the container with docker compose (preferred):

```yaml
services:
  proxy:
    image: ghcr.io/1rgs/claude-code-proxy:latest
    restart: unless-stopped
    env_file: .env
    ports:
      - 8082:8082
```

Or with a single command:

```bash
docker run -d --env-file .env -p 8082:8082 ghcr.io/1rgs/claude-code-proxy:latest
```

To run the proxy as a system service (start on boot or at login, restart on failure), see SERVICE.md for systemd (Linux) and launchd (macOS) instructions.
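Whether launched via uv or Docker, the proxy should now accept Anthropic-style Messages requests. The snippet below only validates a sample request body locally; the commented `curl` (the `/v1/messages` path follows the Anthropic Messages API the proxy emulates) sends it to the running server:

```bash
# Build an Anthropic Messages API request body; the "sonnet" alias is remapped by the proxy.
BODY='{"model": "sonnet", "max_tokens": 100, "messages": [{"role": "user", "content": "Hello"}]}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload ok"
# With the proxy running on port 8082:
#   curl -s http://localhost:8082/v1/messages \
#     -H "content-type: application/json" \
#     -H "x-api-key: dummy" \
#     -d "$BODY"
```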
- Install Claude Code (if you haven't already):

  ```bash
  npm install -g @anthropic-ai/claude-code
  ```

- Connect to your proxy:

  ```bash
  ANTHROPIC_BASE_URL=http://localhost:8082 claude
  ```

- That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯
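To avoid prefixing every invocation, you can export the base URL once per shell session (or add the export to your shell profile):

```bash
# Point all Claude Code invocations in this shell at the proxy.
export ANTHROPIC_BASE_URL="http://localhost:8082"
# claude   # subsequent runs now talk to the proxy without the inline variable
```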
The proxy maps Claude client aliases (haiku / sonnet) to the configured backend:
| Claude alias | Default (openai) | Google (Gemini API) | Google Vertex (USE_VERTEX_AUTH=true) |
|---|---|---|---|
| haiku | openai/gpt-4o-mini | gemini/[SMALL_MODEL] | vertex_ai/[SMALL_MODEL] |
| sonnet | openai/gpt-4o | gemini/[BIG_MODEL] | vertex_ai/[BIG_MODEL] |
With Vertex, `BIG_MODEL` / `SMALL_MODEL` can be Gemini or Claude model IDs from Vertex AI Model Garden (e.g. `claude-sonnet-4-5@20250929`). Enable the model in Model Garden for your project and region first.
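The table above can be sketched as a small shell function — illustrative only; the real mapping lives in the proxy's Python code, and the fallback model names shown are the documented OpenAI defaults:

```bash
# Illustrative alias-to-backend mapping, mirroring the table above.
map_model() {  # usage: map_model <alias> <provider> <use_vertex: true|false>
  local alias="$1" provider="$2" use_vertex="$3" target
  case "$alias" in
    sonnet) target="${BIG_MODEL:-gpt-4o}" ;;
    haiku)  target="${SMALL_MODEL:-gpt-4o-mini}" ;;
    *)      target="$alias" ;;   # pass other model names through
  esac
  if [ "$provider" = "google" ] && [ "$use_vertex" = "true" ]; then
    echo "vertex_ai/$target"
  elif [ "$provider" = "google" ]; then
    echo "gemini/$target"
  else
    echo "openai/$target"
  fi
}

map_model sonnet openai false          # -> openai/gpt-4o (with BIG_MODEL unset)
BIG_MODEL=claude-sonnet-4-5@20250929 \
  map_model sonnet google true         # -> vertex_ai/claude-sonnet-4-5@20250929
```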
The following OpenAI models are supported with automatic openai/ prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
The following Gemini models are supported with automatic gemini/ prefix handling (Gemini API key or Vertex):
- gemini-2.5-pro
- gemini-2.5-flash
When `USE_VERTEX_AUTH=true` and `PREFERRED_PROVIDER=google`, the proxy uses the `vertex_ai/` prefix. You can set `BIG_MODEL` / `SMALL_MODEL` to:

- Gemini — same model IDs as above (e.g. `gemini-2.5-pro`).
- Claude — Model Garden IDs (e.g. `claude-sonnet-4-5@20250929`, `claude-haiku-4-5@20251001`). Enable the model in Vertex AI Model Garden for your project and region.
The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- Vertex models get the `vertex_ai/` prefix when `USE_VERTEX_AUTH=true` and `PREFERRED_PROVIDER=google`
- The `BIG_MODEL` and `SMALL_MODEL` prefix depends on the provider/auth mode (`openai/`, `gemini/`, or `vertex_ai/`)

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet will map to `gemini/[model-name]`
- When `USE_VERTEX_AUTH=true`, `BIG_MODEL` / `SMALL_MODEL` map to `vertex_ai/[model-name]`
Set variables in `.env` (or export them). `.env.example` contains one universal template with commented presets; copy it to `.env` and uncomment the block you need:

- OpenAI (default) — set `OPENAI_API_KEY`; optional `BIG_MODEL` / `SMALL_MODEL`.
- Google (Gemini API) — `PREFERRED_PROVIDER=google`, `GEMINI_API_KEY`, optional `BIG_MODEL` / `SMALL_MODEL` (e.g. `gemini-2.5-pro`, `gemini-2.5-flash`).
- Google Vertex (Gemini) — `PREFERRED_PROVIDER=google`, `USE_VERTEX_AUTH=true`, `VERTEX_PROJECT`, `VERTEX_LOCATION`; authenticate with gcloud (`gcloud auth application-default login`) or set `VERTEX_CREDENTIALS_PATH` to a service account key. Then set Gemini model IDs for `BIG_MODEL` / `SMALL_MODEL`.
- Google Vertex (Claude) — same Vertex vars and auth (gcloud or key file); set `BIG_MODEL` / `SMALL_MODEL` to Claude Model Garden IDs (e.g. `claude-sonnet-4-5@20250929`, `claude-haiku-4-5@20251001`). See Google Vertex AI setup.
- Anthropic only — `PREFERRED_PROVIDER=anthropic`, `ANTHROPIC_API_KEY`; `BIG_MODEL` / `SMALL_MODEL` are ignored; haiku/sonnet go straight to Anthropic.
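For instance, the Gemini-API preset could be filled in like this (the key value is a placeholder):

```bash
# Hypothetical .env preset: Gemini via API key (no Vertex).
PREFERRED_PROVIDER=google
GEMINI_API_KEY=your-gemini-key-here   # placeholder
BIG_MODEL=gemini-2.5-pro
SMALL_MODEL=gemini-2.5-flash
```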
This proxy works by:
- Receiving requests in Anthropic's API format 📥
- Translating the requests to OpenAI-style format via LiteLLM 🔄
- Sending the translated request to the configured backend (OpenAI, Gemini, Vertex, or Anthropic) 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
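A streaming request differs only by `"stream": true` in the body; the proxy then emits Anthropic-format server-sent events. The snippet validates the body locally, with the live call left commented (`-N` disables curl's output buffering for SSE):

```bash
# Anthropic-style streaming request body; only "stream": true is new.
STREAM_BODY='{"model": "sonnet", "max_tokens": 50, "stream": true, "messages": [{"role": "user", "content": "Hi"}]}'
echo "$STREAM_BODY" | python3 -m json.tool > /dev/null && echo "stream payload ok"
# With the proxy running:
#   curl -N -s http://localhost:8082/v1/messages \
#     -H "content-type: application/json" -d "$STREAM_BODY"
```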
Contributions are welcome! Please feel free to submit a Pull Request. 🎁
