Cloudflare Containers (Wrangler/Miniflare) local dev failures: container port not found, Monitor failed to find container, and missing cloudflare-dev/* images
Summary
When running a Cloudflare Containers-based Worker locally via wrangler dev (Miniflare) on macOS + Docker, the first requests frequently fail with errors like:
Error checking if container is ready: connect(): Connection refused: container port not found. Make sure you exposed the port in your container definition.
Container error: [Error: Monitor failed to find container]
Uncaught Error: No such image available named cloudflare-dev/<image>:<hash>
We have already ensured the relevant ports are exposed in the image metadata, and we have implemented retries + a dev-only image-tag “seeding” workaround. The issue appears to be a race/consistency bug in the local dev container monitor lifecycle and/or Wrangler’s internal cloudflare-dev/* image tagging / cleanup behavior.
Code reference (shareable)
All references below are either embedded verbatim in this document or link to the public repo. If any links differ from what you see here, treat the embedded snippets in this report as authoritative.
What we are building
We run a “gateway” Worker that routes by URL prefix to container-backed Durable Objects using @cloudflare/containers.
/mysql/* → ZintrustMySqlProxyContainer on port 8789
/postgres/* → ZintrustPostgresProxyContainer on port 8790
/redis/* → ZintrustRedisProxyContainer on port 8791
/mongodb/* → ZintrustMongoDbProxyContainer on port 8792
/sqlserver/* → ZintrustSqlServerProxyContainer on port 8793
/smtp/* → ZintrustSmtpProxyContainer on port 8794
Key implementation details:
- Each DO class extends Container and starts a single container with startAndWaitForPorts().
- A single Docker image is used for all services; the runtime entrypoint selects which proxy to run (proxy:mysql, proxy:redis, etc.) and which port.
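The prefix routing above can be sketched as follows. This is an illustrative helper, not the repo's code: the binding names for mysql/postgres/redis match the embedded Wrangler config, and the remaining binding names are assumed by analogy.

```typescript
// Hypothetical sketch of the gateway's prefix routing table. Binding names for
// mysql/postgres/redis match the embedded Wrangler config; the others are
// assumed by analogy and may differ in the real repo.
interface ProxyRoute {
  prefix: string;  // URL prefix handled by this container
  binding: string; // Durable Object binding name in env
  port: number;    // port the container listens on
}

const ROUTES: ProxyRoute[] = [
  { prefix: '/mysql/', binding: 'ZT_PROXY_MYSQL', port: 8789 },
  { prefix: '/postgres/', binding: 'ZT_PROXY_POSTGRES', port: 8790 },
  { prefix: '/redis/', binding: 'ZT_PROXY_REDIS', port: 8791 },
  { prefix: '/mongodb/', binding: 'ZT_PROXY_MONGODB', port: 8792 },
  { prefix: '/sqlserver/', binding: 'ZT_PROXY_SQLSERVER', port: 8793 },
  { prefix: '/smtp/', binding: 'ZT_PROXY_SMTP', port: 8794 },
];

// Find the route whose prefix matches the request path (exact match without
// the trailing slash also counts, e.g. "/redis").
export const matchRoute = (pathname: string): ProxyRoute | undefined =>
  ROUTES.find(
    (route) => pathname.startsWith(route.prefix) || pathname === route.prefix.slice(0, -1)
  );
```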
Relevant source:
Desired outcome
Local development should be deterministic:
- wrangler dev should not fail on first startup because an internal cloudflare-dev/* image tag is missing.
- The container monitor should reliably discover the container instance and the configured port without repeated 500s.
- A “first request after starting dev server” should succeed (or at least return a clean 503 “starting” response) without crashing the Worker process.
Environment
- OS: macOS
- Docker: Docker Desktop (local daemon)
- Node: >= 20 (project requirement)
- Miniflare (dev dependency): miniflare@4.20260217.0
- Wrangler version: 4.67.0
Wrangler config validation note (containers[].port)
Wrangler 4.67.0 warns that port is an unexpected field under containers:
Unexpected fields found in containers field: "port"
Based on Cloudflare’s published Wrangler configuration schema for containers, port is not a supported key.
In our setup, the port the container listens on is defined in code via the container-enabled Durable Object class (defaultPort) and the startAndWaitForPorts({ ports: <port> }) call.
So, the correct config is to omit containers[].port entirely (this report’s embedded configs reflect that).
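For reference, a minimal containers entry without the unsupported port key might look like this (class name, Dockerfile path, and max_instances follow the embedded config):

```jsonc
{
  "containers": [
    {
      "class_name": "ZintrustMySqlProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
      // No "port" key here: the listen port is defined in code via
      // defaultPort / startAndWaitForPorts() on the Durable Object class.
    },
  ],
}
```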
How to reproduce
1) Start local dev
From the repo root:
npm ci
# Start Containers Worker locally
npx wrangler dev --config wrangler.containers-proxy.dev.jsonc --env staging
Notes:
- If you hit missing internal cloudflare-dev/* image tags (see error below), you can apply the workaround in “Dev-only seeding…” using plain Docker commands.
- This uses the local-dev config (embedded in this document, and also linked above).
- The container image in Wrangler points to a wrapper Dockerfile (./docker/containers-proxy-dev/Dockerfile in the embedded config).
- Some embedded JSONC comments mention npm run dev:cp (our internal convenience wrapper). It is not required for Cloudflare reproduction; the canonical commands are npx wrangler dev ... + docker ... shown in this report.
2) Hit the health endpoints
Using curl:
curl -i http://localhost:8787/health
curl -i http://localhost:8787/mysql/health
curl -i http://localhost:8787/redis/health
Or using the REST Client file:
Actual behavior (observed)
Often on first startup (or first request), the logs show repeated readiness failures:
Error checking if container is ready: connect(): Connection refused: container port not found. Make sure you exposed the port in your container definition.
✘ [ERROR] Uncaught Error: No such image available named cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
[dev:cp] Seeding missing image tag: cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
... repeated "container port not found" ...
✘ [ERROR] Container error: [Error: Monitor failed to find container]
✘ [ERROR] Uncaught Error: Monitor failed to find container
... eventually ...
Port 8789 is ready
Impacts:
- The Worker may crash with “Uncaught Error …” during startup.
- Even when the process stays up, the “first request” can return 500/uncaught failures.
- The monitor may later recover and report Port <n> is ready, but reliability is inconsistent.
Expected behavior
- If the container image is present and EXPOSEs the relevant port, startAndWaitForPorts() should not produce container port not found.
- If the container instance is starting, we expect a clean “not ready yet” state (503 or retry), not an uncaught error.
- Wrangler should not abort because it tries to remove an internal cloudflare-dev/* tag that does not exist.
Evidence that ports are exposed in the image
We explicitly set exposed ports in our Dockerfiles:
- The base runtime image Dockerfile exposes both the app server port and all proxy ports: EXPOSE 7772 8789 8790 8791 8792 8793 8794
- The local dev wrapper image disables the base image healthcheck (HEALTHCHECK NONE) and exposes the proxy ports: EXPOSE 8789 8790 8791 8792 8793 8794
Despite this, the monitor still reports container port not found intermittently.
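As a local sanity check (not part of the repo), the image's exposed-port metadata can be inspected directly with docker image inspect; a hypothetical TypeScript helper:

```typescript
import { execSync } from 'node:child_process';

// Returns the ports exposed in an image's metadata, e.g. ['8789/tcp', ...].
// The pure JSON parsing is separated from the docker call so it is testable.
export const parseExposedPorts = (configJson: string): string[] => {
  const config = JSON.parse(configJson) as { ExposedPorts?: Record<string, unknown> };
  return Object.keys(config.ExposedPorts ?? {});
};

// Shell out to docker and parse the image's Config section.
export const exposedPortsOf = (image: string): string[] =>
  parseExposedPorts(
    execSync(`docker image inspect --format '{{json .Config}}' ${image}`).toString()
  );
```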
Current mitigations / workarounds we implemented
1) Dev-only seeding of missing cloudflare-dev/* tags
Wrangler sometimes fails with:
No such image available named cloudflare-dev/<name>:<hash>
We added a dev wrapper script to detect those messages and run:
docker pull docker.io/zintrust/zintrust:latest
# Example (replace with the exact tag Wrangler prints):
docker tag docker.io/zintrust/zintrust:latest cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
Implementation:
This is a workaround for local dev only. It does not address the underlying “why is Wrangler referencing a tag that doesn’t exist yet?” problem.
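The wrapper's detection logic is roughly the following. This is a simplified sketch with illustrative names (extractMissingTag, seedMissingTag), not the exact script:

```typescript
import { execSync } from 'node:child_process';

// Matches Wrangler's "No such image available named cloudflare-dev/<name>:<hash>" error.
const MISSING_IMAGE_RE = /No such image available named (cloudflare-dev\/\S+)/;
const SOURCE_IMAGE = 'docker.io/zintrust/zintrust:latest';

// Pure helper: extract the missing cloudflare-dev/* tag from a log line, if any.
export const extractMissingTag = (logLine: string): string | null => {
  const match = MISSING_IMAGE_RE.exec(logLine);
  return match ? match[1] : null;
};

// Side-effecting part: re-tag the already-pulled source image under the
// missing name. `docker tag` is idempotent, so re-seeding is harmless.
export const seedMissingTag = (logLine: string): void => {
  const tag = extractMissingTag(logLine);
  if (tag === null) return;
  execSync(`docker tag ${SOURCE_IMAGE} ${tag}`, { stdio: 'inherit' });
};
```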
2) Gateway retries when the container monitor is not ready
When the gateway fetch to the DO stub returns an internal 500 that contains:
- Monitor failed to find container
- container port not found
- Connection refused
…we retry up to 20 times with a short delay, returning a 503 JSON response after max retries.
Implementation: fetchWithContainerRetry() in https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
This reduces first-hit failures, but it still depends on the underlying monitor eventually becoming consistent.
3) Use a lightweight ping endpoint for readiness
We set the Container DO pingEndpoint to a lightweight endpoint (/containerstarthealthcheck) intended to return 200 quickly without depending on downstream DB connectivity.
Implementation: pingEndpoint = 'containerstarthealthcheck' in each DO class.
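For context, the container-side endpoint only needs to answer 200 immediately with no downstream dependencies. A minimal sketch using node:http (illustrative; the real proxy server is different):

```typescript
import { createServer } from 'node:http';

// Pure routing decision, separated from the server so it is easy to test.
// The path matches the pingEndpoint configured on the DO classes.
export const isStartHealthCheck = (url: string | undefined): boolean =>
  url === '/containerstarthealthcheck';

export const server = createServer((req, res) => {
  if (isStartHealthCheck(req.url)) {
    // Answer immediately; never touch the downstream database.
    res.writeHead(200, { 'content-type': 'text/plain' }).end('ok');
    return;
  }
  res.writeHead(404).end(); // real proxy handling omitted in this sketch
});
// server.listen(8789); // in the real image, the entrypoint selects the port
```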
Why we think this is an upstream (Wrangler/Miniflare/monitor) issue
From the symptoms:
- cloudflare-dev/* tags are referenced before they exist
  - Wrangler emits errors trying to use or remove a cloudflare-dev/<name>:<hash> image that is not present.
  - It seems like Wrangler expects the tag to exist but does not guarantee it has been created.
- Readiness checks sometimes claim “port not found” even when EXPOSE is present
  - Our Dockerfiles include the required EXPOSE metadata.
  - Yet the monitor sometimes cannot find the configured port.
  - This suggests either:
    - it is inspecting the wrong image (stale tag/reference),
    - the monitor is reading image metadata before the image exists locally,
    - or there is a race between container creation and the monitor lookup.
- Monitor/container discovery appears racy
  - Monitor failed to find container indicates the monitor lost track of or never observed the container instance it is looking for.
  - In our experience this is most frequent right after startup.
What we’d like Cloudflare to address
A) Make internal image cleanup/tagging robust
- If Wrangler runs docker rmi cloudflare-dev/... and the tag doesn’t exist, treat it as non-fatal (ignore missing images).
- Ensure cloudflare-dev/<name>:<hash> tags are created deterministically before they are referenced.
B) Make container monitor readiness deterministic
- If the container exists but is not ready, return a stable “starting” state (503) instead of throwing uncaught errors.
- Ensure the monitor checks the correct image reference and does not read stale metadata.
C) Improve documentation and/or configuration ergonomics
- Document clearly that ports must be in EXPOSE in the image metadata (Compose/YAML cannot add EXPOSE).
- Consider allowing containers[].image to be a plain image reference (e.g. docker.io/foo/bar:tag) for local dev, not only a Dockerfile path.
Useful artifacts to request when diagnosing
When reproducing internally, it would help to capture:
npx wrangler --version
node --version
uname -a
docker version
docker image ls | grep -E 'cloudflare-dev/zintrust|zintrust/zintrust'
docker ps -a --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'
If you want, we can provide a full log from npx wrangler dev --config wrangler.containers-proxy.dev.jsonc --env staging showing the complete sequence of events.
Real code excerpts (current)
Worker + DO container classes
File: https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
Container DO startup:
import { Container } from '@cloudflare/containers';
const ensureContainerStarted = async (
container: Container,
port: number,
start: { envVars: Record<string, string>; entrypoint: string[] }
): Promise<void> => {
await container.startAndWaitForPorts({
startOptions: {
envVars: start.envVars,
entrypoint: start.entrypoint,
enableInternet: true,
},
ports: port,
});
};
Example DO class:
export class ZintrustMySqlProxyContainer extends Container {
defaultPort = 8789;
sleepAfter = '10m';
// Keep this lightweight: the proxy root path responds quickly (401 without
// signing headers) and does not depend on DB connectivity like /health.
pingEndpoint = 'containerstarthealthcheck';
async fetch(request: Request): Promise<Response> {
const env = getContainerEnv(this);
await ensureContainerStarted(this, 8789, {
envVars: createMySqlProxyEnvVars(env),
entrypoint: createProxyEntrypoint(env, 'proxy:mysql', 8789),
});
return super.fetch(request);
}
}
Gateway retry logic for the startup errors:
const CONTAINER_RETRY_ATTEMPTS = 20;
const CONTAINER_RETRY_DELAY_MS = 500;
const isContainerNotReadyMessage = (value: string): boolean => {
return (
value.includes('Monitor failed to find container') ||
value.includes('container port not found') ||
value.includes('Connection refused')
);
};
const fetchWithContainerRetry = async (
stub: { fetch(request: Request): Promise<Response> },
request: Request,
attempt = 1
): Promise<Response> => {
try {
const response = await stub.fetch(request);
const notReady = await responseIndicatesContainerNotReady(response);
if (!notReady) return response;
if (attempt >= CONTAINER_RETRY_ATTEMPTS) {
return createContainerNotReadyResponse('Container monitor not ready (max retries reached).');
}
Logger.warn('Container not ready; retrying', { attempt, max: CONTAINER_RETRY_ATTEMPTS });
await sleepMs(CONTAINER_RETRY_DELAY_MS);
return fetchWithContainerRetry(stub, request, attempt + 1);
} catch (error) {
if (!errorIndicatesContainerNotReady(error)) throw error;
if (attempt >= CONTAINER_RETRY_ATTEMPTS) {
return createContainerNotReadyResponse(String(error));
}
Logger.warn('Container connection error; retrying', {
attempt,
max: CONTAINER_RETRY_ATTEMPTS,
error: String(error),
});
await sleepMs(CONTAINER_RETRY_DELAY_MS);
return fetchWithContainerRetry(stub, request, attempt + 1);
}
};
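The excerpt above references a few helpers that are not shown. The following are minimal self-contained sketches of their assumed shapes, not the repo's exact implementations:

```typescript
// Assumed shapes of the helpers used by fetchWithContainerRetry (illustrative).
const NOT_READY_PATTERNS = [
  'Monitor failed to find container',
  'container port not found',
  'Connection refused',
];

const textIndicatesNotReady = (value: string): boolean =>
  NOT_READY_PATTERNS.some((pattern) => value.includes(pattern));

export const sleepMs = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Stable 503 JSON response returned once retries are exhausted.
export const createContainerNotReadyResponse = (detail: string): Response =>
  new Response(JSON.stringify({ error: 'container_not_ready', detail }), {
    status: 503,
    headers: { 'content-type': 'application/json', 'retry-after': '1' },
  });

// Treat an internal 500 whose body mentions a known monitor error as "not
// ready". The response is cloned so the body can still be returned upstream.
export const responseIndicatesContainerNotReady = async (
  response: Response
): Promise<boolean> => {
  if (response.status !== 500) return false;
  return textIndicatesNotReady(await response.clone().text());
};

export const errorIndicatesContainerNotReady = (error: unknown): boolean =>
  textIndicatesNotReady(error instanceof Error ? error.message : String(error));
```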
Local dev Wrangler config
File: https://github.com/ZinTrust/zintrust/blob/release/wrangler.containers-proxy.dev.jsonc
Full file: wrangler.containers-proxy.dev.jsonc (verbatim):
{
  "containers": [
    {
      "class_name": "ZintrustMySqlProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustPostgresProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustRedisProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
  ],
  "durable_objects": {
    "bindings": [
      { "name": "ZT_PROXY_MYSQL", "class_name": "ZintrustMySqlProxyContainer" },
      { "name": "ZT_PROXY_POSTGRES", "class_name": "ZintrustPostgresProxyContainer" },
      { "name": "ZT_PROXY_REDIS", "class_name": "ZintrustRedisProxyContainer" },
    ],
  },
}
Full file: wrangler.containers-proxy.jsonc (verbatim)