The recommended setup is the dev shell below — it provides the full toolchain. If you're not using Nix, you need:
- Go 1.26+
- A kcp binary (downloaded automatically by `make kcp`)
- `golangci-lint`, `helm`, `kind`, `docker`, and `pre-commit` on `$PATH`
The repo ships a Nix flake wired up via direnv
(see .envrc). With both installed, the dev shell auto-loads on cd
and provides Go 1.26.2, golangci-lint, gopls, helm, kind, task, the kcp
toolchain, and the rest of the dependencies.
```shell
direnv allow   # one-time, on first entry
# or, without direnv:
nix develop
```

The shell is defined by opendefensecloud/dev-kit via the dev-kit flake input — adding tools project-wide is a PR there, not here.
```shell
pre-commit install   # registers the hooks listed in .pre-commit-config.yaml
```

The configured hooks cover trailing whitespace, YAML/JSON syntax, yamllint,
shellcheck, gofmt, go vet, go mod tidy, golangci-lint (manual stage),
and helm lint.
```
api/v1alpha1/                      Go types for DependencyRule
cmd/
  controller/                      Controller entrypoint
  webhook/                         Webhook server entrypoint
charts/
  dependency-controller/           Helm chart (deploys both controller and webhook)
config/
  crds/                            Generated CRDs (intermediate, from controller-gen)
  kcp/                             Generated APIResourceSchemas + APIExport (from apigen)
docs/                              Documentation
internal/
  controller/
    dependencyrule_controller.go   Reconciler + workspace resolver (VW routing)
    webhook_installer.go           Manages ValidatingWebhookConfigurations
  fieldpath/
    fieldpath.go                   Dot-notation field path resolver
  webhook/
    rule_cache_manager.go          Per-rule indexed cache lifecycle manager
    rule_registry.go               Thread-safe registry of rule caches
    deletion_validator.go          Admission webhook handler
test/
  e2e/                             End-to-end tests (kind + kcp + helm)
  fixtures/                        YAML fixtures for test provider schemas
```
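The `fieldpath` resolver is what extracts reference values (for example, a VM's VPC name) from arbitrary unstructured objects. A minimal sketch of dot-notation resolution over nested maps might look like this (hypothetical code, not the actual `internal/fieldpath` implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// resolve walks a dot-notation path (e.g. "spec.vpcRef.name") through
// nested map[string]interface{} data, returning the value and whether
// every segment of the path existed. Sketch only; the real resolver
// lives in internal/fieldpath/fieldpath.go.
func resolve(obj map[string]interface{}, path string) (interface{}, bool) {
	var cur interface{} = obj
	for _, seg := range strings.Split(path, ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false // hit a leaf before the path ended
		}
		cur, ok = m[seg]
		if !ok {
			return nil, false // segment missing
		}
	}
	return cur, true
}

func main() {
	vm := map[string]interface{}{
		"spec": map[string]interface{}{
			"vpcRef": map[string]interface{}{"name": "vpc-1"},
		},
	}
	v, ok := resolve(vm, "spec.vpcRef.name")
	fmt.Println(v, ok) // prints: vpc-1 true
}
```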
```shell
make generate    # Generate deepcopy methods (controller-gen object)
make manifests   # Generate CRDs -> APIResourceSchemas + APIExport
```

`make manifests` runs two stages:

- `controller-gen crd` generates standard Kubernetes CRDs into `config/crds/`
- `apigen` (from `github.com/kcp-dev/sdk`) converts the CRDs into kcp `APIResourceSchema` and `APIExport` manifests in `config/kcp/`

The apigen tool preserves schema names across regenerations when the spec
hasn't changed, avoiding unnecessary churn.
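The second stage emits manifests shaped roughly like the fragment below (the group, hash prefix, and export name are illustrative; the real files are in `config/kcp/`). The `metadata.name` prefix is what apigen keeps stable while the spec is unchanged:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIResourceSchema
metadata:
  name: v240101-a1b2c3d4.dependencyrules.example.io  # prefix preserved across regenerations
spec:
  group: example.io
  names:
    kind: DependencyRule
    plural: dependencyrules
  # versions/schema omitted for brevity
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: dependency-controller  # illustrative name
spec:
  latestResourceSchemas:
    - v240101-a1b2c3d4.dependencyrules.example.io
```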
```shell
make build           # Build both binaries to bin/
make docker-build    # Build Docker image
make helm-package    # Package Helm chart
make run-controller  # Run the controller from source
make run-webhook     # Run the webhook server from source
make test            # Unit + integration tests (requires kcp binary, excludes e2e)
make test-e2e        # E2E tests (requires kind, helm, docker) — uses the active shard config
make test-e2e-matrix # Run e2e tests against both shard configs sequentially
make clean-e2e       # Remove kind cluster from e2e tests
```

`make test-e2e` honors the E2E_SHARD_CONFIG environment variable
(single-shard or multi-shard, default multi-shard); see the E2E Tests
section for what each config exercises. `make test-e2e-matrix` runs both back-to-back
with a kind cleanup between runs.
Tool paths can be overridden via KIND, KUBECTL, HELM, DOCKER env vars
(fallback: PATH lookup). Set E2E_SKIP_CLEANUP=1 to keep the kind cluster
running after the suite for inspection.
```shell
make fmt  # Add license headers, format code, run lint --fix
make lint # Run golangci-lint
make vet  # Run go vet
```

The integration tests (`internal/controller/integration_test.go`) use
kcp envtest
to spin up a real kcp instance in-process and create a 5-workspace topology:
- dep-ctrl -- hosts the DependencyRule APIExport
- network-provider -- exports VPCs
- compute-provider -- exports VirtualMachines, creates a DependencyRule
- consumer1 -- binds to all three, where test resources are created
- consumer2 -- binds to all three, used to verify no cross-workspace leakage
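For orientation, the rule created in compute-provider expresses "VirtualMachines reference VPCs via a field path". The actual schema is defined in `api/v1alpha1/`; the sketch below is purely illustrative, and every field name beyond `fieldRef.path` is a guess:

```yaml
# Illustrative sketch, not the real DependencyRule schema (see api/v1alpha1/).
apiVersion: dependencies.example.io/v1alpha1  # guessed group
kind: DependencyRule
metadata:
  name: vm-requires-vpc
spec:
  dependent:                 # the resource that holds the reference
    group: compute.example
    resource: virtualmachines
  dependency:                # the resource protected from deletion
    group: network.example
    resource: vpcs
  fieldRef:
    path: spec.vpcRef.name   # dot-notation path resolved by internal/fieldpath
```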
The test starts both the controller reconciler (with WebhookInstaller) and the
webhook's RuleCacheManager on the same multicluster manager, plus an HTTPS
webhook server with a self-signed CA. It verifies the full lifecycle:
- Webhook blocks VPC deletion while VMs reference it
- Consumer2's VPC is unaffected (cross-workspace isolation)
- Webhook allows VPC deletion after the VM is deleted
- Webhook removal when the DependencyRule is deleted
The e2e tests (test/e2e/) run against a real kind cluster with a multi-shard
kcp instance deployed via the
kcp-operator. The test suite:
- Creates a kind cluster with a NodePort for the kcp front-proxy
- Installs cert-manager and the kcp-operator Helm chart
- Deploys two etcd instances and creates a `RootShard`, `Shard` (shard1), and `FrontProxy` via kcp-operator CRs
- Generates admin and component kubeconfigs via kcp-operator `Kubeconfig` CRs (using `rootShardRef` so certs are trusted by both front-proxy and shards)
- Bootstraps RBAC (see below)
- Builds the Docker image, loads it into kind, and deploys via Helm
- Exercises the full system including TLS webhook dispatch through kcp's admission pipeline
The five test workspaces (dep-ctrl, network-provider, compute-provider,
consumer1, consumer2) are pinned deterministically to either root or
shard1 via spec.location.selector. Both shards carry an e2e-target=<name>
label. After workspace readiness, verifyShardPlacements reads each
workspace's internal.tenancy.kcp.io/shard annotation and asserts that
selectors weren't ignored.
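In manifest form, pinning one of these workspaces to a labeled shard looks roughly like this (a sketch; the label value comes from this suite, and the `matchLabels` selector form is an assumption about the kcp Workspace schema):

```yaml
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: consumer2
spec:
  location:
    selector:
      matchLabels:
        e2e-target: shard1  # both shards carry an e2e-target=<name> label
```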
Selection is driven by the E2E_SHARD_CONFIG env var:
| Config | Purpose | Placements |
|---|---|---|
| `single-shard` | sanity / single-shard fast paths | all five workspaces on root |
| `multi-shard` (default) | exercises every cross-shard path | compute-provider and consumer2 on shard1; dep-ctrl, network-provider, consumer1 on root |
Together, the two configs exercise: same-shard fast paths (single-shard);
controller-VW cross-shard webhook installation, provider ↔ consumer
cross-shard binding, and webhook ↔ consumer cross-shard query (multi-shard).
Run make test-e2e-matrix to execute both sequentially.
Three RBAC bundles are applied during setup:

- system:admin per shard (`test/fixtures/system-admin-rbac-bootstrap.yaml`): ClusterRole + ClusterRoleBinding granting the webhook SA `*/*` get/list. Applied via direct (port-forwarded) connection to each shard, using a `system:masters` kubeconfig generated from a kcp-operator `Kubeconfig` CR with `rootShardRef`/`shardRef`. kcp's `BootstrapPolicyAuthorizer` reads this binding from the local shard only — bindings do not propagate across shards — so the fixture is applied once per shard.
- Root workspace (`test/fixtures/root-rbac-bootstrap.yaml`): controller-only rules (workspaces/content access + tenancy.kcp.io/workspaces read), applied via the front-proxy.
- Dep-ctrl workspace (`test/fixtures/depctrl-rbac-bootstrap.yaml`): apiexportendpointslices read + dep-ctrl APIExport content access for both the controller and webhook SAs.
The webhook installation in provider workspaces is authorized by the
validatingwebhookconfigurations permissionClaim on the dep-ctrl APIExport,
not by RBAC.
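Provider workspaces grant that claim when they bind to the APIExport. An accepting binding might look roughly like this (the export path/name and the `all` claim-selector form are assumptions; check the APIBinding schema for the kcp version in use):

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: dep-ctrl
spec:
  reference:
    export:
      path: root:dep-ctrl          # illustrative workspace path
      name: dependency-controller  # illustrative export name
  permissionClaims:
    - group: admissionregistration.k8s.io
      resource: validatingwebhookconfigurations
      all: true        # claim-selector form varies by kcp version
      state: Accepted  # the provider explicitly accepts the claim
```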
test/e2e/dependency_test.go is an Ordered Describe covering:
- Initial install (DependencyRule → ValidatingWebhookConfiguration appears)
- Block / unblock cycles with a single dependent and with multiple dependents
- Cross-shard isolation (`consumer2` with no VMs)
- Cross-shard protection (`consumer2` with a VM, on `shard1` under multi-shard)
- `skip-protection` annotation
- DependencyRule deletion → webhook removal
- DependencyRule recreation → protection restored
- In-place rule update: patches `fieldRef.path` on a live rule and verifies the webhook re-evaluates without a recreate cycle (proves `WebhookInstaller.reconcileWorkspaceWebhook`'s update branch and the webhook's `RuleCacheManager` re-indexing both work end-to-end)
Test fixtures are loaded from YAML files rather than constructed inline:
- `config/kcp/` -- the generated dep-ctrl APIResourceSchemas and APIExport (same files used for real deployment)
- `test/fixtures/` -- test provider schemas (VPC, VirtualMachine), APIExports, RBAC bundles, and per-test VPC/VM resources
See docs/getting-started.md for the full step-by-step deployment guide using kcp-operator. The guide covers kcp-operator setup, multi-shard configuration, bootstrap RBAC, kubeconfig generation via kcp-operator Kubeconfig CRs, and Helm deployment.
- Deploy kcp via kcp-operator (RootShard, FrontProxy, optional additional Shards)
- Create the dep-ctrl workspace and apply `config/kcp/` schemas
- Apply bootstrap RBAC: per-shard `system:admin` (webhook get/list), root workspace (controller), dep-ctrl workspace (APIExport access for both)
- Generate component kubeconfigs via kcp-operator `Kubeconfig` CRs (use `rootShardRef`)
- Deploy with Helm
- Providers bind to the dep-ctrl APIExport (accepting permissionClaims) and create DependencyRules