diff --git a/docs/getstarted/platform-tutorial.md b/docs/getstarted/platform-tutorial.md
new file mode 100644
index 00000000..74d0957c
--- /dev/null
+++ b/docs/getstarted/platform-tutorial.md
@@ -0,0 +1,1054 @@
---
title: Build your first platform with Upbound Crossplane
description: Deploy a real app with a cloud database, observe drift detection, enforce policies, and change infrastructure live, all from a single control plane.
weight: 5
validation:
  type: walkthrough
  owner: docs@upbound.io
  environment: local-upbound
  timeout: 30m
---

In this tutorial, you deploy an application with a PostgreSQL database on AWS.
You use Upbound Crossplane to manage resources, enforce security policy, and
change infrastructure.

By the end of this tutorial, you can:

- Deploy a composite resource that creates multiple AWS resources from a single manifest
- Explore the providers and ProviderConfigs that connect your platform to AWS
- Trigger drift detection and watch Crossplane correct an out-of-band change
- Block non-compliant requests with Kyverno before they reach Crossplane
- Update live infrastructure by changing desired state

## Prerequisites

Install the following tools before starting:

- [`kubectl`][kubectl-install]
- [AWS CLI][aws-cli], configured with credentials for an account where you can create VPCs, IAM roles, and RDS instances
- [kind][kind]
- [`up CLI`][up-cli] v0.44.3 or later

## Create the project

Scaffold a new project with `up project init`. This creates the `app-w-db/`
directory with a valid `upbound.yaml` and the standard project layout
(`apis/`, `functions/`, `examples/`, `tests/`):

```bash
up project init --scratch app-w-db
cd app-w-db
```

All commands from this point run from inside the `app-w-db` directory.

The platform composes AWS resources and uses `function-auto-ready` so composite
resources report ready status. 
Add them as project dependencies: + +```bash +up dependency add 'xpkg.upbound.io/upbound/provider-family-aws:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-iam:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-rds:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-ec2:v2.4.0' +up dependency add 'xpkg.upbound.io/crossplane-contrib/function-auto-ready:v0.6.1' +``` + +`up dependency add` records each dependency in `upbound.yaml`. + +The platform exposes two APIs: `AppWDB` (a basic app with a database) and +`AppWDBSecure` (the same API with an optional security context, used later for +policy enforcement). + +Create the `AppWDB` XRD: + +```bash +mkdir -p apis/appwdb +cat > apis/appwdb/definition.yaml <<'EOF' +apiVersion: apiextensions.crossplane.io/v2 +kind: CompositeResourceDefinition +metadata: + name: appwdbs.demo.upbound.io +spec: + group: demo.upbound.io + names: + categories: + - crossplane + kind: AppWDB + plural: appwdbs + scope: Namespaced + versions: + - name: v1alpha1 + referenceable: true + schema: + openAPIV3Schema: + description: AppWDB is the Schema for the AppWDB API. + properties: + spec: + description: AppWDBSpec defines the desired state of AppWDB. + type: object + properties: + parameters: + type: object + description: AppWDB configuration parameters + properties: + replicas: + type: integer + default: 2 + description: Number of app replicas + dbSize: + type: string + default: db.t3.micro + enum: + - db.t3.micro + - db.t3.small + - db.t3.medium + description: RDS instance class + region: + type: string + default: us-east-1 + description: AWS region + required: + - parameters + status: + description: AppWDBStatus defines the observed state of AppWDB. 
+ type: object + required: + - spec + type: object + served: true +EOF +``` + +Create the `AppWDBSecure` XRD: + +```bash +mkdir -p apis/appwdbsecure +cat > apis/appwdbsecure/definition.yaml <<'EOF' +apiVersion: apiextensions.crossplane.io/v2 +kind: CompositeResourceDefinition +metadata: + name: appwdbsecures.demo.upbound.io +spec: + group: demo.upbound.io + names: + categories: + - crossplane + kind: AppWDBSecure + plural: appwdbsecures + scope: Namespaced + versions: + - name: v1alpha1 + referenceable: true + schema: + openAPIV3Schema: + description: AppWDBSecure is the Schema for the AppWDBSecure API. + properties: + spec: + description: AppWDBSecureSpec defines the desired state of AppWDBSecure. + type: object + properties: + parameters: + type: object + description: AppWDBSecure configuration parameters + properties: + replicas: + type: integer + default: 2 + description: Number of app replicas + dbSize: + type: string + default: db.t3.micro + enum: + - db.t3.micro + - db.t3.small + - db.t3.medium + description: RDS instance class + region: + type: string + default: us-east-1 + description: AWS region + securityContext: + type: object + description: Optional security context for the application container + properties: + privileged: + type: boolean + description: Run container as privileged. Blocked by platform policy. + required: + - parameters + status: + description: AppWDBSecureStatus defines the observed state of AppWDBSecure. + type: object + required: + - spec + type: object + served: true +EOF +``` + +The composition function is a KCL program that maps the user's 10-line request +to the full set of AWS resources. + +```bash +mkdir -p functions/compose-resources +cat > functions/compose-resources/kcl.mod <<'EOF' +[package] +name = "compose-resources" +version = "0.1.0" +EOF +``` + +Create `main.k`. This file is the entire composition logic. 
It reads the +composite resource and outputs every managed resource Crossplane creates: + +```bash +cat > functions/compose-resources/main.k <<'EOF' +oxr = option("params").oxr +ocds = option("params").ocds + +params = oxr.spec.parameters +appName = oxr.metadata.name +region = params.region or "us-east-1" +dbSize = params.dbSize or "db.t3.micro" +replicas = params.replicas or 2 + +_is_deleting = bool(oxr.metadata?.deletionTimestamp) +_db_key = "${appName}-db" +_instance_still_exists = _db_key in ocds + +_metadata = lambda name: str -> any { + { + namespace: oxr.metadata.namespace + annotations: {"krm.kcl.dev/composition-resource-name": name} + } +} + +_defaults = { + managementPolicies: ["*"] + providerConfigRef: {kind: "ProviderConfig", name: "default"} +} + +_subnets = [ + {cidrBlock: "10.0.1.0/24", availabilityZone: "${region}a", suffix: "a"} + {cidrBlock: "10.0.2.0/24", availabilityZone: "${region}b", suffix: "b"} + {cidrBlock: "10.0.3.0/24", availabilityZone: "${region}c", suffix: "c"} +] + +_sg_items = [{ + apiVersion: "rds.aws.m.upbound.io/v1beta1" + kind: "SubnetGroup" + metadata: _metadata("${appName}-subnet-group") | {name: "${appName}-subnet-group"} + spec: _defaults | { + forProvider: { + region: region + description: "${appName} DB subnet group" + subnetIdSelector: {matchControllerRef: True} + } + } +}] if not _is_deleting or _instance_still_exists else [] + +_db_items = [{ + apiVersion: "rds.aws.m.upbound.io/v1beta1" + kind: "Instance" + metadata: _metadata("${appName}-db") | { + name: "${appName}-db" + annotations: {"crossplane.io/external-name": "${appName}-db"} + } + spec: _defaults | { + forProvider: { + region: region + identifier: "${appName}-db" + engine: "postgres" + engineVersion: "16.6" + instanceClass: dbSize + username: "demoadmin" + dbName: "appdb" + autoGeneratePassword: True + passwordSecretRef: {namespace: oxr.metadata.namespace, name: "${appName}-db-password", key: "password"} + applyImmediately: True + skipFinalSnapshot: True + 
allocatedStorage: 20 + storageType: "gp3" + storageEncrypted: False + publiclyAccessible: False + backupRetentionPeriod: 0 + dbSubnetGroupNameSelector: {matchControllerRef: True} + } + initProvider: {identifier: "${appName}-db"} + } +}] if not _is_deleting else [] + +_items = [ + { + apiVersion: "ec2.aws.m.upbound.io/v1beta1" + kind: "VPC" + metadata: _metadata("${appName}-vpc") | {name: "${appName}-vpc"} + spec: _defaults | { + forProvider: { + region: region + cidrBlock: "10.0.0.0/16" + enableDnsHostnames: True + enableDnsSupport: True + tags: {"Name": "${appName}-vpc"} + } + } + } +] + [ + { + apiVersion: "ec2.aws.m.upbound.io/v1beta1" + kind: "Subnet" + metadata: _metadata("${appName}-subnet-${s.suffix}") | {name: "${appName}-subnet-${s.suffix}"} + spec: _defaults | { + forProvider: { + region: region + cidrBlock: s.cidrBlock + availabilityZone: s.availabilityZone + vpcIdSelector: {matchControllerRef: True} + tags: {"Name": "${appName}-subnet-${s.suffix}"} + } + } + } for s in _subnets +] + _sg_items + _db_items + [ + { + apiVersion: "iam.aws.m.upbound.io/v1beta1" + kind: "Role" + metadata: _metadata("${appName}-role") | {name: "${appName}-role"} + spec: _defaults | { + forProvider: { + assumeRolePolicy: '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}' + } + } + } + { + apiVersion: "apps/v1" + kind: "Deployment" + metadata: _metadata("${appName}-deployment") | {name: appName} + spec: { + replicas: replicas + selector: {matchLabels: {app: appName}} + template: { + metadata: {labels: {app: appName}} + spec: { + containers: [ + { + name: "app" + image: "public.ecr.aws/nginx/nginx:stable-alpine" + ports: [{containerPort: 80}] + } | ({securityContext: {privileged: params.securityContext.privileged}} if params?.securityContext?.privileged != None else {}) + ] + } + } + } + } +] + +items = _items +EOF +``` + +Create the base example and the variants used in later steps: + +```bash +mkdir 
-p examples/appwdb +cat > examples/appwdb/example.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 +EOF + +cat > examples/appwdb/variant-bigger-db.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.medium + region: us-east-1 +EOF + +cat > examples/appwdb/variant-more-replicas.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 5 + dbSize: db.t3.micro + region: us-east-1 +EOF +``` + +Create the secure examples used in the policy enforcement step: + +```bash +mkdir -p examples/appwdbsecure +cat > examples/appwdbsecure/example-1.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDBSecure +metadata: + name: kyverno-demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 + securityContext: + privileged: true +EOF + +cat > examples/appwdbsecure/example-2.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDBSecure +metadata: + name: kyverno-demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 + securityContext: + privileged: false +EOF +``` + +The `ProviderConfig` tells the AWS providers where to find credentials. + +```bash +mkdir -p setup/config +cat > setup/config/aws-provider-config.yaml <<'EOF' +apiVersion: aws.m.upbound.io/v1beta1 +kind: ProviderConfig +metadata: + name: default + namespace: demo +spec: + credentials: + source: Secret + secretRef: + namespace: demo + name: aws-secret + key: creds +EOF +``` + +## Configure AWS credentials + +The demo creates real AWS resources. 
Create a file named `aws-credentials.txt` +in the project directory with credentials that have permissions to create VPCs, +subnets, IAM roles, and RDS instances: + +```ini +[default] +aws_access_key_id = +aws_secret_access_key = +``` + +:::warning +This tutorial uses static AWS credentials for convenience. Don't use static +credentials in production. Use IAM roles, IRSA, or another short-lived +credential mechanism instead. See [AWS authentication][aws-auth-docs] for +secure alternatives. +::: + +## Start the project + +Open a dedicated terminal window and run from inside `app-w-db`: + +```bash +up project run --local --ingress +``` + +This command: + +- Creates a kind cluster named `up-app-w-db` +- Installs UXP into the cluster +- Builds and deploys the KCL composition function +- Installs the AWS providers declared in `upbound.yaml` +- Applies the XRDs from `apis/` +- Installs an ingress controller for the UXP console + +Startup takes several minutes. Keep this terminal open throughout the tutorial. + +:::warning +`up project run --local` may print `traces export: context deadline exceeded`. +This message reports a telemetry timeout and doesn't affect the cluster setup. +::: + +Verify the connection: + +```bash +kubectl get nodes +``` + +Apply your AWS credentials so providers can authenticate: + +1. Create the `demo` namespace: + + ```bash + kubectl create namespace demo + ``` + +2. Create a secret with your AWS credentials: + + ```bash + kubectl create secret generic aws-secret \ + -n demo \ + --from-file=creds=./aws-credentials.txt + ``` + +Check that all four providers report healthy: + +```bash +kubectl get providers +``` + +Wait until all four providers show `HEALTHY: True` before continuing. + +:::warning +If this returns **No resources found**, `up project run --local` didn't +complete. Delete the cluster with `kind delete cluster --name up-app-w-db` and +restart. 
+::: + +Check that the composition function is healthy: + +```bash +kubectl get functions +``` + +The KCL function should show `HEALTHY: True`. + +:::warning +If this returns **No resources found**, the KCL function wasn't built or +deployed. Check the `up project run` terminal and restart. +::: + +Capture the function name assigned by `up project run`: + +```bash +FUNC_NAME=$(kubectl get functions --no-headers | grep -v 'crossplane-contrib' | awk '{print $1}') +echo $FUNC_NAME +``` + +Apply both Compositions using that name: + +```bash +cat > apis/appwdb/composition.yaml < apis/appwdbsecure/composition.yaml < w-kyverno/addon-kyverno.yaml <<'EOF' + apiVersion: pkg.upbound.io/v1beta1 + kind: AddOn + metadata: + name: upbound-addon-kyverno + spec: + package: xpkg.upbound.io/upbound/addon-kyverno:3.7.0 + EOF + ``` + +2. Apply it: + + ```bash + kubectl apply -f w-kyverno/addon-kyverno.yaml + ``` + +3. In the UXP console, select **AddOns** in the left navigation. The + `upbound-addon-kyverno` entry appears and becomes healthy in about two + minutes. Or watch from the terminal: + + ```bash + kubectl get addons.pkg.upbound.io upbound-addon-kyverno -w + ``` + + Wait until `HEALTHY: True` before continuing. Press Ctrl+C when it does. + + If it stays `HEALTHY: False` after 5 minutes, check + `kubectl describe addons.pkg.upbound.io upbound-addon-kyverno` for events. + +4. Create the no-privileged-containers policy: + + ```bash + cat > w-kyverno/policy-no-privileged.yaml <<'EOF' + apiVersion: kyverno.io/v1 + kind: ClusterPolicy + metadata: + name: disallow-privileged-containers + annotations: + policies.kyverno.io/title: Disallow Privileged Containers + policies.kyverno.io/category: Pod Security + policies.kyverno.io/severity: high + policies.kyverno.io/description: >- + Privileged containers have unrestricted access to the host system. 
+ This policy blocks any AppWDBSecure request with securityContext.privileged: true + before Crossplane composes any resources, so nothing reaches AWS. + spec: + validationFailureAction: Enforce + background: false + rules: + - name: no-privileged-platform-api + match: + any: + - resources: + kinds: + - AppWDBSecure + validate: + message: "Privileged containers are not allowed on this platform. Remove securityContext.privileged: true from your request." + pattern: + spec: + parameters: + =(securityContext): + =(privileged): "false" + - name: no-privileged-deployment + match: + any: + - resources: + kinds: + - Deployment + validate: + message: "Privileged containers are not allowed on this platform. Remove securityContext.privileged: true from your request." + pattern: + spec: + template: + spec: + containers: + - =(securityContext): + =(privileged): "false" + EOF + ``` + +5. Apply the policy: + + ```bash + kubectl apply -f w-kyverno/policy-no-privileged.yaml + ``` + + You may see this warning: + + ``` + Warning: the kind defined in the all match resource is invalid: unable to convert GVK to GVR for kinds AppWDBSecure + ``` + + You can ignore this warning if Crossplane recently created the XRDs. Once + the CRD is ready, the policy enforces. + +6. Verify the policy is active: + + ```bash + kubectl get clusterpolicy disallow-privileged-containers + ``` + + `READY: True` means the policy is enforcing. + +Now confirm the policy blocks a privileged request and accepts a compliant one. + +:::warning +Kyverno can only check requests for resource types whose CRDs already exist in +the cluster. If you see `no matches for kind "AppWDBSecure"`, the XRD isn't +ready yet. Confirm `kubectl get xrds` shows both XRDs as `ESTABLISHED: True`. +::: + +1. Try to apply a request with `privileged: true`: + + ```bash + kubectl apply -f examples/appwdbsecure/example-1.yaml + ``` + + Kyverno blocks the request immediately. The error references + `disallow-privileged-containers`. 
Crossplane never sees the request, so + nothing reaches AWS. + + `demo-01`, which you deployed before adding Kyverno, still has a running + RDS instance. This request didn't start one. + +Now try the same request with `privileged: false`: + +1. Apply the compliant version: + + ```bash + kubectl apply -f examples/appwdbsecure/example-2.yaml + ``` + + The request passes the policy check and starts provisioning (~10 minutes). + +2. Watch the status: + + ```bash + kubectl get appwdbsecure -n demo -w + ``` + +## Change it live + +To change infrastructure, update the desired state. Crossplane figures out +what needs to change and does it. Try scaling the database first, then the +replicas. + +1. Scale the database by applying the larger-db variant: + + ```bash + kubectl apply -f examples/appwdb/variant-bigger-db.yaml + ``` + +2. `DESIRED` updates immediately; `ACTUAL` updates once AWS finishes (~5 minutes): + + ```bash + kubectl get instances.rds.aws.m.upbound.io demo-01-db -n demo -w \ + -o custom-columns='NAME:.metadata.name,DESIRED:.spec.forProvider.instanceClass,ACTUAL:.status.atProvider.instanceClass,SYNCED:.status.conditions[?(@.type=="Synced")].reason' + ``` + +3. In the AWS Console, check the **Status** and **Size** columns for `demo-01-db`. + +4. Confirm the change: + + ```bash + kubectl get appwdb demo-01 -n demo + ``` + +5. Scale the app replicas by applying the more-replicas variant: + + ```bash + kubectl apply -f examples/appwdb/variant-more-replicas.yaml + ``` + +6. Watch the `Deployment` scale (~30 seconds): + + ```bash + kubectl get deployment demo-01 -n demo -w \ + -o custom-columns='NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas' + ``` + +7. Confirm the change: + + ```bash + kubectl get appwdb demo-01 -n demo + ``` + +In the UXP console, navigate to `demo-01` to see the full resource tree with +your updated values. + +## Clean up + +Delete the composite resources. 
Crossplane deletes all composed AWS resources
before removing each composite resource.

```shell
kubectl delete appwdbsecure kyverno-demo-01 -n demo
kubectl delete appwdb demo-01 -n demo
```

RDS deletion takes 5 to 10 minutes. Wait until both are fully removed:

```shell
kubectl get appwdb -n demo -w
kubectl get appwdbsecure -n demo -w
```

Delete the cluster:

```shell
kind delete cluster --name up-app-w-db
```

## Next steps

In this tutorial, you:

- Created a Crossplane project with XRDs, Compositions, and a KCL function
- Deployed a composite resource that created a VPC, subnets, IAM role, RDS
  instance, and Kubernetes `Deployment` from a 10-line manifest
- Explored the providers and ProviderConfigs that connected your platform to AWS
- Watched Crossplane detect and correct an out-of-band change to a VPC tag
- Blocked a privileged container request with Kyverno before it reached Crossplane
- Updated live infrastructure by changing desired state

Continue with:

- [Composite Resource Definitions][xrd-concept]: design your own platform APIs
- [Composition functions][fn-docs]: write the logic that maps user requests to resources
- [Provider authentication][auth-docs]: connect providers to your own cloud account
- [Upbound Marketplace][marketplace]: providers and add-ons for AWS, Azure, GCP, and more

[up-cli]: /manuals/cli/overview/
[kubectl-install]: https://kubernetes.io/docs/tasks/tools/
[up-cli-releases]: https://github.com/upbound/up/releases
[aws-cli]: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation
[fn-go]: /manuals/cli/howtos/compositions/go/
[fn-python]: /manuals/cli/howtos/compositions/python/
[fn-go-template]: /manuals/cli/howtos/compositions/go-template/
[xrd-concept]: /manuals/uxp/concepts/composition/composite-resource-definitions/
[fn-docs]: /manuals/uxp/concepts/composition/overview/
[auth-docs]: 
/manuals/packages/providers/authentication/
[aws-auth-docs]: /manuals/packages/providers/authentication/#aws-authentication
[marketplace]: https://marketplace.upbound.io/

diff --git a/docs/guides/intelligent-control-planes/ai-controller-tutorial.md b/docs/guides/intelligent-control-planes/ai-controller-tutorial.md
new file mode 100644
index 00000000..bc1a6a85
--- /dev/null
+++ b/docs/guides/intelligent-control-planes/ai-controller-tutorial.md
@@ -0,0 +1,545 @@
---
title: Build an AI controller
description: Deploy a WatchOperation that uses a local LLM to enforce platform policy.
weight: {weight}
validation:
  type: walkthrough
  owner: docs@upbound.io
  environment: local-upbound
  timeout: 45m
  variables:
    HOST_IP: ""
---

In this tutorial, you run a Kubernetes controller with reconciliation logic in
plain English. A Crossplane `WatchOperation` watches an nginx `Deployment` and
calls a local LLM whenever it changes. The LLM reads the
current state, applies the rule in its `systemPrompt`, and returns a corrected
manifest. Crossplane applies it.

By the end of this tutorial, you can:

- Deploy a `WatchOperation` that calls a local LLM on every resource change
- Watch the controller detect and correct a policy violation automatically
- Update the enforcement rule by editing a single field in YAML

The model in this tutorial is `qwen3.5:latest`, running locally via Ollama.
No cloud API key required.

## Prerequisites

Install the following before starting:

- [Docker][docker-install], running locally
- [`kubectl`][kubectl-install]
- [`kind`][kind-install]
- [`up CLI`][up-cli] v0.44.3 or later

## Create the project

Scaffold a new project with `up project init`. 
This creates the +`ai-controller/` directory with a valid `upbound.yaml` and the standard +project layout (`apis/`, `functions/`, `examples/`, `tests/`): + +```bash +up project init --scratch ai-controller +cd ai-controller +``` + +All commands from this point run from inside the `ai-controller` directory. + +The controller uses two Crossplane functions: `function-auto-ready` so the +`WatchOperation` reports ready status, and `function-openai` to call the LLM. +Add them as project dependencies: + +```bash +up dependency add 'xpkg.upbound.io/crossplane-contrib/function-auto-ready' +up dependency add 'xpkg.upbound.io/upbound/function-openai:v0.3.0' +``` + +`up dependency add` records each dependency in `upbound.yaml`. + +Create the starting nginx `Deployment` with 1 replica. The AI controller +corrects this after you deploy it later in the tutorial. + +```bash +mkdir -p examples +cat > examples/deployment.yaml <<'EOF' +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: nginx + name: nginx + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - image: nginx + name: nginx +EOF +``` + +## Set up Ollama + + +Ollama runs the LLM locally. Install it, start it, and pull the model before +starting the cluster. The model is ~1 GB. + +1. Install Ollama: + + ```shell + curl -fsSL "https://ollama.com/install.sh" | sh + ``` + + If the install script doesn't work for your OS, download directly from + [ollama.com/download][ollama-download]. + + +2. Start Ollama. On Linux, the install script registers a `systemd` service + that starts Ollama automatically. On macOS, start it manually in a + separate terminal if `ollama list` returns "could not connect to ollama + server": + + + ```shell + ollama serve + ``` + +3. Pull the model: + + ```shell + ollama pull qwen3.5:latest + ``` + +4. 
Confirm the model downloaded: + + ```shell + ollama list + ``` + + You should see `qwen3.5:latest` in the output. + +## Start the project + +Run from inside the `ai-controller` directory: + +```bash +up project run --local --control-plane-version=2.1.4-up.2 +``` + +This creates a kind cluster, installs UXP, and deploys the function packages +declared in `upbound.yaml`. It exits when the cluster is ready. + +:::warning +`up project run --local` may print `traces export: context deadline exceeded`. +This message reports a telemetry timeout and doesn't affect the cluster setup. +::: + +Point kubectl at the new cluster: + +```bash +kind get kubeconfig --name up-ai-controller > ~/.kube/config +``` + +:::warning +This overwrites your existing `~/.kube/config`. To preserve existing contexts, +merge instead: + +```bash +kind get kubeconfig --name up-ai-controller > ~/.kube/config-upbound +KUBECONFIG=~/.kube/config:~/.kube/config-upbound \ + kubectl config view --flatten > ~/.kube/config.merged +mv ~/.kube/config.merged ~/.kube/config +``` +::: + +Verify the connection: + +```bash +kubectl get nodes +``` + +The kind cluster's pods need to reach Ollama running on your host. Create a +Kubernetes `Service` and `Endpoints` that route cluster traffic to your machine. + +1. Get the host's IPv4 address as seen from inside the cluster. This command + works on Linux, macOS, and Windows: + + ```bash + HOST_IP=$(docker run --rm --add-host=host.docker.internal:host-gateway alpine \ + getent hosts host.docker.internal | awk '$1 ~ /^[0-9.]+$/ {print $1; exit}') + echo "Host IP: $HOST_IP" + ``` + +2. 
Create the `ollama` namespace and register Ollama as a cluster service: + + ```bash + kubectl create namespace ollama --dry-run=client -o yaml | kubectl apply -f - + + kubectl apply -f - < operations/replicas/operation.yaml <<'EOF' + apiVersion: ops.crossplane.io/v1alpha1 + kind: WatchOperation + metadata: + name: replicas + spec: + concurrencyPolicy: Forbid + successfulHistoryLimit: 3 + failedHistoryLimit: 1 + operationTemplate: + spec: + mode: Pipeline + pipeline: + - functionRef: + name: upbound-function-openai + input: + apiVersion: openai.fn.upbound.io/v1alpha1 + kind: Prompt + systemPrompt: |- + You are a Kubernetes controller. Output raw YAML only — no markdown, no code fences, no backticks, no explanations. + + Rule: if spec.replicas is less than 3, set it to 3. Otherwise keep it unchanged. + userPrompt: |- + Inspect the nginx Deployment and output the corrected manifest. + Output only the Deployment manifest with the correct spec.replicas value. + Include apiVersion, kind, metadata (name: nginx, namespace: default), and spec. + Start your response with 'apiVersion:' + step: deployment-analysis + credentials: + - name: gpt + source: Secret + secretRef: + namespace: crossplane-system + name: gpt + watch: + apiVersion: apps/v1 + kind: Deployment + namespace: default + EOF + ``` + + :::info + The explicit output instructions in `userPrompt` are necessary for + `qwen3.5:latest`. With a larger model like `gpt-4o`, the `systemPrompt` can + contain just the rule itself, without format guidance. + ::: + +3. Apply the `WatchOperation`. It fires immediately because the `Deployment` + already exists: + + ```bash + kubectl apply -f operations/replicas/operation.yaml + ``` + +4. Watch the controller act: + + ```bash + kubectl get deployment nginx -w + ``` + + Within 60 to 90 seconds, replicas jump from 1 to 3. The LLM read the + `Deployment`, decided it violated the rule, and patched it. Press Ctrl+C + when replicas reach 3. + +5. Inspect the operation records. 
Each `Operation` object captures a single + invocation: + + ```bash + kubectl get watchoperations + kubectl get operations + ``` + +6. Describe one of the operations: + + ```bash + kubectl describe operation + ``` + + The `Events` section shows the exact YAML the model returned and what the + controller applied. + +## Watch it heal + +The `WatchOperation` re-evaluates on every change. If anything modifies the +`Deployment`, the rule re-applies. + +1. Scale nginx down to 1 replica: + + ```bash + kubectl scale deployment nginx --replicas=1 + ``` + +2. Watch the controller heal it: + + ```bash + kubectl get deployment nginx -w + ``` + + Within 30 to 60 seconds, replicas climb back to 3. The `WatchOperation` + fired because the `Deployment` changed. The LLM saw 1 replica, decided it + violated the rule, and patched it. Press Ctrl+C when replicas are back at 3. + +3. See what fired: + + ```bash + kubectl get watchoperations + kubectl get operations + ``` + + Each entry records what fired, what the model decided, and what changed. + The most recent one captured the scale-down event and the correction. + +4. See where the model runs: + + ```bash + kubectl get secret gpt -n crossplane-system -o yaml + ``` + + `OPENAI_BASE_URL` points to Ollama's OpenAI-compatible API running locally + on your machine, so no data leaves the machine. Change that URL to + `https://api.openai.com/v1` and update `OPENAI_MODEL`, and the + `WatchOperation` works identically. + +## Change the rules + +To change the policy, edit `systemPrompt` and re-apply. This example raises the +minimum from 3 to 5 replicas. + +1. Open `operations/replicas/operation.yaml`. Find the `systemPrompt` and + change the rule line from: + + ```text + Rule: if spec.replicas is less than 3, set it to 3. Otherwise keep it unchanged. + ``` + + to: + + ```text + Rule: if spec.replicas is less than 5, set it to 5. Otherwise keep it unchanged. + ``` + + Or edit in place. 
On macOS: + + ```bash + sed -i '' 's/less than 3, set it to 3/less than 5, set it to 5/' \ + operations/replicas/operation.yaml + ``` + + On Linux: + + ```bash + sed -i 's/less than 3, set it to 3/less than 5, set it to 5/' \ + operations/replicas/operation.yaml + ``` + + :::info + With `qwen3.5:latest`, keep the full `userPrompt` output instructions in + place. The explicit YAML template keeps the local model's output reliable. + With a larger model like `gpt-4o`, you can remove the `userPrompt` entirely + and keep only the rule in `systemPrompt`. + ::: + +2. Apply the updated operation: + + ```bash + kubectl apply -f operations/replicas/operation.yaml + ``` + +3. Trigger the rule by scaling nginx down to 1: + + ```bash + kubectl scale deployment nginx --replicas=1 + ``` + +4. Watch the updated rule enforce 5 replicas: + + ```bash + kubectl get deployment nginx -w + ``` + + This takes 30 to 45 seconds. Press Ctrl+C when you see 5 ready replicas. + +5. Inspect the operation history to verify the new rule fired: + + ```bash + kubectl get watchoperations + kubectl get operations + ``` + +:::tip +Try adding a conditional rule to the `systemPrompt`: + +``` +If the deployment name contains 'prod', require at least 5 replicas. +Otherwise, require at least 2. +``` + +The model interprets natural language conditions the same way it interprets +numeric rules. 
+::: + +## Clean up + +Delete the demo resources: + +```bash +kubectl delete watchoperation replicas +kubectl delete operations --all +kubectl delete deployment nginx +``` + +Delete the cluster: + +```bash +kind delete cluster --name up-ai-controller +``` + +## Next steps + +In this tutorial, you: + +- Created a Crossplane project with a `WatchOperation` and a KCL function +- Deployed a controller that calls a local LLM on every `Deployment` change +- Watched the controller detect and correct a replica count violation +- Updated the enforcement policy by editing a single field in YAML + +Continue with: + +- [WatchOperations reference][watchops-ref]: triggers, concurrency, history limits, and output handling +- [CronOperations reference][cronops-ref]: schedule-driven operations +- [Composition functions][fn-docs]: build custom logic for any resource +- [Provider authentication][auth-docs]: connect providers to your own cloud account +- [Upbound Marketplace][marketplace]: functions and providers for AWS, Azure, GCP, and more + +[up-cli]: /manuals/cli/overview/ +[docker-install]: https://docs.docker.com/get-docker/ +[kubectl-install]: https://kubernetes.io/docs/tasks/tools/ +[kind-install]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation +[ollama-download]: https://ollama.com/download +[up-cli-releases]: https://github.com/upbound/up/releases +[uxp-releases]: /reference/release-notes/uxp +[cronops-ref]: /manuals/uxp/concepts/operations/cron-operation/ +[watchops-ref]: /manuals/uxp/concepts/operations/watch-operation/ +[fn-docs]: /manuals/uxp/concepts/composition/overview +[auth-docs]: /manuals/packages/providers/authentication/ +[marketplace]: https://marketplace.upbound.io/ + diff --git a/docs/guides/intelligent-control-planes/scale-database.md b/docs/guides/intelligent-control-planes/scale-database.md index e5646bca..57db3b18 100644 --- a/docs/guides/intelligent-control-planes/scale-database.md +++ 
b/docs/guides/intelligent-control-planes/scale-database.md @@ -1,109 +1,468 @@ --- -title: Dynamically scale an RDS Instance +title: Scale resources in an AI-powered control plane +description: Deploy an AI controller that reads live RDS metrics and scales a database automatically. +weight: 2 +validation: + type: walkthrough + owner: docs@upbound.io + environment: local-upbound + timeout: 60m + variables: + ANTHROPIC_API_KEY: "" --- -:::important +In this tutorial, you deploy an AI controller that manages an AWS RDS database. +A `CronOperation` runs every minute. It reads live CloudWatch metrics from the +database object, calls Claude, and decides whether to scale. If it scales, it +writes its reasoning back to the object as an annotation. -This guide requires an Upbound control plane instance running UXP v2.0 or later. -Upbound SaaS coming soon. +By the end of this tutorial, you can: -::: - - - -[Upbound Crossplane][upbound-crossplane] is capable of running [Intelligent Control Planes][intelligent-controlplanes], which define AI-augmented functions to perform tasks. This guide walks through a use case for using AI to intelligently scale an AWS RDS database instance. 
+- See live CloudWatch metrics surfaced directly on a Crossplane `SQLInstance` object +- Deploy an AI scaling controller with a single `kubectl apply` +- Read the model's reasoning from the Kubernetes object it acted on +- Trigger a load test and watch the AI decide to scale up in real time - ## Prerequisites -Before you begin make sure you have: +Install the following tools before starting: -* An Upbound Account -* The `up` CLI installed -* An Anthropic API key -* An AWS account +- [`kubectl`][kubectl-install] +- [AWS CLI][aws-cli], configured with credentials that can create VPCs and RDS instances +- [kind][kind] +- An [Anthropic API key][anthropic-console] with access to Claude +- [`up CLI`][up-cli] v0.44.3 or later -## Set up your environment + +The load test later uses `mysqlslap`, which ships with the MySQL client tools. + -Clone the repository [upbound/configuration-aws-database-ai][guide-repo] to your machine: +On macOS: ```shell -git clone git@github.com:upbound/configuration-aws-database-ai.git +brew install mysql-client +export PATH="$(brew --prefix mysql-client)/bin:$PATH" ``` -This repository contains a [control plane project][project] that defines a fully managed AWS database instances. This database instance project contains uses an [Intelligent Composition][intelligent-composition] function that scales the database in relation to performance metrics fetched from AWS CloudWatch. +On Linux (Debian/Ubuntu): -## Launch the local UXP cluster +```shell +apt-get install -y mysql-client +``` -In the root of the project directory, launch the control plane locally: +## Clone the project -```shell -up project run --local +```bash +git clone https://github.com/upbound/configuration-aws-database-ai demo +cd demo ``` -## Configure credentials and runtime settings +All commands from this point run from inside the `demo` directory. 
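The load test in the second half of this tutorial shells out to `mysqlslap`. Before going further, a quick, side-effect-free check (a sketch; adjust to your shell) confirms the MySQL client tools from the prerequisites are on your `PATH`:

```shell
# Report whether mysqlslap (used later by perf-scale-demo.sh) is on PATH.
if command -v mysqlslap >/dev/null 2>&1; then
  echo "mysqlslap found: $(command -v mysqlslap)"
else
  echo "mysqlslap not found; install the MySQL client tools first"
fi
```

If it reports not found, revisit the install steps in the prerequisites before starting the load test section.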
-Create a secret on the control plane which contains an Anthropic API key: +## Configure credentials -```shell -kubectl create secret generic claude \ - --from-literal=ANTHROPIC_API_KEY= \ - -n crossplane-system +Create a file named `aws-credentials.txt` in the project directory with your +AWS credentials in `INI` format: + +```ini +[default] +aws_access_key_id = +aws_secret_access_key = ``` -Create a second secret on the control plane which contains credentials for your AWS account: +:::warning +This tutorial uses static AWS credentials for convenience. Don't use static +credentials in production. Use IAM roles, IRSA, or another short-lived +credential mechanism instead. See [AWS authentication][aws-auth-docs] for +secure alternatives. +::: -```shell -kubectl create secret \ - generic aws-creds \ - -n crossplane-system \ - --from-file=aws-creds=./aws-credentials.txt +Export your Anthropic API key. The setup steps below use it to create a +Kubernetes secret: + +```bash +export ANTHROPIC_API_KEY= ``` -## Create a database +## Start the project -Deploy a network and database: +Open a dedicated terminal and run from inside the `demo` directory: -```shell -# Network dependency -kubectl apply -f examples/network-rds-metrics.yaml +```bash +up project run --local --ingress +``` + +This command: + +- Creates a kind cluster +- Installs UXP +- Builds and deploys the composition functions (`function-rds-metrics` and `function-claude`) +- Installs the AWS providers declared in `upbound.yaml` +- Applies the XRDs from `apis/` +- Installs an ingress controller for the UXP console + +Startup takes several minutes. The command exits when the cluster is ready. -# Database with scaling enabled -kubectl apply -f examples/mariadb-xr-rds-metrics.yaml +:::warning +`up project run --local` may print `traces export: context deadline exceeded`. +This message reports a telemetry timeout and doesn't affect the cluster setup. 
+::: + +Verify the connection: + +```bash +kubectl get nodes +``` + +Enable the alpha operations feature on the Crossplane deployment so that +`CronOperation` and `Operation` resources reconcile: + +```bash +kubectl patch deploy crossplane -n crossplane-system --type=json \ + -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-operations"}]' +kubectl rollout status deploy/crossplane -n crossplane-system ``` -The database gets created, the control plane periodically fetches performance metrics for it from AWS CloudWatch, and dynamically scales the database size accordingly. +Without this flag, `CronOperation` resources stay unreconciled (no status, +no schedule fires). - -To validate the intelligent scaling system, you can stress test the RDS instance to trigger high CPU utilization and observe AI-driven scaling decisions. The command below performs a stress test to mimic real usage: - +Create the namespace and load AWS credentials and the Anthropic API key into +the cluster: -```shell -# Trigger high CPU load with multiple concurrent MD5 hash computations -for i in {1..8}; do - mysql \ - --host=your-rds-endpoint.region.rds.amazonaws.com \ - --user=masteruser \ - --password=your-password \ - --default-auth=mysql_native_password \ - --execute="SELECT BENCHMARK(1000000000, MD5('trigger_scaling_$i'));" & -done +1. Create the `database-team` namespace: + + ```bash + kubectl apply -f examples/ns-database-team.yaml + ``` + +2. Create the AWS credentials secret in both namespaces. The `ProviderConfig` + and the `function-rds-metrics` function both read from this secret: + + ```bash + kubectl create secret generic aws-creds \ + --namespace database-team \ + --from-file=credentials=./aws-credentials.txt \ + --dry-run=client -o yaml | kubectl apply -f - + + kubectl create secret generic aws-creds \ + --namespace crossplane-system \ + --from-file=credentials=./aws-credentials.txt \ + --dry-run=client -o yaml | kubectl apply -f - + ``` + +3. 
Create the Anthropic API key secret used by `function-claude`: + + ```bash + kubectl create secret generic claude \ + --namespace crossplane-system \ + --from-literal=ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY}" \ + --dry-run=client -o yaml | kubectl apply -f - + ``` + +Wait for both AWS providers and both functions to become healthy: + +```bash +kubectl get providers +kubectl get functions +``` + +All four should show `HEALTHY: True` before continuing. + +:::warning +If `kubectl get providers` or `kubectl get functions` returns **No resources found**, +`up project run --local` didn't complete. Delete the cluster and restart from +[Start the project](#start-the-project). +::: + +Apply the `ProviderConfig`, then the network, then the database: + +1. Apply the `ProviderConfig`: + + ```bash + kubectl apply -f examples/providerconfig-aws-static.yaml + ``` + +2. Provision the network: + + ```bash + kubectl apply -f examples/network-rds-metrics.yaml + ``` + + Wait for the network composite resource to become ready (~5 minutes): + + ```bash + kubectl get network rds-metrics-database-ai-scale -n database-team -w + ``` + + Press Ctrl+C once it shows `READY: True`. + +3. Provision the database: + + ```bash + kubectl apply -f examples/mariadb-xr-rds-metrics.yaml + ``` + + RDS provisioning takes 10 to 15 minutes. Watch the status: + + ```bash + kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team -w + ``` + + Press Ctrl+C once it shows `READY: True` before continuing. + +:::info +While you wait, the `function-rds-metrics` composition step is already +collecting CloudWatch data and writing it onto the object. By the time the +database is ready, `status.performanceMetrics` contains live data. +::: + +Open the UXP console for a visual view of the resources: + + ```bash + up uxp web-ui open + ``` + +## Review the database + +An RDS MariaDB instance is running on AWS, managed by Crossplane. Before +wiring the AI into the loop, explore what the system already knows. 
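As a rough preview of what the steps below inspect, the surfaced data on the object looks something like this sketch. The field names here are invented for illustration; the authoritative shape is whatever `function-rds-metrics` writes to `status.performanceMetrics` (the CloudWatch metric names in the comments are the real RDS ones):

```yaml
# Illustrative sketch only -- field names are invented, not the function's schema.
status:
  performanceMetrics:
    cpuUtilization: 4.2            # percent, from CloudWatch CPUUtilization
    databaseConnections: 3         # from CloudWatch DatabaseConnections
    freeStorageSpace: 19327352832  # bytes, from CloudWatch FreeStorageSpace
```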
+ +1. List the database object: + + ```bash + kubectl get sqlinstance -n database-team + ``` + + You should see `rds-metrics-database-ai-mysql` with `READY: True`. That's a + real AWS RDS instance, managed as a Kubernetes object. + + In the UXP console, click **View all Composite Resources**. The + `rds-metrics-database-ai-mysql` entry appears in the list. Click + **Relationship View** to see the resources Crossplane provisioned. + +2. Verify the AWS resource. In the [AWS Console, RDS in `us-east-1`][aws-rds], + find `rds-metrics-database-ai-mysql`. + +3. Find the performance metrics: + + ```bash + kubectl describe sqlinstance rds-metrics-database-ai-mysql -n database-team + ``` + + Find the `status.performanceMetrics` block. This block contains live + CloudWatch data such as CPU utilization, active connections, and free + storage. `function-rds-metrics` collects this data and writes it into the + object. The AI reads only this block and never queries CloudWatch directly. + + Or fetch just the metrics: + + ```bash + kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \ + -o jsonpath='{.status.performanceMetrics}' | jq . + ``` + + +4. Open `operations/rds-intelligent-scaling-cron/operation.yaml` in your + editor. That file is the entire scaling controller. The `systemPrompt` + defines the scaling logic, including thresholds, instance class progression, + and cooldown. + + +5. Apply the controller: + + ```bash + kubectl apply -f operations/rds-intelligent-scaling-cron/operation.yaml + ``` + +6. Watch the first decision: + + ```bash + kubectl get cronoperation + ``` + + The `CronOperation` takes 30 to 45 seconds to start. Once it's running, + watch for the first operation: + + ```bash + kubectl get operations -w + ``` + + Wait until an operation shows `SUCCEEDED: True`, then press Ctrl+C and + describe it: + + ```bash + kubectl describe operation + ``` + + The `Events` section shows the AI's reasoning and decision. + +7. 
Check the annotation written back to the database object:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.metadata.annotations}' | jq .
+   ```
+
+   In the UXP console, navigate to `rds-metrics-database-ai-mysql` and open
+   the **YAML** tab. The `intelligent-scaling/last-scaled-decision` annotation
+   contains the model's last decision.
+
+## Watch the controller idle
+
+The `CronOperation` runs every minute. CPU is low, so watch what the AI decides
+when there's nothing to do.
+
+1. Watch operations run:
+
+   ```bash
+   kubectl get operations -w
+   ```
+
+   A new operation appears every minute. Press Ctrl+C after several have run.
+   In the UXP console, select **Operations** in the left navigation to see the
+   same list visually.
+
+2. Read one of the decisions:
+
+   ```bash
+   kubectl describe operation
+   ```
+
+   Look at the `Events` section. At low CPU, the AI decides to hold. The
+   cooldown logic is also in the prompt, so it doesn't flip the instance class
+   every minute even if usage crosses the thresholds.
+
+3. Look at the current metrics:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.status.performanceMetrics}' | jq .
+   ```
+
+   The AI reads this same data before making a decision.
+
+4. Confirm the current instance class:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.spec.parameters.instanceClass}'
+   ```
+
+   It's `db.t3.micro`.
+
+You can also confirm the current instance type in the [AWS Console, RDS in
+`us-east-1`][aws-rds].
+
+## Trigger a scale
+
+Run a load test that drives CPU above the scaling threshold so the AI decides
+to act.
+
+1. Confirm the starting instance class:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.spec.parameters.instanceClass}'
+   ```
+
+   It should be `db.t3.micro`.
+
+2. 
In a second terminal, run the load test from inside the `demo` directory:
+
+   ```bash
+   bash perf-scale-demo.sh
+   ```
+
+   The script sends CPU-intensive queries to the database for 5 to 10 minutes.
+   If it finishes without triggering a scale, run it again.
+
+3. Watch the controller act:
+
+   ```bash
+   kubectl get operations -w
+   ```
+
+   When CPU crosses the threshold (~60%), the next `CronOperation` decides to
+   scale up. Press Ctrl+C once you see a new operation start.
+
+4. Check the new instance class:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.spec.parameters.instanceClass}'
+   ```
+
+   It should now be `db.t3.small`.
+
+5. Check the reasoning:
+
+   ```bash
+   kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
+     -o jsonpath='{.metadata.annotations.intelligent-scaling/last-scaled-decision}'
+   ```
+
+   In the [AWS Console, RDS in `us-east-1`][aws-rds], refresh the database
+   list. The instance class change is in progress, and RDS is modifying the
+   live database.
+
 ## Clean up
 
-Clean up the local control plane to prevent it from continuing to invoke your LLM. Run the following command:
+Delete the composite resources. Crossplane deletes all composed AWS resources
+(VPC, subnets, RDS instance) before removing the composite resources.
 
-```shell
-up project stop
+```bash
+kubectl delete sqlinstance rds-metrics-database-ai-mysql -n database-team
+kubectl delete network rds-metrics-database-ai-scale -n database-team
 ```
+
+
+Once it's gone, delete the `CronOperation` and its history: + +```bash +kubectl delete cronoperation rds-intelligent-scaling-cron +kubectl delete operations --all +``` + +Delete the cluster: + +```bash +CLUSTER_NAME=$(kind get clusters | grep "^up-" | head -1) +kind delete cluster --name "${CLUSTER_NAME}" +``` + +## Next steps -[upbound-crossplane]: /manuals/uxp/overview -[intelligent-controlplanes]: /manuals/uxp/concepts/intelligent-control-planes/ -[guide-repo]: https://github.com/upbound/configuration-aws-database-ai -[project]: /manuals/cli/concepts/projects -[intelligent-composition]: /manuals/uxp/concepts/composition/intelligent-compositions +In this tutorial, you: + +- Provisioned a real AWS RDS instance managed as a Crossplane `SQLInstance` +- Observed live CloudWatch metrics surfaced directly on the Kubernetes object +- Deployed an AI scaling controller with a single `kubectl apply` +- Read the model's reasoning from the annotation it wrote back to the object +- Ran a load test and watched the AI scale the database automatically + +Continue with: + +- [CronOperations reference][cronops-ref]: schedules, history limits, concurrency +- [WatchOperations reference][watchops-ref]: event-driven operations +- [Composition functions][fn-docs]: build custom logic for any resource +- [Provider authentication][auth-docs]: connect providers to your own cloud account +- [Upbound Marketplace][marketplace]: providers and functions for AWS, Azure, GCP, and more + +[up-cli]: /manuals/cli/overview/ +[kubectl-install]: https://kubernetes.io/docs/tasks/tools/ +[aws-cli]: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html +[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation +[anthropic-console]: https://console.anthropic.com/ +[aws-rds]: https://us-east-1.console.aws.amazon.com/rds/home?region=us-east-1#databases: +[cronops-ref]: /manuals/uxp/concepts/operations/cron-operation/ +[watchops-ref]: 
/manuals/uxp/concepts/operations/watch-operation/ +[fn-docs]: /manuals/uxp/concepts/composition/overview +[auth-docs]: /manuals/packages/providers/authentication/ +[aws-auth-docs]: /manuals/packages/providers/authentication/#aws-authentication +[marketplace]: https://marketplace.upbound.io/ diff --git a/utils/vale/styles/Upbound/spelling-exceptions.txt b/utils/vale/styles/Upbound/spelling-exceptions.txt index f82746d1..dedf72e5 100644 --- a/utils/vale/styles/Upbound/spelling-exceptions.txt +++ b/utils/vale/styles/Upbound/spelling-exceptions.txt @@ -204,4 +204,10 @@ Traefik Traefik's HTTPRoute TLSRoute - +VPC +VPCs +VPC's +Ollama +ollama +Ollama's +ollama's