This repository contains the Helm chart that can be used to deploy a FEDeRATED node.
A FEDeRATED node has the following internal architecture:
| Component | Description |
|---|---|
| Identity Hub | Manages EDC identities, DID documents, and verifiable credentials. Provides issuance capabilities. Based on the EDC Identity Hub: https://github.com/eclipse-edc/IdentityHub. |
| Control Plane | Manages the catalog, contract negotiation, transfer management, DSP APIs, and message dispatchers. |
| HTTP Data Plane | Proxies HTTP messages and provides token validation mechanisms. |
| DIP | Provides functionality for the Distribution Orchestrator, such as obtaining the participant list, destination URLs, and instructing the control plane to manage federated transfers. |
| Registration Service | Registers participants and provides the participant list. Deployed only by the Dataspace Governance Authority. |
| Vault | Securely manages secrets for the EDC components. |
| PostgreSQL Database | Stores raw event data for the Event API and various data for the EDC components. |
| GraphDB | Stores incoming and outgoing federated messages for the Event API app. |
| Event API App | Acts as the initial entry point for outgoing events and handles incoming events. |
| Distribution Orchestrator | Dispatches outgoing events and handles incoming events. |
This guide offers a practical approach to deploying the FEDeRATED node on Kubernetes using this Helm Chart. It begins with the necessary prerequisites, including local tools, and the setup of a decentralized identifier. It then walks through the deployment process using Helm.
This guide assumes that the example chart values are used as a starting point. Copy or download secrets_template.yaml and values_template.yaml to the folder from which you want to execute the deployment, and rename them to secrets.yaml and values.yaml, respectively.
A basic understanding of Kubernetes, Helm, Decentralized Identifiers (in particular did:web) and Verifiable Credentials is required. It is assumed that the following tools are installed and readily available:
- IDE or CLI Environment: Use a terminal application or IDE with terminal support (e.g., Visual Studio Code).
- kubectl
- helm
- openssl (used for secure key generation)
- K9s (optional): terminal-based Kubernetes IDE, or another Kubernetes IDE of choice such as Lens.
Furthermore, a Kubernetes cluster with the following components is required:
- An Ingress controller
- A cert-manager instance with a cluster issuer named `letsencrypt`
The entire node can be deployed using a single Helm chart. The Helm chart contains several subcharts that can be configured through their respective values files.
The following sections provide detailed instructions on how to configure each subchart.
Note: When deploying more than one node, deploy them in different namespaces to prevent name clashes, and make sure there are no clashing URLs. This can be prevented entirely by using a different subdomain for each node, e.g. node1.* and node2.*. The environment can also be configured so that the Vault, PostgreSQL, and GraphDB are deployed for only one node and re-used by the others.
Before starting the deployment, determine your decentralized identifier (DID) in the form of did:web. This is based on the domain where you’ll host the node. If you plan to host the node at example.com, then your DID will be:
did:web:example.com
This DID will be used as your unique identity within the dataspace, and other nodes will recognize you by this identifier.
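As a sketch of how other parties will resolve your identity (assuming the standard did:web resolution rules for a bare domain, where the DID document is served at `/.well-known/did.json`), the DID maps to a DID document URL like this:

```shell
# Derive the DID document URL from a did:web identifier.
# Bare-domain case only; did:web identifiers with path segments resolve differently.
DID="did:web:example.com"            # assumption: your DID
DOMAIN="${DID#did:web:}"             # strip the did:web: prefix
DID_DOC_URL="https://${DOMAIN}/.well-known/did.json"
echo "$DID_DOC_URL"                  # → https://example.com/.well-known/did.json
```

This is why the DID must match the domain where the node is hosted: resolvers fetch the DID document over HTTPS from that exact host.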
Several secrets that are used throughout the configuration process have to be created before deployment. All secrets listed for secrets.yaml must be configured in that file before deployment. Secrets that live in the Vault are configured after deployment has finished, during the Vault setup process. The table below provides a complete overview of all secrets that must be configured, either through the secrets.yaml file or through the Vault. Apart from the pull secret, they must all be newly created. Make sure to complete the configuration of secrets.yaml before proceeding, and store all secrets securely.
| Key (dot-path) | Location | Purpose / used by | Format / size |
|---|---|---|---|
| `pullSecret.password` | `secrets.yaml` | Registry pull password | |
| `postgres.password` | `secrets.yaml` | App DB user (PostgreSQL) | |
| `postgres.postgresPassword` | `secrets.yaml` | Postgres super-user (`postgres`) | |
| `pgadmin.password` | `secrets.yaml` | PGAdmin4 web login | |
| `dip.apiKey` | `secrets.yaml` (Distribution Orchestrator) | DIP → Control Plane API key | |
| `security.users[*].password` | `secrets.yaml` (Distribution Orchestrator) | Distribution Orchestrator | bcrypt |
| `security.apiKeys[*].key` | `secrets.yaml` (Distribution Orchestrator) | Service API keys (Orchestrator REST) | |
| `api.security.xapikey.key` | `secrets.yaml` (Event API App) | Header-based API key (external callers) | |
| `api.security.userpass.password` | `secrets.yaml` (Event API App) | Basic-auth password (Event API UI) | bcrypt |
| `control-plane-apikey` | Vault | Same token as above, read by Control Plane | |
| `<participant>-password` | Vault | Identity Hub participant login | bcrypt |
Sufficiently secure tokens can be created using:
openssl rand -base64 20
Some secrets must be secured using bcrypt. The following can be used to create a secret and then obtain the bcrypt version of it:
PASSWORD=$(openssl rand -base64 20); echo "Plaintext password: $PASSWORD"; echo -n "Bcrypt hash: "; htpasswd -bnBC 12 "" "$PASSWORD" | tr -d ':\n'; echo
Note: Both of these commands create a token from 20 random bytes, encoded as base64. The number of bytes (20) can be increased for improved security.
Note: For bcrypt-encoded passwords, the plaintext password must be stored securely; if it is lost, it cannot be recovered from the hash.
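As a convenience, the token command above can be wrapped in a small loop that generates one fresh token per plain-token secret. The names below are illustrative only (not tied to the chart); map each generated token to the matching key in secrets.yaml yourself:

```shell
# Generate a 20-byte base64 token for each (illustrative) secret name;
# increase 20 for more entropy. bcrypt-hashed secrets still need the
# separate htpasswd step shown above.
for name in postgres-password pgadmin-password dip-apikey; do
  token=$(openssl rand -base64 20)
  echo "$name: $token"
done
```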
Note: Since `secrets.yaml` contains sensitive information, it should not be checked into source control, unlike `values.yaml`.
This section provides configuration instructions for each chart. These steps must be completed before deployment.
The node configuration is managed via the values.yaml file. The default setup is designed to require minimal configuration, meaning many options are omitted. However, this setup assumes certain configurations (e.g., the Helm release name must be set to federated).
For a comprehensive list of configurable values, refer to the federated-node Helm chart's values.yaml file.
This chart deploys a PostgreSQL database alongside PGAdmin4 for database management. The PostgreSQL deployment utilizes the Bitnami PostgreSQL Helm chart, while the PGAdmin4 deployment is based on runix/pgadmin4.
Both components must be configured through the postgres and pgadmin sections of values.yaml.
- PGAdmin4 is optional but useful for manual database interactions.
- To disable either component, set `enabled=false` in `values.yaml`.
If not using the provided PostgreSQL deployment, you must update values.yaml to ensure all components are aware of the database's location. For advanced setups, refer to the respective chart documentation.
Modify the `vault` section of `values.yaml` to match your environment. Ensure the hostname is correctly set.
- To disable Vault deployment, set `enabled=false`.
- If not using the provided Vault deployment, update `values.yaml` to ensure all components can locate the Vault instance.
The Distribution Orchestrator requires specific secrets to be configured in secrets.yaml. No additional configuration is needed in values.yaml.
The Agent Event API can be configured using example values from values.yaml. Required secrets must be specified in secrets.yaml.
The Connector is deployed using the tno-edc-connector subchart, which includes:
- A Control Plane
- An Identity Hub
- An HTTP Data Plane (for receiving and verifying incoming HTTP API calls)
Each component is configured via `values.yaml`, which should be adjusted to your environment; inline documentation is provided.
Note: The ingress host for the `edc-identity-hub` deployment must match the decentralized identifier (DID) hostname. For example, if your DID is `did:web:example.com`, the ingress host must be `example.com`.
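As a quick sanity check before deploying, the two values can be compared in the shell (both variables below are placeholders for your own configuration):

```shell
DID="did:web:example.com"        # assumption: your DID
INGRESS_HOST="example.com"       # assumption: the host configured for the edc-identity-hub ingress
DID_HOST="${DID#did:web:}"       # strip the did:web: prefix to get the hostname
if [ "$DID_HOST" = "$INGRESS_HOST" ]; then
  echo "OK: ingress host matches DID hostname"
else
  echo "Mismatch: DID expects $DID_HOST but ingress uses $INGRESS_HOST" >&2
fi
```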
The Helm deployment includes a GraphDB instance, which requires no additional configuration.
Deploy the node using:
helm upgrade --install federated \
federated-node \
--namespace <namespace> \
--version 0.8.0 \
--repo https://nexus.dataspac.es/repository/tsg-helm \
-f values.yaml \
-f secrets.yaml
Replace `<namespace>` with the namespace you want to deploy in.
Not all pods will run without errors right away; this is to be expected.
Note: It is important to use the release name exactly as stated above (`federated`). It is possible to use a different release name, but such a configuration is not covered by this setup guide.
HashiCorp Vault provides a secure way to store secrets for the EDC connector.
After deployment, the Vault must be unsealed. The unsealing instructions can be found here: Initialize and Unseal Vault. This guide assumes a single secret share and threshold of 1. For more detailed instructions on the vault operator init command, refer to the documentation on Vault Operator Init if you want to use a different number of secret shares or thresholds. Using more shares and a higher threshold provides better security, but imposes a higher configuration burden.
To initialize the vault with a single secret share and threshold of one, use the following command:
kubectl exec -n <namespace> -ti federated-vault-server-0 -- vault operator init -key-shares=1 -key-threshold=1
Note: The resulting unseal key and root token should be stored securely.
Once this is done, you can unseal the vault using:
kubectl exec -n <namespace> -ti federated-vault-server-0 -- vault operator unseal <unsealKey>
Replace <namespace> with the actual namespace of the deployment. Replace <unsealKey> with the unseal key obtained from executing vault operator init.
After unsealing the Vault, you should be able to access the dashboard at the configured address. Use the root token to log in.
Note: Alternatively, you can use the "auto unseal" mechanism to unseal the Vault automatically: Auto Unseal. This uses an external service, such as the Azure Key Vault, to store the unseal keys.
First, we must create a policy that allows a user to enable new secret engines. This can be done by navigating to Policies -> Create ACL Policy. Give the policy a name such as manage-secret-engines and copy and paste the following policy:
path "*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
Click Create Policy.
Next, create a new Vault user to use instead of the root account:
- Go to Access -> Authentication Methods and click Enable New Method.
- Click on Username & Password.
  - The path value can be left at its default value `userpass`.
- Click Enable Method.
- Return to the Authentication Methods page and click on the newly created `userpass` method.
- Click Create User.
  - Select a username and a secure password.
  - The password hash can be left empty unless you want to supply the bcrypt-encoded password instead of the plain-text password.
- Expand the Tokens section.
  - In the Generated Token's Policies field, enter the name of the policy you just created (e.g., `manage-secret-engines`).
- Click Save.
Now, log out from the root account and log in using your newly created account (ensure that the Username authentication method is selected).
The next step is to create a new secret engine to store secrets:
- Go to Secret Engines -> Enable New Engine.
- Click on KV.
  - For the path, use `federated`.
- Click Enable Engine.
The Vault is now set up correctly. You can retrieve an access token for the current account by clicking the user icon in the menu and selecting Copy Token. Deploy the token to the cluster as a Kubernetes secret by executing:
kubectl create secret generic vault-token --from-literal=token=<token> --namespace=<namespace>
Replace <token> with the copied vault token and <namespace> with the namespace of the deployment.
Note: After adding the vault token, you might need to manually restart the `edc-*` pods, as they will be failing while unable to mount the vault Kubernetes secret.
Two secrets have to be pre-configured in the Vault: the API key for the Control Plane and the password for the Identity Hub participant that will be created. Both should have been generated previously. To create a new secret in the Vault, navigate to Secret Engines, click on the secret engine you just created, and click Create secret.
For the Control Plane API key, the secret path should be `control-plane-apikey`, the key field of the secret data should be set to `content`, and the data field should be set to the Control Plane API secret created before.
For the password, the path should be set to `<participant-name>-password`, where `<participant-name>` is the intended participant name as configured in the participant configuration of the Identity Hub. The key field of the secret data should be set to `content` and the data field to the bcrypt password created before.
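As an alternative to the UI, the same two secrets could be created with the Vault CLI from inside the server pod. This is a sketch only: `<participant-name>` and the secret values are placeholders from your own configuration, and the `federated` mount is the KV engine created above.

```shell
# Path of the participant password secret, derived from the participant name.
PARTICIPANT="<participant-name>"    # placeholder: your configured participant name
PASSWORD_PATH="${PARTICIPANT}-password"
echo "$PASSWORD_PATH"
# Sketch of the CLI equivalent of the UI steps (run inside the Vault pod,
# after `vault login`):
#   vault kv put -mount=federated control-plane-apikey content=<control-plane-api-key>
#   vault kv put -mount=federated "$PASSWORD_PATH" content=<bcrypt-password>
```

Either way, the key inside each secret must be named `content`, as the EDC components read that field.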
The next sections provide instructions for participants to join the dataspace, i.e. have a membership credential issued, as well as instructions for the Dataspace Governance Authority to onboard new members. For this you need the host of your Identity Hub and the API key for the management API. The API key can be found in the Vault under the key `federated-apikey`.
After deploying the stack, you need to become a dataspace member. This can be achieved by contacting the Dataspace Authority and adhering to the dataspace rules. Once onboarded, the Dataspace Authority will send you a membership credential that should be stored in your identity hub.
To store the credential in your Identity Hub, the API key for your account has to be obtained from the Vault, where it can be found under the name `federated-apikey`. Use the following curl command to add the credential to the Identity Hub, replacing any values between `{{}}` according to your configuration and replacing the `verifiableCredentialContainer` example credential with the received credential:
curl --request POST \
--url 'https://{{identityHubManagementApi}}/api/management/v1alpha/participants/ZmVkZXJhdGVk/credentials' \
--header 'x-api-key: {{yourApiKey}}' \
--header 'content-type: application/json' \
--data '{
"id": "123e4567-e89b-12d3-a456-426614174000",
"participantId": "federated",
"verifiableCredentialContainer": {
"rawVc": "eyJraWQiOiJkaWQ6d2ViOmN1c3RvbXMuZmVkZXJhdGVkLWluZnJhc3RydWN0dXJlLm5ldCNjdXN0b21zLWtleSIsImFsZyI6IkVkMjU1MTkifQ.eyJzdWIiOiJkaWQ6d2ViOnRuby5mZWRlcmF0ZWQtaW5mcmFzdHJ1Y3R1cmUubmV0IiwibmJmIjoxNzMzNDAxMDc2LCJpc3MiOiJkaWQ6d2ViOmN1c3RvbXMuZmVkZXJhdGVkLWluZnJhc3RydWN0dXJlLm5ldCIsImV4cCI6MTczMzQwMTEzNiwiaWF0IjoxNzMzNDAxMDc2LCJ2YyI6eyJjcmVkZW50aWFsU3ViamVjdCI6W3siaWQiOiJkaWQ6d2ViOnRuby5mZWRlcmF0ZWQtaW5mcmFzdHJ1Y3R1cmUubmV0IiwibWVtYmVyc2hpcFR5cGUiOiJGdWxsTWVtYmVyIiwid2Vic2l0ZSI6InRuby5mZWRlcmF0ZWQtaW5mcmFzdHJ1Y3R1cmUubmV0IiwiY29udGFjdCI6InRub0BmZWRlcmF0ZWQtaW5mcmFzdHJ1Y3R1cmUubmV0Iiwic2luY2UiOiIyMDIzLTAxLTAxVDAwOjAwOjAwWiJ9XSwiaXNzdWFuY2VEYXRlIjoiMjAyNC0xMi0wNVQxMjoxNzo1Ni40NDU4MjY1NDBaIiwibmFtZSI6bnVsbCwiZGVzY3JpcHRpb24iOm51bGwsImlkIjoiNTc0MDUwYzMtOWQ3Ni00OGFmLTgyNTMtYTQ4MzRmYTNiYTlkIiwidHlwZSI6WyJWZXJpZmlhYmxlQ3JlZGVudGlhbCIsIk1lbWJlcnNoaXBDcmVkZW50aWFsIl0sIkBjb250ZXh0IjpbImh0dHBzOi8vdzNpZC5vcmcvdHJhY3R1c3gtdHJ1c3QvdjAuOCIsImh0dHBzOi8vd3d3LnczLm9yZy8yMDE4L2NyZWRlbnRpYWxzL3YxIl0sImlzc3VlciI6eyJpZCI6ImRpZDp3ZWI6Y3VzdG9tcy5mZWRlcmF0ZWQtaW5mcmFzdHJ1Y3R1cmUubmV0IiwiYWRkaXRpb25hbFByb3BlcnRpZXMiOnt9fSwiZXhwaXJhdGlvbkRhdGUiOiIyMDI3LTA5LTAxVDEyOjE3OjU2LjQ0NTgyNjkwMVoiLCJjcmVkZW50aWFsU3RhdHVzIjpbXX0sImp0aSI6IjM1ZTBhMTRiLWZiMWUtNDg5YS1iNWZmLTg4OGRlZTRmZTZmMSJ9.m_XMAGrJVtUnoqeckXVaWo7FGuabz_2gOl4WNEy8F9CQCCowNCwbw3GU6SRyKHNjE3JRsvcaSBLq9K0vw1PxDQ",
"format": "JWT",
"credential": {
"credentialSubject": [
{
"id": "did:web:tno.federated-infrastructure.net",
"membershipType": "FullMember",
"website": "tno.federated-infrastructure.net",
"contact": "tno@federated-infrastructure.net",
"since": "2023-01-01T00:00:00Z"
}
],
"id": "574050c3-9d76-48af-8253-a4834fa3ba9d",
"type": [
"VerifiableCredential",
"MembershipCredential"
],
"issuer": {
"id": "did:web:customs.federated-infrastructure.net",
"additionalProperties": {}
},
"issuanceDate": "2024-12-05T12:17:56.445826540Z",
"expirationDate": "2027-09-01T12:17:56.445826901Z",
"credentialStatus": [],
"description": null,
"name": null
}
},
"issuancePolicy": {},
"reissuancePolicy": {}
}'
Note: After deploying the credential, you will have to restart the `edc-control-plane` pod so that it can register itself with the Registration Service (it only does this upon startup).
| Issue | Description | Solution |
|---|---|---|
| Incorrect Vault Setup Order | Vault setup (including adding secrets) must be completed before creating the Vault token as a Kubernetes secret. If done in the wrong order, EDC components will fail to function. | Use pgAdmin (or psql) to delete all tables, then restart all EDC pods. |
| Failing EDC pods | Failing EDC pods are normal before completing the Vault setup. | After the Vault setup, make sure to create the `vault-token` secret. |
| Vault Token Refresh Failure | EDC components may sometimes fail to refresh the Vault token, resulting in repeated HTTP 403 status code errors in the logs. | Replace the Vault token with a new one and restart the EDC pods. |