Install on Kubernetes
Install App on a Kubernetes cluster.
Prerequisites
Before you begin, make sure you have:
- kubectl installed and configured with administrative access to your target Kubernetes cluster
- Helm version 3.0 or later installed on your deployment machine
- A Docker Hub Personal Access Token (PAT) from Atlan
- App-specific Helm chart name
  - The {app-helm-chart} name is provided in the connector-specific installation guide for self-deployed runtime. Replace the {app-helm-chart} placeholder with the one specified in your app's Install self-deployed runtime documentation.
- Object storage: AWS S3, Google Cloud Storage, or Azure Blob Storage with read/write permissions
- Secret store access: AWS Secret Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault, or Kubernetes Secrets with read permissions
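You can optionally run a quick sanity check before proceeding. This is a minimal sketch, assuming the kubectl, helm, and docker CLIs are already on your PATH and kubectl points at the target cluster:
# Confirm client tooling and cluster access
kubectl version --client                      # kubectl is installed
kubectl auth can-i '*' '*' --all-namespaces   # "yes" indicates administrative access
helm version --short                          # expect v3.x
docker --version                              # Docker CLI for authenticating to Docker Hub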
Generate client credentials
OAuth client credentials are required for the App to authenticate successfully to the Atlan tenant. Follow these steps to generate client credentials:
1. Generate the API token by following the steps in the API access documentation.
2. Create client credentials using the Atlan API.
- Replace {{tenant}} with your Atlan tenant name.
- Replace <API token> with the token you generated in step 1.
- Replace {{App Name}} with any descriptive name.
curl --location 'https://{{tenant}}.atlan.com/api/service/oauth-clients' \
--header 'Content-Type: application/json' \
--header 'Authorization: <API token>' \
--data '{
"displayName": "{{App Name}}-agent-client",
"description": "Client for agent oauth for {{App Name}}",
"scopes": ["events-app-permission-scope","temporal-app-permissions-scope"]
}'
Example API response:
{
"clientId": "oauth-client-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"clientSecret": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"createdAt": "1756112939595",
"createdBy": "john.doe",
"description": "Client for agent oauth for {{App Name}}",
"displayName": "{{App Name}}-agent-client",
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tokenExpirySeconds": 600
}
Save the clientId and clientSecret values securely. You need these for the deployment configuration.
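One way to keep a copy of these values out of plain-text notes is to store them in a Kubernetes secret. This is illustrative only; the secret name is hypothetical, and the chart in this guide still reads the credentials from values.yaml:
# Illustrative: keep the generated credentials in a Kubernetes secret for safekeeping
kubectl create secret generic atlan-app-oauth-client \
  --from-literal=clientId='oauth-client-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  --from-literal=clientSecret='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  -n {namespace}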
Prepare deployment environment
Start by setting up the Docker environment and downloading the necessary deployment files.
1. Use the Personal Access Token (PAT) provided by Atlan to authenticate with Docker Hub:
docker login -u atlanhq
# When prompted for password, enter the PAT provided by Atlan
A "Login Succeeded" message confirms successful authentication.
2. Download the Helm charts:
helm pull oci://registry-1.docker.io/atlanhq/{app-helm-chart} --untar
This command downloads the following files:
./Chart.yaml
./templates
./templates/servicemonitor.yaml
./templates/deployment.yaml
./templates/service.yaml
./templates/dapr-components-cm.yaml
./templates/hpa.yaml
./templates/service-account.yaml
./templates/extra-manifests.yaml
./templates/_helpers.tpl
./values.yaml
3. Optional: Depending on organizational requirements, you may need to replicate images from Docker Hub to a private image repository. The specific steps vary by organization; here's one approach:
- Pull the required connector image via the Docker CLI:
docker pull atlanhq/{app-image-name}:{app-image-tag}
The command requires the same Docker Hub PAT from Atlan support that you used in step 1 for authentication.
- Push the image to your enterprise's registry. Note down the repository name and image tag generated.
Configure values.yaml
Copy the {app-helm-chart}/values.yaml file to a convenient location (for example /opt).
cp {app-helm-chart}/values.yaml /opt/values.yaml
Customize the deployment by making the changes described below in the copied file.
Configure general settings
Follow these steps to edit the values.yaml file and update the details:
1. Optional: Update container image settings if the Kubernetes cluster is configured with an enterprise-specific private image repository:
image:
repository: atlanhq/{app-image-name} # Update only if using private registry
tag: {app-image-tag} # Update only if using private registry
pullPolicy: Always
2. Update Atlan tenant URL and app credentials:
global:
# Base URLs
atlanBaseUrl: "<tenant-name>.atlan.com"
# ClientId/Secret generated as part of "Generate client credentials"
clientId: "<client-id>"
clientSecret: "<client-secret>"
- Replace <client-id> and <client-secret> with the values you generated in the Generate client credentials section.
3. Update the name to identify the deployment. This name appears in the Atlan UI when configuring workflows and helps identify this specific App deployment.
deploymentName: "<deployment-name>" # Choose a deployment name for easier identification
Configure object storage
The Self-Deployed Runtime needs an object store for reading and writing files. Configure the object storage that matches your environment:
Dapr supports additional object stores that aren't listed below. For other configurations, see the Dapr object store documentation.
- AWS S3
- Google Cloud Storage
- Azure Blob Storage
- Locate the objectstore attribute in values.yaml.
- Add AWS S3 configuration. For more information, see the AWS S3 Binding Spec.
objectstore:
enabled: true
spec:
type: bindings.aws.s3
version: v1
metadata:
- name: accessKey #optional, leave this empty for iam authentication
value: ""
- name: secretKey #optional, leave this empty for iam authentication
value: ""
- name: bucket #required, name of the bucket where application can write
value: "<bucket-name>"
- name: region #required, region of the bucket where application can write
value: "<bucket-region>"
- name: forcePathStyle
value: "true"
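Before deploying, you can optionally confirm that the S3 bucket is reachable with the same credentials or IAM role the runtime will use. A minimal check with the AWS CLI, assuming it's installed and configured:
# Verify the bucket exists and is accessible (exit code 0 means success)
aws s3api head-bucket --bucket <bucket-name> --region <bucket-region>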
- Locate the objectstore attribute in values.yaml.
- Add Google Cloud Storage configuration. For more information, see the GCP Storage Bucket binding spec.
objectstore:
enabled: true
spec:
type: bindings.gcp.bucket
version: v1
metadata:
- name: bucket
value: "your-gcs-bucket-name"
- name: type
value: "service_account"
- name: project_id
value: "your-gcp-project-id"
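To optionally confirm access to the GCS bucket ahead of deployment, a quick check with the Google Cloud SDK, assuming gsutil is installed and authenticated with the same service account:
# Verify the bucket exists and the service account can see it
gsutil ls -b gs://your-gcs-bucket-name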
- Locate the objectstore attribute in values.yaml.
- Add Azure Blob Storage configuration. For more information, see the Azure Blob Storage binding spec.
objectstore:
enabled: true
spec:
type: bindings.azure.blobstorage
version: v1
metadata:
- name: accountName
value: "your-storage-account-name"
- name: accountKey
value: "your-storage-account-key"
- name: containerName
value: "your-container-name"
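To optionally confirm the container is reachable with these credentials, a quick check with the Azure CLI, assuming az is installed:
# Verify the container exists and the account key grants access
az storage container show \
  --name your-container-name \
  --account-name your-storage-account-name \
  --account-key your-storage-account-key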
Configure secret storage
Self-Deployed Runtime fetches secrets from a secret store to connect to the source systems. The secret store references are used to configure the workflow. Configure the secret store that aligns with your security infrastructure:
Dapr supports additional secret stores that aren't listed below. For other configurations, see the Dapr secret store documentation.
- AWS Secret Manager
- Azure Key Vault
- GCP Secret Manager
- HashiCorp Vault
- Kubernetes Secrets
- Environment Variables
- Locate the secretstore attribute in values.yaml.
- Add AWS Secret Manager configuration. For more information, see AWS Secrets Manager.
secretstore:
enabled: true
spec:
type: secretstores.aws.secretmanager
version: v1
metadata:
- name: region # required, region in which secret is hosted
value: <secret-region>
# Needed if IAM authentication is not used
- name: accessKey
value: ""
- name: secretKey
value: ""
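You can optionally confirm that a secret the workflows will reference is readable with the same credentials or IAM role. A minimal check with the AWS CLI; <secret-name> is a placeholder for whichever secret you plan to reference:
# Verify the secret can be read from the configured region
aws secretsmanager get-secret-value --secret-id <secret-name> --region <secret-region>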
- Locate the secretstore attribute in values.yaml.
- Add Azure Key Vault configuration. For more information, see the Azure Key Vault secret store.
secretstore:
enabled: true
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "<your-keyvault-name>"
- name: azureTenantId
value: "<your-tenant-id>"
- name: azureClientId
value: "<your-client-id>"
- name: azureClientSecret
value: "<your-client-secret>"
- name: azureEnvironment
value: "AZUREPUBLICCLOUD" # Optional: AZUREPUBLICCLOUD, AZURECHINACLOUD, AZUREUSGOVERNMENTCLOUD, AZUREGERMANCLOUD
Azure Key Vault supports multiple authentication methods:
- Client Secret: Use azureClientSecret with tenant ID and client ID
- Certificate: Use azureCertificateFile instead of client secret
- Managed Identity: Omit authentication fields and use Azure managed identity
For detailed authentication setup, see the Authenticating to Azure documentation.
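To optionally confirm the service principal or managed identity can read secrets from the vault, a quick check with the Azure CLI; <secret-name> is a placeholder for a secret you plan to reference:
# Verify a secret in the vault is readable
az keyvault secret show --vault-name <your-keyvault-name> --name <secret-name>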
- Locate the secretstore attribute in values.yaml.
- Add Google Cloud Secret Manager configuration. For more information, see GCP Secret Manager.
secretstore:
enabled: true
spec:
type: secretstores.gcp.secretmanager
version: v1
metadata:
- name: type
value: "service_account"
- name: project_id
value: "<project-id>"
- name: private_key_id
value: "<private-key-id>"
- name: private_key
value: "<private-key>"
- name: client_email
value: "<client-email>"
- name: client_id
value: "<client-id>"
- name: auth_uri
value: "https://accounts.google.com/o/oauth2/auth"
- name: token_uri
value: "https://oauth2.googleapis.com/token"
- name: auth_provider_x509_cert_url
value: "https://www.googleapis.com/oauth2/v1/certs"
- name: client_x509_cert_url
value: "https://www.googleapis.com/robot/v1/metadata/x509/<client-email>"
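To optionally confirm the service account can read secrets in the project, a quick check with the gcloud CLI; <secret-name> is a placeholder for a secret you plan to reference:
# Verify the latest version of a secret is readable
gcloud secrets versions access latest --secret=<secret-name> --project=<project-id>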
- Locate the secretstore attribute in values.yaml.
- Add HashiCorp Vault configuration. For more information, see HashiCorp Vault.
secretstore:
enabled: true
spec:
type: secretstores.hashicorp.vault
version: v1
metadata:
- name: vaultAddr
value: "[vault_address]" # Optional. Default: "https://127.0.0.1:8200"
- name: caCert # Optional. This or caPath or caPem
value: "[ca_cert]"
- name: caPath # Optional. This or CaCert or caPem
value: "[path_to_ca_cert_file]"
- name: caPem # Optional. This or CaCert or CaPath
value: "[encoded_ca_cert_pem]"
- name: skipVerify # Optional. Default: false
value: "[skip_tls_verification]"
- name: tlsServerName # Optional.
value: "[tls_config_server_name]"
- name: vaultTokenMountPath # Required if vaultToken not provided. Path to token file.
value: "[path_to_file_containing_token]"
- name: vaultToken # Required if vaultTokenMountPath not provided. Token value.
value: "[vault_token]"
- name: vaultKVPrefix # Optional. Default: "dapr"
value: "[vault_prefix]"
- name: vaultKVUsePrefix # Optional. default: "true"
value: "[true/false]"
- name: enginePath # Optional. default: "secret"
value: "secret"
- name: vaultValueType # Optional. default: "map"
value: "map"
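To optionally confirm the token can read from the configured KV engine, a quick check with the Vault CLI; <secret-name> is a placeholder and the path assumes the default enginePath of secret:
# Verify the token can read a secret from the KV engine
export VAULT_ADDR="[vault_address]"
export VAULT_TOKEN="[vault_token]"
vault kv get secret/<secret-name>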
- Locate the secretstore attribute in values.yaml.
- Add Kubernetes native secret storage. For more information, see Kubernetes secrets.
secretstore:
enabled: true
spec:
type: secretstores.kubernetes
version: v1
metadata:
- name: defaultNamespace
value: "default" # Optional: Default namespace to retrieve secrets from
- name: kubeconfigPath
value: "/path/to/kubeconfig" # Optional: Path to kubeconfig file
When Dapr is deployed to a Kubernetes cluster, a secret store named kubernetes is automatically provisioned. You can use this native Kubernetes secret store without needing to create, deploy, or maintain a component configuration file.
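For example, a source credential that workflows will later reference can be created as a native Kubernetes secret; the secret and key names below are illustrative:
# Create a secret in the namespace that defaultNamespace points to
kubectl create secret generic <source-credentials> \
  --from-literal=username='<source-username>' \
  --from-literal=password='<source-password>' \
  -n default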
- Locate the secretstore attribute in values.yaml.
- Add local environment variables as the spec. For more information, see Local Environment Variables.
secretstore:
enabled: true
spec:
type: secretstores.local.env
version: v1
Deploy app
Follow these steps to deploy the App:
1. Install the Helm chart to deploy your app based on the values.yaml file that was copied and modified:
helm install {app-release-name} {app-helm-chart} -f /opt/values.yaml -n {namespace}
- Replace {app-release-name} with your preferred Helm release name.
- Replace {namespace} with your target Kubernetes namespace.
The deployment process typically takes a few minutes to complete, depending on factors such as Kubernetes cluster resource availability and private image registry download time.
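You can optionally confirm the release before moving on to verification. helm status reports the release state and any notes from the chart:
# Confirm the Helm release deployed successfully
helm status {app-release-name} -n {namespace}
helm list -n {namespace}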
Verify app
Follow these steps to verify deployment:
Verify deployment status on cluster
1. Check pod status:
kubectl get pods -n {namespace}
The output appears similar to:
NAME                                                  READY   STATUS    RESTARTS   AGE
{app-release-name}-{app-helm-chart}-5dff95cd85-cvk5z  1/1     Running   0          2m
The pod status shows Running with 1/1 ready containers.
2. Verify logs for the running pod:
kubectl logs -n {namespace} -l app.kubernetes.io/name={app-helm-chart} --tail=50 -f
Example application logs: Look for these key log messages that confirm successful deployment:
- Uvicorn running on http://0.0.0.0:8000 - Web server started
- Workflow engine initialized - Workflow processing ready
- dapr initialized. Status: Running - Dapr runtime active
- Starting worker with task queue: atlan-{app-name}-DEPLOYMENT_NAME - Worker process started
- GET /server/ready 200 - Health check endpoint responding
Verify registration via Atlan UI
Once the App is successfully deployed, it communicates with the Atlan tenant and registers itself. Verify that App registration is successful in Atlan:
- Sign in to your Atlan tenant as an administrator (for example, https://tenant-name.atlan.com).
- Navigate to Workflows > Agent.
- Look for an entry with {App name}-{Deployment Name}.
- Confirm the status shows as Active.
Agent registration and status take a couple of minutes to reflect in the Atlan UI.
Next steps
- Configure Secure Agent for workflow execution: Set up workflow execution settings and permissions for your deployed agent