Amazon Redshift runtime on Kubernetes
Install Amazon Redshift Self-Deployed Runtime App on your Kubernetes cluster.
Prerequisites
Before you begin, make sure you have:
- `kubectl` installed and configured with administrative access to your target Kubernetes cluster
- Helm version 3.0 or later installed on your deployment machine
- A Docker Hub Personal Access Token (PAT) from Atlan
- Object storage: AWS S3, Google Cloud Storage, or Azure Blob Storage with read/write permissions
- Secret store access: AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault, or Kubernetes Secrets with read permissions
Generate client credentials
The Self-Deployed Runtime app needs OAuth client credentials to authenticate to your Atlan tenant. Follow these steps to generate them:
1. Generate an API token by following the steps in the API access documentation.

2. Create client credentials for app authentication using the Atlan API. Replace `{{tenant}}` with your Atlan tenant name and `{{App Name}}` with your application identifier, and replace `<API token>` with the token you generated in step 1:

   ```shell
   curl --location 'https://{{tenant}}.atlan.com/api/service/oauth-clients' \
   --header 'Content-Type: application/json' \
   --header 'Authorization: <API token>' \
   --data '{
       "displayName": "{{App Name}}-agent-client",
       "description": "Client for agent oauth for {{App Name}}",
       "scopes": ["events-app-permission-scope","temporal-app-permissions-scope"]
   }'
   ```

   Example API response:

   ```json
   {
       "clientId": "oauth-client-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
       "clientSecret": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
       "createdAt": "1756112939595",
       "createdBy": "john.doe",
       "description": "Client for agent oauth for {{App Name}}",
       "displayName": "{{App Name}}-agent-client",
       "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
       "tokenExpirySeconds": 600
   }
   ```

3. Save the `clientId` and `clientSecret` values securely. You need them for the deployment configuration.
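If you are scripting this step, the request body can be assembled before sending it with any HTTP client. A minimal sketch — the helper name `build_oauth_client_payload` is hypothetical; the fields mirror the curl payload above:

```python
def build_oauth_client_payload(app_name: str) -> dict:
    """Build the JSON body for the oauth-clients API call shown above."""
    return {
        "displayName": f"{app_name}-agent-client",
        "description": f"Client for agent oauth for {app_name}",
        "scopes": [
            "events-app-permission-scope",
            "temporal-app-permissions-scope",
        ],
    }

payload = build_oauth_client_payload("redshift")
print(payload["displayName"])  # redshift-agent-client
```

Serialize the returned dict as JSON and POST it to the `/api/service/oauth-clients` endpoint with your API token in the `Authorization` header.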
Prepare deployment environment
Start by setting up the Docker environment and downloading the necessary deployment files.
1. Use the Personal Access Token (PAT) provided by Atlan to authenticate with Docker Hub:

   ```shell
   docker login -u atlanhq
   # When prompted for a password, enter the PAT provided by Atlan
   ```

   A "Login Succeeded" message confirms successful authentication.
2. Download the Helm charts:

   ```shell
   helm pull oci://registry-1.docker.io/atlanhq/redshift-app --version 0.1.0 --untar
   ```

   This command downloads the following files:

   ```
   ./Chart.yaml
   ./templates
   ./templates/servicemonitor.yaml
   ./templates/deployment.yaml
   ./templates/service.yaml
   ./templates/dapr-components-cm.yaml
   ./templates/hpa.yaml
   ./templates/service-account.yaml
   ./templates/extra-manifests.yaml
   ./templates/_helpers.tpl
   ./values.yaml
   ```
3. Optional: Depending on organizational requirements, you may need to replicate images from Docker Hub to a private image repository. The specific steps vary by organization; here's one approach:

   - Pull the required connector image via the Docker CLI:

     ```shell
     docker pull atlanhq/atlan-redshift-app:main-20b504dabcd
     ```

     This command requires the same Docker Hub PAT from Atlan support that you used in step 1 for authentication.

   - Push the image to your enterprise registry. Note down the repository name and image tag.
Configure values.yaml
Customize the deployment by modifying the values.yaml file in the redshift-app directory:
Configure general settings
Edit the values.yaml file and update the following settings:
1. Update the container image settings if the Kubernetes cluster is configured with an enterprise-specific private image repository:

   ```yaml
   image:
     repository: atlanhq/atlan-redshift-app # Update if using a private registry
     tag: main-20b504dabcd # Update if using a different tag
     pullPolicy: Always
   ```
2. Update the Atlan tenant URL and app credentials:

   ```yaml
   global:
     # Base URLs
     atlanBaseUrl: "<tenant-name>.atlan.com"
     # Authentication - credentials from the prerequisites
     clientId: "<client-id>"
     clientSecret: "<client-secret>"
   ```

   Replace `<client-id>` and `<client-secret>` with the values you generated in the Generate client credentials section.
3. Update the deployment name. This name appears in the Atlan UI when configuring workflows and helps identify this specific app deployment:

   ```yaml
   env:
     - name: ATLAN_DEPLOYMENT_NAME
       value: "redshift-k8s-prod" # Replace with your preferred deployment name
   ```
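Putting the general settings together, an overrides file combining the fragments above might look like this (all values here are placeholders):

```yaml
image:
  repository: atlanhq/atlan-redshift-app
  tag: main-20b504dabcd
  pullPolicy: Always

global:
  atlanBaseUrl: "acme.atlan.com" # <tenant-name>.atlan.com
  clientId: "oauth-client-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  clientSecret: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

env:
  - name: ATLAN_DEPLOYMENT_NAME
    value: "redshift-k8s-prod"
```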
Configure object storage
The Self-Deployed Runtime needs an object store for reading and writing files. Configure the object storage that matches your environment:
Dapr supports additional object stores that aren't listed below. For more information, see the Dapr object store documentation for other configurations.
- AWS S3
- Google Cloud Storage
- Azure Blob Storage
1. Locate the `objectstore` attribute in `values.yaml`.
2. Add the AWS S3 configuration. For more information, see the AWS S3 binding spec.

   ```yaml
   objectstore:
     enabled: true
     spec:
       type: bindings.aws.s3
       version: v1
       metadata:
         - name: accessKey # optional, leave this empty for IAM authentication
           value: ""
         - name: secretKey # optional, leave this empty for IAM authentication
           value: ""
         - name: bucket # required, name of the bucket where the application can write
           value: "<bucket-name>"
         - name: region # required, region of the bucket where the application can write
           value: "<bucket-region>"
         - name: forcePathStyle
           value: "true"
   ```
1. Locate the `objectstore` attribute in `values.yaml`.
2. Add the Google Cloud Storage configuration. For more information, see the GCP Storage Bucket binding spec.

   ```yaml
   objectstore:
     enabled: true
     spec:
       type: bindings.gcp.bucket
       version: v1
       metadata:
         - name: bucket
           value: "your-gcs-bucket-name"
         - name: type
           value: "service_account"
         - name: project_id
           value: "your-gcp-project-id"
   ```
1. Locate the `objectstore` attribute in `values.yaml`.
2. Add the Azure Blob Storage configuration. For more information, see the Azure Blob Storage binding spec.

   ```yaml
   objectstore:
     enabled: true
     spec:
       type: bindings.azure.blobstorage
       version: v1
       metadata:
         - name: accountName
           value: "your-storage-account-name"
         - name: accountKey
           value: "your-storage-account-key"
         - name: containerName
           value: "your-container-name"
   ```
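Before installing the chart, you can sanity-check that the object store spec carries the metadata keys its binding type needs. A small sketch — the required-key table below is an assumption derived from the configurations above, not an exhaustive Dapr reference:

```python
# Required metadata keys per binding type, based on the examples above (assumption).
REQUIRED_KEYS = {
    "bindings.aws.s3": {"bucket", "region"},
    "bindings.gcp.bucket": {"bucket", "project_id"},
    "bindings.azure.blobstorage": {"accountName", "accountKey", "containerName"},
}

def missing_metadata(spec: dict) -> set:
    """Return required metadata keys that are absent or empty in a component spec."""
    present = {m["name"] for m in spec.get("metadata", []) if m.get("value")}
    return REQUIRED_KEYS.get(spec["type"], set()) - present

spec = {
    "type": "bindings.aws.s3",
    "version": "v1",
    "metadata": [
        {"name": "bucket", "value": "my-bucket"},
        {"name": "region", "value": ""},  # forgot to fill this in
    ],
}
print(missing_metadata(spec))  # {'region'}
```

Load the `objectstore.spec` mapping from your edited `values.yaml` (for example with a YAML parser) and pass it to `missing_metadata` before deploying.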
Configure secret storage
The Self-Deployed Runtime fetches secrets from a secret store to connect to the source systems, and the secret store references are used when configuring workflows. Configure the secret store that aligns with your security infrastructure:
Dapr supports additional secret stores that aren't listed below. For more information, see the Dapr secret store documentation for other configurations.
- AWS Secrets Manager
- Azure Key Vault
- GCP Secret Manager
- HashiCorp Vault
- Kubernetes Secrets
1. Locate the `secretstore` attribute in `values.yaml`.
2. Add the AWS Secrets Manager configuration. For more information, see the AWS Secrets Manager secret store documentation.

   ```yaml
   secretstore:
     enabled: true
     spec:
       type: secretstores.aws.secretmanager
       version: v1
       metadata:
         - name: region # required, region in which the secret is hosted
           value: <secret-region>
         # Needed if IAM authentication is not used
         - name: accessKey
           value: ""
         - name: secretKey
           value: ""
   ```
1. Locate the `secretstore` attribute in `values.yaml`.
2. Add the Azure Key Vault configuration. For more information, see the Azure Key Vault secret store documentation.

   ```yaml
   secretstore:
     enabled: true
     spec:
       type: secretstores.azure.keyvault
       version: v1
       metadata:
         - name: vaultName
           value: "<your-keyvault-name>"
         - name: azureTenantId
           value: "<your-tenant-id>"
         - name: azureClientId
           value: "<your-client-id>"
         - name: azureClientSecret
           value: "<your-client-secret>"
         - name: azureEnvironment
           value: "AZUREPUBLICCLOUD" # Optional: AZUREPUBLICCLOUD, AZURECHINACLOUD, AZUREUSGOVERNMENTCLOUD, AZUREGERMANCLOUD
   ```

Azure Key Vault supports multiple authentication methods:

- Client secret: use `azureClientSecret` with the tenant ID and client ID
- Certificate: use `azureCertificateFile` instead of a client secret
- Managed identity: omit the authentication fields and use an Azure managed identity

For detailed authentication setup, see the Authenticating to Azure documentation.
1. Locate the `secretstore` attribute in `values.yaml`.
2. Add the Google Cloud Secret Manager configuration. For more information, see the GCP Secret Manager secret store documentation.

   ```yaml
   secretstore:
     enabled: true
     spec:
       type: secretstores.gcp.secretmanager
       version: v1
       metadata:
         - name: type
           value: "service_account"
         - name: project_id
           value: "<project-id>"
         - name: private_key_id
           value: "<private-key-id>"
         - name: private_key
           value: "<private-key>"
         - name: client_email
           value: "<client-email>"
         - name: client_id
           value: "<client-id>"
         - name: auth_uri
           value: "https://accounts.google.com/o/oauth2/auth"
         - name: token_uri
           value: "https://oauth2.googleapis.com/token"
         - name: auth_provider_x509_cert_url
           value: "https://www.googleapis.com/oauth2/v1/certs"
         - name: client_x509_cert_url
           value: "https://www.googleapis.com/robot/v1/metadata/x509/<client-email>"
   ```
1. Locate the `secretstore` attribute in `values.yaml`.
2. Add the HashiCorp Vault configuration. For more information, see the HashiCorp Vault secret store documentation.

   ```yaml
   secretstore:
     enabled: true
     spec:
       type: secretstores.hashicorp.vault
       version: v1
       metadata:
         - name: vaultAddr
           value: "[vault_address]" # Optional. Default: "https://127.0.0.1:8200"
         - name: caCert # Optional. This or caPath or caPem
           value: "[ca_cert]"
         - name: caPath # Optional. This or caCert or caPem
           value: "[path_to_ca_cert_file]"
         - name: caPem # Optional. This or caCert or caPath
           value: "[encoded_ca_cert_pem]"
         - name: skipVerify # Optional. Default: false
           value: "[skip_tls_verification]"
         - name: tlsServerName # Optional.
           value: "[tls_config_server_name]"
         - name: vaultTokenMountPath # Required if vaultToken is not provided. Path to the token file.
           value: "[path_to_file_containing_token]"
         - name: vaultToken # Required if vaultTokenMountPath is not provided. Token value.
           value: "[vault_token]"
         - name: vaultKVPrefix # Optional. Default: "dapr"
           value: "[vault_prefix]"
         - name: vaultKVUsePrefix # Optional. Default: "true"
           value: "[true/false]"
         - name: enginePath # Optional. Default: "secret"
           value: "secret"
         - name: vaultValueType # Optional. Default: "map"
           value: "map"
   ```
1. Locate the `secretstore` attribute in `values.yaml`.
2. Add the Kubernetes native secret store configuration. For more information, see the Kubernetes secrets documentation.

   ```yaml
   secretstore:
     enabled: true
     spec:
       type: secretstores.kubernetes
       version: v1
       metadata:
         - name: defaultNamespace
           value: "default" # Optional: default namespace to retrieve secrets from
         - name: kubeconfigPath
           value: "/path/to/kubeconfig" # Optional: path to the kubeconfig file
   ```

When Dapr is deployed to a Kubernetes cluster, a secret store named `kubernetes` is automatically provisioned. You can use this native Kubernetes secret store without creating, deploying, or maintaining a component configuration file.
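If you choose the native Kubernetes secret store, source credentials live as ordinary Kubernetes Secrets in the configured namespace and are referenced by name when you configure workflows. For example, a Redshift credential could be stored like this (the secret and key names here are hypothetical; use whatever your workflow configuration expects):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redshift-credentials # reference this name in the workflow configuration
  namespace: default # must match the defaultNamespace configured above
type: Opaque
stringData:
  username: awsuser
  password: example-password
```

Apply it with `kubectl apply -f <file>` in the namespace the secret store reads from.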
Deploy app
Follow these steps to deploy the app:

1. Install the Helm chart:

   ```shell
   helm install redshift-agent redshift-app -f redshift-app/values.yaml -n NAMESPACE
   ```

   Replace `NAMESPACE` with your target Kubernetes namespace.

The deployment typically takes a few minutes to complete, depending on factors such as Kubernetes cluster resource availability and private image repository download time.
Verify deployment
Follow these steps to verify deployment:
Verify cluster
1. Check the pod status:

   ```shell
   kubectl get pods -n NAMESPACE
   ```

   The output appears similar to:

   ```
   NAME                                           READY   STATUS    RESTARTS   AGE
   redshift-agent-redshift-app-5dff95cd85-cvk5z   1/1     Running   0          2m
   ```

   The pod status shows `Running` with `1/1` ready containers.

2. Check the logs of the running pod:

   ```shell
   kubectl logs -n NAMESPACE -l app.kubernetes.io/name=redshift-app --tail=50 -f
   ```

   Look for these key log messages that confirm a successful deployment:

   - `Uvicorn running on http://0.0.0.0:8000` - web server started
   - `Workflow engine initialized` - workflow processing ready
   - `dapr initialized. Status: Running` - Dapr runtime active
   - `Starting worker with task queue: atlan-redshift-DEPLOYMENT_NAME` - worker process started
   - `GET /server/ready 200` - health check endpoint responding
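If you are scripting the verification, the log check can be automated by scanning captured log output for the startup markers. A sketch — the marker list mirrors the log messages above; how you capture the logs (for example, the `kubectl logs` command shown) is up to you:

```python
# Substrings that indicate a healthy startup, taken from the log messages above.
MARKERS = [
    "Uvicorn running on",
    "Workflow engine initialized",
    "dapr initialized. Status: Running",
    "Starting worker with task queue",
]

def missing_markers(log_text: str) -> list:
    """Return the startup markers not yet present in the captured log output."""
    return [m for m in MARKERS if m not in log_text]

# Example: feed it the captured output of the kubectl logs command above.
sample = "INFO dapr initialized. Status: Running\n"
print(missing_markers(sample))  # the three markers not yet seen
```

An empty result means all startup markers were found; a non-empty result lists what to keep waiting for before checking the Atlan UI.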
Verify via Atlan UI
Verify runtime registration in Atlan:
1. Sign in to your Atlan tenant as an administrator (for example, `https://tenant-name.atlan.com`).
2. Navigate to Workflows > Agent.
3. Search for your deployment name.
4. Confirm the agent status shows as Active.

Agent registration and status may take a couple of minutes to reflect in the Atlan UI.
Next steps
- Configure Secure Agent for workflow execution: Set up workflow execution settings and permissions for your deployed agent
- Set up Amazon Redshift crawler: Create and configure a crawler to extract metadata from your Amazon Redshift data warehouse