A brief overview of how the release process works at Atlan
How to Update a Release
Follow the steps below to update a release.
STEP 1: Log into the release portal
Visit the release portal endpoint, and enter the password. For AWS, the release URL and password are available in your Cloud Setup output.
STEP 2: Check for updates
Once you log into the release portal, click on the "Version History" tab in the top navigation bar. Then click on "Check for updates" to get the latest release.
If a new release is available, it will appear in your portal.
STEP 3: Click on "Deploy" to release
To perform a release, click the "Deploy" button on the "Version History" tab.
At this point, the current cluster will be updated to the new version, and the Deployed status will appear next to that version.
How to roll back a release
In case of a bug or installation failure with a new release, you can roll back by clicking the "Rollback" button on the release portal.
How to enable auto-release
If you don't want to manually deploy the latest releases, there is an option to enable "Auto Release". Once enabled, the latest releases will be automatically fetched and deployed at a specified time in your local time zone.
To enable auto-release, follow these three steps:
Go to the configuration section in the Atlan Admin Console.
Check the "Activate Auto Release" option. It is enabled by default.
Specify the cron expression to set the time when you want your releases auto-deployed. Ideally, choose a time when the number of active users on the platform is lowest. By default, the time is set to 3:00 am.
👀 Note: The time zone will be automatically picked up, based on the region where the cluster is deployed.
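As an illustration, the default 3:00 am daily schedule written as a standard five-field cron expression (assuming standard cron syntax, which the Helper jobs section below also uses) looks like this:

```
# ┌ minute  ┌ hour  ┌ day-of-month  ┌ month  ┌ day-of-week
  0         3       *               *        *
```

This runs once per day at 3:00 am in the cluster's local time zone.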
Introduction to the KOTS configuration page
This section walks through the KOTS Admin Console configuration page. Below is a list describing each configuration variable.
1. Deployment configurations
Cloud Platform: This specifies the cloud platform where the infrastructure is deployed. Allowed values:
Deployment Type: This specifies the type of environment of deployment. Allowed values:
Deployment Strategy: This specifies the kind of deployment strategy being followed. Allowed values:
Airgapped: When the airgapped option is checked, the following variable is visible:
URL of Private Docker Registry: This contains the URL for the private Docker registry to use for Docker images (ECR).
2. Domain configurations
Domain Name: This contains the domain name for the product.
Master password for services: This contains the password to be used for various services and internal purposes (e.g. Grafana default password, Keycloak password, and/or Postgres password).
Release Portal Domain: This contains the domain name for the Admin Console Portal.
Restrict Product to Certain IP Ranges: If enabled, the product domain will only be reachable from a specific IP or range of IPs. When this option is checked, the following variable is visible:
IP range to whitelist: This contains the IP or range of IPs from which the product is accessible. You can provide multiple, comma-separated IP ranges.
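For example, a value whitelisting a single office IP and an internal network range (hypothetical addresses, in CIDR notation) might look like:

```
203.0.113.25/32,10.0.0.0/16
```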
3. TLS configuration
Enable TLS: This specifies whether to enable TLS for the product domain. Allowed values:
Enable: When this option is selected, the following options are visible.
Private Key: The private key generated for the product's domain. Upload the file.
Certificate: The certificate generated for the product's domain. Upload the file.
Disable: When this option is selected, the following option is visible:
Use ACM for SSL: Check this option if you are using ACM to enable SSL on the product domain. When this option is enabled, the following variable is visible:
ARN of ACM Certificate: This contains the ARN value of the ACM certificate. Click here for documentation on setting up SSL using ACM.
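An ACM certificate ARN follows the standard AWS ARN format; for example (hypothetical region, account ID, and certificate ID):

```
arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
```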
4. Storage configurations
Object Storage Type: This specifies the type of object storage being used. Allowed values:
AWS S3: When the above option is selected, the following field is visible:
AWS Bucket Name: This contains the name of the AWS S3 bucket to be used for storing the images, logs, service backups, and other internal data.
5. Storage configuration for cluster backups
Bucket Name for Backup: This contains the name of the AWS S3 bucket to be used for storing cluster backups, specifically Velero backups.
Region of Backup Bucket: This contains the region where the above S3 bucket is being hosted. This is usually the same as the region where the stack is deployed.
IAM Role to Use to Push Backups to the Bucket: This contains the ARN value of the IAM role, which has read-write access to the backup S3 bucket. Velero will assume this role to push backups to the bucket.
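A minimal IAM policy granting the role read-write access to the backup bucket might look like the following sketch (the bucket name `atlan-cluster-backups` is a placeholder; substitute your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::atlan-cluster-backups/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::atlan-cluster-backups"
    }
  ]
}
```

Note that object-level actions apply to the `/*` resource while bucket-level actions apply to the bucket ARN itself.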
6. Cloud section
EKS Cluster Name: This contains the name of the EKS cluster where the product is running. This is deployed along with the stack.
AWS Region of Deployment: This contains the region where the EKS cluster is being hosted. This is the same as the region where the stack is deployed.
Default Warehouse for Running Snowflake DQ Profile: This contains the name of the warehouse for running the Snowflake data quality profiling.
7. Product configurations
Keycloak Client Secret: This contains the UUID that will be used as a client secret by Keycloak.
Rows Limit for Query: This contains the maximum number of rows allowed in query results.
Snowflake Queries CSV Dump Path: This contains the S3 path where Snowflake query dumps will be stored.
Enable Query: When set to true, this enables querying in the product.
Enable Query Cache: When set to true, this enables query caching in the product.
Enable Preview of Assets: When set to true, this enables asset previews in the product.
Minimum Role Allowed to Query: This specifies the minimum role allowed to run queries in the product. Allowed values:
Names of Integrations Where Query Should Be Disabled: This contains a comma-separated list of integrations where querying should be disabled.
Enable Business Metadata UI: This specifies whether to enable the business metadata UI in the product.
8. Advanced options
This section specifies whether to show advanced options. When enabled, the following fields are visible:
Deploy NGINX Ingress: This specifies whether to deploy the NGINX-Ingress Helm chart.
Deploy AWS Node Termination Handler: This specifies whether to deploy the AWS Node Termination Handler Helm chart.
Deploy Cluster Autoscaler: This specifies whether to deploy the Cluster Autoscaler Helm chart.
Deploy Stakater Reloader: This specifies whether to deploy the Stakater Reloader Helm chart.
Custom Compute Configurations for Atlas: Check this to use a custom resource configuration for Atlas, Elasticsearch, and Cassandra. When this option is enabled, the following fields are available:
Atlas CPU: The CPU resource to use as a limit and request for Atlas.
Atlas Memory: The memory resource to use as a limit and request for Atlas.
Elasticsearch CPU: The CPU resource to use as a limit and request for Elasticsearch.
Elasticsearch Memory: The memory resource to use as a limit and request for Elasticsearch.
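These fields take standard Kubernetes resource quantities. For example (illustrative values, not sizing recommendations):

```
Atlas CPU: 2000m          # 2 CPU cores (1000m = 1 core)
Atlas Memory: 8Gi
Elasticsearch CPU: 1000m
Elasticsearch Memory: 4Gi
```

Because the same value is used for both the limit and the request, the pods are scheduled with guaranteed access to exactly these resources.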
9. Helper jobs
Activate Auto-Release Job: This specifies whether to enable the auto-release job. This job will auto-deploy the latest releases at a specified time in the local time zone. When this option is checked, the following field is visible:
Cron for Auto Release: The cron expression to use in the auto-release job for deploying the latest releases. Note that this cron will be executed in the time zone where the EKS cluster is running.
Schedule of the Request Cleanup Job: This contains the cron schedule for the request cleanup job.
10. Danger zone
Launch Reset Atlan Instance Job: When this option is checked, a Kubernetes job will be launched to reset the Atlan instance. This will delete all the PVCs and crawlers in the product, and the product will revert to a fresh installation. Read more about this here.
👀 Note: After a reset, the Atlan backups will still be available in the S3 bucket.
Launch Migration Job: When this option is checked, a Kubernetes job will be launched that can be used to migrate from an existing instance to this instance.