Migrate Atlan Stack
A step-by-step guide on how to migrate one Atlan stack to another stack in AWS
πŸ‘€ Note: Here's how this article refers to the different stacks and buckets in the cloning/migration process:
    stack-1: Old stack
    stack-2: New stack
    bucket-1: S3 bucket for the old stack
    bucket-2: S3 bucket for the new stack

πŸ“œ Prerequisites

    Kubectl access to stack-2. Refer to this document for preliminary instructions.
    Access to bucket-1 granted to the NodeInstanceRole of stack-2. Refer to the NodeInstanceRole documentation here for preliminary instructions.
    Admin console access to both stacks.
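
The second prerequisite (granting stack-2's NodeInstanceRole read access to bucket-1) can be sketched with the AWS CLI. This is a minimal sketch, assuming an inline policy is acceptable; the role name and policy name below are placeholders, not values from your actual stacks:

```shell
# Placeholder names -- substitute the real NodeInstanceRole name from stack-2.
ROLE_NAME="stack-2-NodeInstanceRole"
BUCKET="bucket-1"

# Minimal read-only policy for the old stack's bucket.
POLICY=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::${BUCKET}", "arn:aws:s3:::${BUCKET}/*"]
  }]
}
EOF
)

# Sanity-check the JSON locally before attaching it.
printf '%s' "$POLICY" | python3 -m json.tool > /dev/null && echo "policy JSON ok"

# Attach as an inline policy (run from a shell with admin AWS credentials):
#   aws iam put-role-policy --role-name "$ROLE_NAME" \
#     --policy-name allow-bucket-1-read --policy-document "$POLICY"
```

The migration job also needs write access for some operations, so widen the `Action` list if the job reports permission errors.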

πŸ› οΈ A Step-by-Step Guide to Restore the Stack

STEP 1: Log in to the release portal of stack-2.

Release Console Login Page

STEP 2: Go to the config section

In the config section, scroll down to the end of the page.
Config section

STEP 3: Check the "Launch migration job" option

Launch migration job

STEP 4: Provide values to the fields required by the job for migration

Once the option is checked, a few additional fields will become visible.
Here is the list of variables and the values they require.
    Bucket name of the existing stack: The name of bucket-1.
    Bucket region of the existing stack: The region of stack-1.
    ARN value of role with Read-Write Access to the bucket of the existing stack: The ARN value of NodeInstanceRole of stack-1 with read-write access.
    Postgres password of the existing stack: The Postgres password of stack-1. This is the same as the release portal password of stack-1; you can find it in the AWS CloudFormation output.
    Keycloak client secret of the existing stack: You can get this value from the release portal of stack-1. Here is how you can get it:
    Visit the release portal of stack-1 and go to the config section.
    Copy the value of the field named Keycloak client secret.
Keycloak credentials
    Tag value of Cassandra backup to be restored: This value can be retrieved from this path:
    s3://bucket-1/backup/cassandra/atlas/cassandra/atlas/78ce5c/. Select the latest folder name. For example, 20210323030010.
    Tag value of ElasticSearch backup to be restored: The date on which you want to restore the data, in ddmmyyyy format. For example, 23032021. The backup jobs run at 3:00 AM UTC, so keep that in mind when choosing this value: if you provide a date for which the backup job has not yet run, the migration job will fail.
    File name of Postgres backup to be restored: This value can be retrieved from the path s3://bucket-1/postgres/backup/. Select the latest file name, for example postgres-backup-2021-03-23_03-00-09.gz. Provide the value including the .gz extension.
    File name of scheduled workflow backup to be restored: This value can be retrieved from the path s3://bucket-1/backup/argo/. Select the latest file name, for example scheduled-workflows-backup-2021-03-23_03-00-07.yaml. Provide the value including the .yaml extension.
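
Picking the latest backup name from the S3 listings above can be scripted rather than eyeballed. A small sketch, assuming the standard `aws s3 ls` output format (object name in the last column); the sample listing below is illustrative data, not real backups:

```shell
# Hypothetical helper: print the most recent backup name from `aws s3 ls` output.
# In practice you would pipe the live listing, e.g.:
#   aws s3 ls s3://bucket-1/postgres/backup/ | latest_backup
latest_backup() {
  # `aws s3 ls` prints "DATE TIME SIZE NAME"; the timestamped names sort
  # lexicographically, so the last one after sorting is the newest.
  awk '{print $NF}' | sort | tail -n 1
}

sample_listing='2021-03-22 03:00:11   1048576 postgres-backup-2021-03-22_03-00-10.gz
2021-03-23 03:00:10   1048576 postgres-backup-2021-03-23_03-00-09.gz'

printf '%s\n' "$sample_listing" | latest_backup
# prints postgres-backup-2021-03-23_03-00-09.gz
```

The same filter works for the Cassandra tag folders and the Argo workflow backups, since their names also embed sortable timestamps.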

STEP 5: Deploy the job

Once you have entered the requisite values, scroll down to the bottom of the page and save the config.
Your new product version has now been created. To implement the changes, click on the "Go to the new version" button.
New Version Window
The system will run the preflight checks. Once they pass without any errors, click on "Deploy".
Preflight Checks Window

STEP 6: Track the job progress

Once the job is deployed from the admin console, we can fetch its logs and track its progress. Here is how you can do it:
    Get Kubectl access to stack-2.
    Get the name of the migration job pod. Run the command below and copy the name of the pod starting with migration-job, for example migration-job-fh24y5.
    $ kubectl get pods -n kots
    Tail the logs of the migration pod.
    $ kubectl logs migration-job-fh24y5 -n kots -f
    Wait for the job to finish.
πŸ‘€ Note: The migration process can take 20-30 minutes to finish completely.
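
Instead of watching the logs by hand, you can poll the pod's phase until the job finishes. A hedged sketch: `get_phase` is a stub standing in for the real kubectl query (shown in the comment), and the pod name is the example from above:

```shell
# Stub for illustration; in a live cluster, replace the body with:
#   kubectl get pod migration-job-fh24y5 -n kots -o jsonpath='{.status.phase}'
get_phase() { echo "Succeeded"; }

# Poll every 30 seconds until the pod reports a terminal phase.
wait_for_job() {
  while true; do
    phase=$(get_phase)
    case "$phase" in
      Succeeded) echo "migration job finished"; return 0 ;;
      Failed)    echo "migration job failed" >&2; return 1 ;;
      *)         sleep 30 ;;
    esac
  done
}

wait_for_job
# prints migration job finished
```

The loop exits non-zero on failure, so it can be dropped into a larger automation script as-is.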

STEP 7: Verify the product

Once you see the job completion log from the pod, wait a few minutes for all the pods to come up. After that, you can access the product and verify it.
    Check whether all the pods are up, using the command given below.
$ kubectl get pods -A
    Verify the product by logging in.
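
The pod check above can be automated with a small filter. A sketch assuming the default `kubectl get pods -A` column layout (STATUS is the fourth column); the sample below stands in for live cluster output:

```shell
# Count pods that are not yet Running (or Completed). In practice, pipe the
# real listing into the filter:
#   kubectl get pods -A --no-headers | not_ready
not_ready() {
  awk '$4 != "Running" && $4 != "Completed" {n++} END {print n+0}'
}

sample='kots        migration-job-fh24y5   0/1   Completed   0   25m
default     atlas-0                1/1   Running     0   10m
default     keycloak-0             0/1   Pending     0   2m'

printf '%s\n' "$sample" | not_ready
# prints 1 (keycloak-0 is still Pending)
```

A result of 0 means every pod is up and the product is ready to verify by logging in.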

STEP 8: Cleanup

Once everything has completed, follow these steps for cleanup.
    Visit the release portal of stack-2.
    Go to the config section and uncheck the "Launch migration job" option.
    Save the config changes and deploy the release created.
    Once the release is successfully deployed, go to the terminal with Kubectl access to stack-2.
    [ Optional ] Run the commands given below to change the admin console password for stack-2.
    $ curl https://kots.io/install | bash
    $ kubectl kots reset-password kots
    The reset-password command will prompt for a new password; enter the release portal password of stack-1.
    Verify by logging in on the release portal of stack-2 with the new password.
If you face any issues in following these steps, you can always reach out to us at [email protected].