Kubectl access to "stack-2". Refer to this document for preliminary instructions.
Grant the NodeInstanceRole of stack-2 access to bucket-1. Refer to the NodeInstanceRole documentation here for preliminary instructions.
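As a sketch of the grant step, an inline read-write policy for bucket-1 can be attached to the node role with the AWS CLI. The role name below is a placeholder; substitute the actual NodeInstanceRole of stack-2 from its CloudFormation resources.

```shell
# Hypothetical role name; look up the real NodeInstanceRole of stack-2
ROLE_NAME="stack-2-NodeInstanceRole"

# Read-write policy for bucket-1 (list, get, put, delete)
cat > bucket-1-rw-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket-1"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-1/*"
    }
  ]
}
EOF

# Attach the policy inline to the node role
aws iam put-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-name bucket-1-rw \
  --policy-document file://bucket-1-rw-policy.json
```

Attaching the policy to the node role means every pod on the stack-2 nodes inherits the access, which is what the migration job relies on.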
Admin console access to both stacks.
In the configuration section, scroll down to the end.
Once the "Launch migration job" option is checked, a few additional fields become visible.
Here is the list of fields and the values they should be set to.
Bucket name of the existing stack: The name of bucket-1.
Bucket region of the existing stack: The region of stack-1.
ARN value of role with Read-Write Access to the bucket of the existing stack: The ARN value of NodeInstanceRole of stack-1 with read-write access.
Postgres password of the existing stack: The Postgres password of stack-1. This is the same as the release portal password of stack-1, and is available in the AWS CloudFormation output.
Keycloak client secret of the existing stack: You can get this value from the release portal of stack-1. Here is how you can get it:
Visit the release portal of stack-1 and go to the config section.
Copy the value of the field named Keycloak client secret.
Tag value of Cassandra backup to be restored: This value can be retrieved from this path:
s3://bucket-1/backup/cassandra/atlas/cassandra/atlas/78ce5c/. Select the latest folder name. For example, 20210323030010.
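Assuming the AWS CLI is configured with access to bucket-1, the latest backup folder can be picked out directly. Because the folder names are timestamps (yyyymmddHHMMSS), a plain string sort is also a chronological sort:

```shell
# List the Cassandra backup folders and print the most recent one.
# Timestamp-named folders (e.g. 20210323030010/) sort chronologically.
aws s3 ls s3://bucket-1/backup/cassandra/atlas/cassandra/atlas/78ce5c/ \
  | awk '{print $NF}' | tr -d '/' | sort | tail -n 1
```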
Tag value of ElasticSearch backup to be restored: This value is the date on which you want to restore the data, in ddmmyyyy format. For example, 23032021. The backup jobs run at 3:00 AM UTC, so keep that in mind while choosing this value: if the backup job has not yet run for the date you provide, the migration job will fail.
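For reference, the current UTC date in the required ddmmyyyy format can be produced with:

```shell
# Print today's UTC date in ddmmyyyy format, e.g. 23032021.
# Only valid as the tag if today's 3:00 AM UTC backup has already run.
date -u +%d%m%Y
```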
File name of Postgres backup to be restored: This value can be retrieved from this path: s3://bucket-1/postgres/backup/. Select the latest file name. For example, postgres-backup-2021-03-23_03-00-09.gz. Please provide the value along with
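Assuming the AWS CLI has access to bucket-1, the latest Postgres backup file name can be picked out like this (the backup file names embed their timestamps, so a string sort on the listing puts the newest entry last):

```shell
# List the Postgres backups and print the name of the latest one
aws s3 ls s3://bucket-1/postgres/backup/ | sort | tail -n 1 | awk '{print $NF}'
```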
File name of scheduled workflow backup to be restored: This value can be retrieved from this path: s3://bucket-1/backup/argo/. Select the latest file name. For example, scheduled-workflows-backup-2021-03-23_03-00-07.yaml. Please provide the value along with
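Assuming AWS CLI access to bucket-1, the latest workflow backup file name can be listed the same way, since these file names also embed their timestamps:

```shell
# List the scheduled-workflow backups and print the latest one
aws s3 ls s3://bucket-1/backup/argo/ | sort | tail -n 1 | awk '{print $NF}'
```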
Once you have entered the requisite values, scroll down to the bottom of the page and save the config.
Your new product version has now been created. To implement the changes, click on the "Go to the new version" button.
The system will perform the preflight checks. Once they complete without any errors, click "Deploy".
Once the job is deployed from the admin console, we can fetch its logs and track its progress. Here is how you can do it:
Get Kubectl access to stack-2.
Get the name of the migration job pod. Run the command below and copy the name of the pod that starts with migration-job (for example, migration-job-fh24y5).
$ kubectl get pods -n kots
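The pod name can also be captured in one step; a sketch, assuming the pod name begins with migration-job as above:

```shell
# Grab the name of the migration job pod in the kots namespace
POD=$(kubectl get pods -n kots --no-headers | awk '/^migration-job/ {print $1; exit}')
echo "$POD"
```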
Tail the logs of the migration pod.
$ kubectl logs migration-job-fh24y5 -n kots -f
Wait for the job to finish.
Once you get the job completion log from the pod, wait for some time for all the pods to come up. After that, you can access the product and verify it.
Check that all the pods are up, using the command given below.
$ kubectl get pods -A
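To spot stragglers quickly, the listing can be filtered to pods that are not yet healthy; a sketch that keys on the STATUS column of the default `kubectl get pods -A` output:

```shell
# Show only pods whose status is not Running or Completed
kubectl get pods -A --no-headers | awk '$4 != "Running" && $4 != "Completed"'
```

Empty output means every pod is up.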
Verify the product by logging in.
Once everything is completed, follow these steps for cleanup.
Visit the release portal of stack-2.
Go to the config section and uncheck the "Launch migration job" option.
Save the config changes and deploy the release created.
Once the release is successfully deployed, go to the terminal with Kubectl access to stack-2.
[ Optional ] Now run the commands given below to change the admin console password for stack-2.
$ curl https://kots.io/install | bash
$ kubectl kots reset-password kots
The above command will prompt for a new password; enter the release portal password of stack-1.
Verify by logging in on the release portal of stack-2 with the new password.
If you face any issues in following these steps, you can always reach out to us at [email protected].