A guide to accessing logs and troubleshooting

The logs of any pod can be accessed in three ways: 1. using Grafana and Loki, 2. from the AWS S3 bucket, 3. using the kubectl CLI tool.

1. Using Grafana and Loki

The logs of all pods are stored locally in Loki for a period of 72 hours. Logs from the past 72 hours can therefore be viewed through Grafana.

Steps to view logs from Grafana:

  • Go to the Grafana URL of the respective instance and open the Explore section.

  • Select Loki as the data source.

  • Type in the query, e.g. {pod="atlas-1"}

  • You can also change the time range for which you want to see logs from the top-right corner.
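A few more example Loki queries you can type into Explore; the label names and values here are illustrative, so adjust them to the pods and namespaces in your cluster:

```
{pod="atlas-1"}                  # all logs from the pod atlas-1
{namespace="atlan"} |= "error"   # only log lines containing the string "error"
```

The `|=` line filter narrows results to matching lines, which is useful when a pod is noisy and you are hunting for a specific failure.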


2. From AWS S3 bucket

Logs older than 72 hours can be accessed from the S3 bucket that is launched along with the atlan-stack through the CloudFormation template. The bucket name is the same as the stack name, e.g. atlan-361996608881

  • Path for logs of argo jobs:

  • Path for logs of any other pod:


    Download the .gz file and extract it to get the logs of the pods.
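The download-and-extract step can be sketched with the AWS CLI as below. The object key here is purely hypothetical (the real paths are listed above per pod type), and the `aws s3 cp` / `gunzip` calls are commented out since they need valid AWS credentials and a real key:

```shell
# Bucket name matches the CloudFormation stack name (example from this guide).
BUCKET="atlan-361996608881"
# Hypothetical object key -- substitute the real path for your pod's logs.
KEY="argo/my-job/main.log.gz"

# Download the compressed log file (requires configured AWS CLI credentials):
# aws s3 cp "s3://${BUCKET}/${KEY}" .

# gunzip replaces the .gz file with the extracted plain-text log:
# gunzip "$(basename "$KEY")"

echo "s3://${BUCKET}/${KEY}"
```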

3. Using kubectl cli tool

Using kubectl you can fetch the logs of any pod currently present in the cluster. You can also use the kubectl CLI to debug various issues with the Kubernetes cluster and the Atlan product. You should be familiar with Kubernetes and the kubectl CLI before proceeding.

To install the tool, refer to this documentation.

Once you have the kubectl CLI installed on your machine, follow this documentation to configure it to access the EKS cluster where Atlan is deployed.

To check the logs of any pod, run:

kubectl logs <pod-name> -n <namespace>

To follow (stream) the logs, use:

kubectl logs <pod-name> -n <namespace> -f
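Beyond the two commands above, a few standard kubectl flags are often useful when troubleshooting. The pod and namespace names below are placeholders; the actual `kubectl logs` calls are commented out since they require cluster access:

```shell
POD="atlas-1"        # hypothetical pod name -- substitute your pod
NAMESPACE="default"  # substitute the namespace the pod runs in

# Only the last 100 lines, to avoid dumping a huge log:
# kubectl logs "$POD" -n "$NAMESPACE" --tail=100

# Logs from the previous container instance, e.g. after a crash/restart:
# kubectl logs "$POD" -n "$NAMESPACE" --previous

# Only logs from the last hour:
# kubectl logs "$POD" -n "$NAMESPACE" --since=1h

echo "kubectl logs $POD -n $NAMESPACE --tail=100"
```

`--previous` is particularly handy when a pod is crash-looping, since the current container's log may be empty while the crashed one holds the actual error.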