Fix Crashing Pod

What is the CrashLoopBackOff error?

CrashLoopBackOff is a very common error encountered when deploying applications to Kubernetes. A pod stuck in CrashLoopBackOff starts, crashes shortly after it is deployed and run, and is then restarted by the kubelet, repeating the cycle with an increasing back-off delay. It usually means the pod is not starting correctly.
πŸ“œ Prerequisites
Only kubectl access is required.

How to resolve the CrashLoopBackOff error?

1. Run the following command to check the pod status:
   kubectl get pods -n <namespace>
2. Once you have narrowed down the pods in CrashLoopBackOff, run the following command:
   kubectl describe pod <podName> -n <namespace>
   - Check the Events section to see whether any of the probes (liveness, readiness, startup) are failing.
   - Check the Events section for an OOMKilled event.
   - Look in the Status section of the pod and spot whether 'Error' is displayed along with an exit code.

The output you get will be similar to the example below, and this information will help you get to the root of the error.
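For illustration, a pod stuck in this state typically shows up in the kubectl get pods output like this (the pod name, restart count, and age below are made-up placeholders):

```
NAME                     READY   STATUS             RESTARTS   AGE
myapp-6fbd4bd7c-xv2qp    0/1     CrashLoopBackOff   5          10m
```

The RESTARTS count keeps climbing as the kubelet retries the container with an increasing back-off delay.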
This error can occur for a variety of reasons, but a few are commonly seen:
Probe failure: The kubelet uses liveness, readiness, and startup probes to keep checks on the container. If the liveness or the startup probe fails, the container is restarted at that point. To solve this, first check that the probes have been properly configured, and ensure that all the specs (endpoint, port, SSL config, timeout, command) are correctly specified.
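As a reference, here is a minimal sketch of a correctly wired liveness probe in a pod spec; the pod name, image, path, port, and timings are illustrative placeholders, not values from this runbook:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-example            # hypothetical pod name
spec:
  containers:
    - name: app
      image: myorg/myapp:1.0     # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /healthz         # must be an endpoint the app actually serves
          port: 8080             # must match the port the container listens on
        initialDelaySeconds: 10  # give the app time to start before probing
        timeoutSeconds: 2        # probe fails if no response within 2 seconds
        failureThreshold: 3      # restart after 3 consecutive failures
```

If the path, port, or timings do not match what the application really exposes, the probe keeps failing and the kubelet keeps restarting the container, producing exactly this crash loop.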
Out of memory (OOM) failure: Every pod has a specified amount of memory, and when it tries to consume more than what has been allocated to it, the pod will keep crashing. This can occur if the pod is allocated less memory than it actually requires to run, or if there is an error in the application that makes it keep consuming memory while it runs. To solve this error, you can increase the memory allocated to the pod, which does the trick in usual cases. But if the pod is consuming excessive amounts of memory, you will have to look into the application for the cause; if it is a Java application, check the heap configuration.
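As a sketch, a container's memory allocation is set through resource requests and limits in the pod spec; the relevant fragment is shown below, and the sizes are illustrative values to be tuned to the application's real footprint:

```yaml
spec:
  containers:
    - name: app
      image: myorg/myapp:1.0   # placeholder image
      resources:
        requests:
          memory: "256Mi"      # what the scheduler reserves for the container
        limits:
          memory: "512Mi"      # exceeding this limit gets the container OOMKilled
```

Raising the limit helps when the allocation was simply too small; if the container blows past any reasonable limit, the problem is in the application itself.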
Application failure: At times, the application within the container itself keeps crashing because of some error, and that can cause the pod to crash repeatedly. In this case, you will have to look at the application code and debug it. Run the following command to see the logs of the previous, crashed container instance:
kubectl logs -n <namespace> <podName> -c <containerName> --previous
Based on the log output, take the desired action, then restart the affected pods with the command below. If the pod is managed by a Deployment or ReplicaSet, its controller will automatically create a replacement:
kubectl delete pods <podName> -n <namespace> --force --grace-period=0