I have a pod that is in CrashLoopBackOff. The reason is clear: I have to migrate a database schema before the container can start successfully again.
I understand that the pod is supposed to crash-loop, but I need to pause this. If I run `kubectl rollout pause` and then `kubectl delete pod`, the pod terminates, yet the Deployment's ReplicaSet still starts a new one.
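Concretely, the sequence I tried looks like this (the Deployment and pod names are placeholders for my actual resources):

```shell
# Pause the rollout, hoping the controller stops managing the pods
kubectl rollout pause deployment/my-app

# Delete the crash-looping pod ("my-app-6d4cf56db6-xxxxx" stands in for the real pod name)
kubectl delete pod my-app-6d4cf56db6-xxxxx

# A moment later, a replacement pod is already back in CrashLoopBackOff
kubectl get pods -l app=my-app
```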
Is there any way to simply suspend the Deployment or ReplicaSet, so that I get a chance to repair the infrastructure without the pods constantly crash-looping and/or getting recreated?