Kubernetes pods terminating with error after job is done

BQ Qiu
3 min read · Apr 20, 2021

I was debugging a behaviour that was not present before we migrated Spark from a custom-provisioned cluster to a Kubernetes cluster. After a job finishes, the pods show an Error status in kubectl. They hang around for a while and are only eventually cleaned up.

Use this command to check pods and their statuses:

kubectl get pods -n <your_namespace>

And you may see something like:

NAME           READY   STATUS   RESTARTS   AGE
<pod-name-1>   0/1     Error    0          <age>
<pod-name-2>   0/1     Error    0          <age>
<pod-name-3>   0/1     Error    0          <age>

To understand what causes the error, I checked the logs from the Spark executors. In my case, Spark is running in standalone mode, where the Spark master is hosted separately and the Spark workers run on Kubernetes, with one Kubernetes pod per Spark executor process. The executor logs for a failed pod can be pulled with something like:
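
kubectl logs <pod-name-1> -n <your_namespace>

The logs show something similar to the following: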

ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver <ip>:<port> disassociated! Shutting down.
WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
...
java.util.concurrent.TimeoutException
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
....
ERROR Utils: Uncaught exception in thread shutdown-hook-0
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
at org.apache.spark.rpc.netty.NettyRpcEnv.cleanup(NettyRpcEnv.scala:324)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
...

For pods that terminated cleanly, the Spark executor logs look like this:

INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
INFO ShutdownHookManager: Deleting directory /var/data/spark-<hash>
INFO ShutdownHookManager: Shutdown hook called.

It seems the problem was that, in the error case, the Spark driver did not properly command a shutdown of the Spark executors. I searched Google as well as the Spark JIRA issues, but no ready solution appeared.

I did find some related suggestions on Google Cloud Platform’s spark-on-k8s-operator, but I am not using GCP or its Spark on k8s operator. It is possible, though, that some of the best practices mentioned in these issues apply to Spark on Kubernetes regardless of cloud platform, as they may pertain to how Spark runs on Kubernetes itself.

Such suggestions include how to release k8s resources at the end of a Spark job, and how to handle a Spark application that stays in Running status and holds up cluster resources even after the main thread has completed. There is also a handy StackOverflow suggestion, even though my issue is pods ending in Error rather than being stuck in Running.

All suggestions point to adding a simple sparkContext.stop() at the end of the job to explicitly tell the Spark driver to end the Spark context and release resources. I did that, and the error pods terminated cleanly with the right logs. The pods also showed Completed status and were cleaned up (removed) much more quickly.
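
As a rough sketch, assuming a typical Scala job submitted against the standalone master (the application name and master URL below are placeholders), the fix is just an explicit stop at the end of the main method:

import org.apache.spark.sql.SparkSession

object MyJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("my-job")                    // placeholder application name
      .master("spark://<master-host>:7077") // standalone master, hosted outside Kubernetes
      .getOrCreate()

    // ... actual job logic ...

    // Explicitly stop the SparkContext so the driver commands the executors
    // to shut down, letting the Kubernetes pods terminate cleanly.
    spark.sparkContext.stop()
  }
}

(Calling spark.stop() on the session has the same effect, since it stops the underlying SparkContext.)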

I am still unsure what the difference is between running Spark standalone mode on a custom-provisioned cluster vs. on a Kubernetes cluster that causes the driver to not signal the end of the job properly to the executors. Note that the Spark driver itself is not hosted on Kubernetes. Could it be that Kubernetes has a mechanism that keeps its pods running until explicitly shut down? This is a question left to ponder.

Do let me know if you have the answer to this question. :)
