Summary
In DSE releases prior to 4.7.4, the cluster may be unable to elect a Spark Master.
Symptoms
The /var/log/cassandra/system.log shows repeated occurrences of this message:
INFO [SPARK-WORKER-INIT-0] 2015-10-14 13:54:52,984 SparkWorkerRunner.java:51 - Spark Master not ready at (no configured master)
INFO [SPARK-WORKER-INIT-0] 2015-10-14 13:54:53,984 SparkWorkerRunner.java:51 - Spark Master not ready at (no configured master)
INFO [SPARK-WORKER-INIT-0] 2015-10-14 13:54:54,985 SparkWorkerRunner.java:51 - Spark Master not ready at (no configured master)
Even setting the master manually using dsetool only marks the node as the primary, and it remains inactive:
DC JobTracker
SparkDC1-PRIMARY 10.240.0.20
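For reference, the manual assignment mentioned above is typically attempted with dsetool's job tracker commands, since the Spark Master in DSE 4.x uses the same election mechanism as the job tracker. The commands below are a sketch; the exact subcommands can vary between DSE versions, so verify them against your version's dsetool usage output, and substitute your own node IP for the example 10.240.0.20:
# List the job tracker / Spark Master per datacenter (produces output like the listing above)
dsetool listjt
# Attempt to move the job tracker / Spark Master to a specific analytics node
dsetool movejt 10.240.0.20
In the failure mode described below, this reassignment appears to succeed but the node stays in the primary/inactive state shown above.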
Cause
The most common cause is decommissioning more than half of the Spark cluster. The high availability subsystem is then unable to achieve a quorum of nodes to elect a new Spark Master, and it gives up until the system is restarted (fixed in DSP-6786 - Exceptions in leader manager are swallowed and silently disable leader manager).
Solution
Upgrade to DSE 4.7.4 or later, or restart all of your Spark nodes to reactivate the LeaderManager.
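A minimal sketch of the restart step, assuming a package-based installation managed by the dse service (adjust for tarball installs), run on each Spark node one at a time:
# On each Spark (analytics) node, one node at a time
sudo service dse restart
# On tarball installations, stop and restart in Spark mode instead (paths are illustrative):
# install_location/bin/dse cassandra-stop
# install_location/bin/dse cassandra -k
# Confirm the node rejoined and a master was elected by checking recent log entries
grep "Spark Master" /var/log/cassandra/system.log | tail -n 5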
If this does not resolve the issue, please include the output of 'grep LeaderManager' against the Cassandra system log and 'dsetool ring' in your support request.
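For example, the following commands gather that information (paths assume the default package install location used above):
# LeaderManager entries from the Cassandra system log
grep LeaderManager /var/log/cassandra/system.log > leadermanager.txt
# Ring and workload view of the cluster
dsetool ring > dsetool_ring.txt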