DataStax Help Center

Hadoop mapreduce job fails with: unable to create new native thread


By default, DSE Hadoop places no limit on the number of threads spawned when executing a job, and for medium to large jobs this can exceed the OS limits.


The job fails with exceptions like the following:

37392 ERROR [Thread-68] 2014-06-23 09:48:09,364 (line 198) Exception in thread Thread[Thread-68,5,main] 
37393 java.lang.OutOfMemoryError: unable to create new native thread
37401 FATAL [IPC Server handler 21 on 42528] 2014-06-23 09:48:09,624 (line 3557) Task: attempt_201406201730_0001_m_000008_0 - Killed : unable to create new native thread
37405 INFO [pool-10-thread-1] 2014-06-23 09:48:09,799 (line 542) IPC Server listener on 42528: readAndProcess threw exception Connection reset by peer. Count of bytes read: 0


Without a throttle mechanism to limit the number of threads, they can grow without bound and eventually overwhelm the server.
OS-level limits are initially set in /etc/security/limits.conf. If inspecting the limits of the DSE java process shows unlimited or very high values
for max open files, then a cap should be set on the number of connections to avoid depleting the server's resources.
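Before changing any settings, you can verify the effective limits. A minimal sketch, assuming a Linux host and that the DSE process can be found by the pattern `CassandraDaemon` (adjust the pattern for your installation):

```shell
# Limits of the current shell (inherited by processes started from it)
ulimit -u    # max user processes; each thread counts against this
ulimit -n    # max open file descriptors

# Limits of the running DSE java process (process-name pattern is an
# assumption; /proc/<pid>/limits is Linux-specific)
DSE_PID=$(pgrep -f CassandraDaemon | head -n1)
if [ -n "$DSE_PID" ]; then
    grep -Ei "open files|processes" /proc/"$DSE_PID"/limits
fi
```

If "Max open files" or "Max processes" reports unlimited (or a very large value), the throttle described below becomes the effective safeguard.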


Set the following in cassandra.yaml:

  • rpc_max_threads: 2048 (default: unlimited). Set a limit that is within the OS setting for max open files.
  • rpc_server_type: hsha. This allows multiplexing of the rpc thread pool connections across the different clients.
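As a cassandra.yaml fragment, the two settings above would look like this (2048 is the example value from this article; choose a value that fits within your OS max-open-files limit):

```yaml
# cassandra.yaml
rpc_server_type: hsha    # half-sync/half-async: multiplexes client
                         # connections across a bounded thread pool
rpc_max_threads: 2048    # cap the RPC thread pool (default is unlimited)
```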

Then perform a rolling restart of the nodes for the settings to take effect.
