
ReadTimeoutException seen when using the Java driver, caused by excessive tombstones

Summary

When reading from a cluster with the Java driver, the user may see a ReadTimeoutException. One possible cause is an excessive number of tombstones.

Symptoms

The customer was trying to read data from some rather large rows in a table with many partitions, and was seeing the following exception:

ERROR - my_keyspace - Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded) 
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded) 
at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69) 
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
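For illustration, the kind of client code that can hit this error looks roughly like the following. This is a minimal sketch using the Java driver; the contact point, keyspace, table and query are assumptions for the example and not the customer's actual code.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.ReadTimeoutException;

public class ReadTimeoutExample {
    public static void main(String[] args) {
        // Contact point, keyspace and table names are illustrative
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");
        try {
            // A read over a wide partition like this is the kind of query that
            // can hit the server-side tombstone_failure_threshold
            ResultSet rs = session.execute("SELECT * FROM my_table WHERE col1 = 'some_key'");
            for (Row row : rs) {
                System.out.println(row);
            }
        } catch (ReadTimeoutException e) {
            // Surfaces on the client as the exception shown above; check the
            // Cassandra system.log on the replicas for the tombstone message
            System.err.println("Read timed out: " + e.getMessage());
        } finally {
            cluster.close();
        }
    }
}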

 

Cause

The customer had a large number of tombstones present in the table (column family), and this was causing the timeout. The following error was seen in the Cassandra system.log file:

ERROR [ReadStage:93968] 2014-12-02 14:16:14,283 SliceQueryFilter.java (line 200) Scanned over 100000 tombstones in my_keyspace.my_table ; query aborted (see tombstone_failure_threshold)

 

Workaround

Increase the following setting in the cassandra.yaml file so that the read query can complete. The default is 100000 (the limit hit in the log message above), and the node must be restarted for the change to take effect. For example:

tombstone_failure_threshold: 200000

Solution

Clean up tombstones by tuning gc_grace_seconds to a shorter period that suits your application, or by using TTLs for certain data. For example, the default gc_grace_seconds is 864000 (10 days). If your data is written with a TTL of 6 days, you might lower gc_grace_seconds to 604800 (7 days) so that tombstones are removed sooner.

Note: if you reduce gc_grace_seconds, be aware that repairs must complete within this window too. For more information, see the following documentation: http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
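As an illustration of the TTL-plus-gc_grace_seconds approach described above, here is a minimal sketch using the Java driver; the contact point, keyspace, table name, values and TTL are assumptions for the example. It writes data with a 6-day TTL (518400 seconds) and lowers gc_grace_seconds to 7 days (604800 seconds):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TombstoneTuningExample {
    public static void main(String[] args) {
        // Contact point, keyspace, table and values are illustrative
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");
        try {
            // Write data that expires after 6 days (518400 seconds)
            session.execute("INSERT INTO example (col1, col2) VALUES ('key1', 'value1') USING TTL 518400");
            // Lower gc_grace_seconds to 7 days (604800 seconds) so tombstones
            // for the expired data become eligible for removal sooner
            session.execute("ALTER TABLE example WITH gc_grace_seconds = 604800");
        } finally {
            cluster.close();
        }
    }
}

The same statements can also be run directly in cqlsh, as in the example below.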

Example of changing gc_grace_seconds on a table:

cqlsh:results> desc table example;

CREATE TABLE example (
  col1 text,
  col2 text,
  PRIMARY KEY ((col1))
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.100000 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.000000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

cqlsh:results> alter table example with gc_grace_seconds = 10000;
cqlsh:results> desc table example;

CREATE TABLE example (
  col1 text,
  col2 text,
  PRIMARY KEY ((col1))
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.100000 AND
  gc_grace_seconds=10000 AND
  index_interval=128 AND
  read_repair_chance=0.000000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};