Summary
In Cassandra 2.1 (DataStax Enterprise 4.8) and earlier, inserting an out-of-range value into a timestamp column succeeds, but the resulting cell produces an error that prevents that row from being read or deleted.
Symptoms
Consider the following table:
CREATE TABLE markc.ts (
key text PRIMARY KEY,
ts timestamp
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
The table initially contains two valid rows. The following INSERT stores a timestamp value that is too large to represent:
cqlsh:markc> select * from ts ;
key | ts
-----+--------------------------
k1 | 2017-02-13 10:52:49+0000
k2 | 2017-02-13 10:52:50+0000
(2 rows)
cqlsh:markc> insert into ts (key, ts ) VALUES ( 'k3', 139694880629824000);
Reading the value back produces the following error:
cqlsh:markc>
cqlsh:markc> select * from ts ;
Traceback (most recent call last):
File "/usr/bin/cqlsh", line 1124, in perform_simple_statement
rows = self.session.execute(statement, trace=self.tracing_enabled)
File "/usr/share/dse/cassandra/lib/cassandra-driver-internal-only-2.7.2-2fc8a2b.zip/cassandra-driver-2.7.2-2fc8a2b/cassandra/cluster.py", line 1602, in execute
result = future.result()
File "/usr/share/dse/cassandra/lib/cassandra-driver-internal-only-2.7.2-2fc8a2b.zip/cassandra-driver-2.7.2-2fc8a2b/cassandra/cluster.py", line 3347, in result
raise self._final_exception
OverflowError: days=1616838896; must have magnitude <= 999999999
Cause
The timestamp field is not validated for this overflow when it is written, so the bad value is stored successfully. The error only surfaces later, when the value is read back and converted to a date on the client side.
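The traceback comes from the Python driver used by cqlsh: a CQL timestamp is milliseconds since the Unix epoch, and converting it to a Python datetime fails once the day count exceeds timedelta's limit of 999999999 days. The sketch below reproduces the failure with the standard library only (it is a simplified illustration, not the driver's exact conversion code):

```python
from datetime import datetime, timedelta

# The raw value stored in the ts cell, interpreted as
# milliseconds since the Unix epoch.
raw_ms = 139694880629824000

# Number of whole days that value represents.
days, _ = divmod(raw_ms, 86_400_000)
print(days)  # 1616838896 -- matches the days= value in the traceback

# Converting it to a datetime overflows timedelta's day limit,
# which is the OverflowError cqlsh reports.
try:
    datetime(1970, 1, 1) + timedelta(milliseconds=raw_ms)
except OverflowError as exc:
    print(exc)
```

The day count exceeds 999999999, so the conversion raises OverflowError exactly as shown in the cqlsh output above.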
Workaround
The row can be overwritten with a correct value:
cqlsh:markc> insert into markc.ts (key, ts ) VALUES ( 'k3', 1486983170000);
cqlsh:markc> select * from ts ;
key | ts
-----+--------------------------
k1 | 2017-02-13 10:52:49+0000
k3 | 2017-02-13 10:52:50+0000
k2 | 2017-02-13 10:52:50+0000
(3 rows)
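The replacement value 1486983170000 is simply the intended wall-clock time expressed as milliseconds since the Unix epoch, which is what the CQL timestamp type stores. A quick way to compute such a value, using only the Python standard library:

```python
from datetime import datetime, timezone

# The intended wall-clock time for row k3, in UTC.
dt = datetime(2017, 2, 13, 10, 52, 50, tzinfo=timezone.utc)

# CQL timestamps store milliseconds since the Unix epoch.
ms = int(dt.timestamp() * 1000)
print(ms)  # 1486983170000
```

Any value produced this way for a reasonable date stays far below the overflow threshold, so the row reads back cleanly.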
Solution
This problem was reported in CASSANDRA-10625. Upgrading to DSE 5.0 (Cassandra 3.0.7) or later fixes it. You can reduce risk and effort by adopting a continual upgrade strategy. See Upgrading DataStax Enterprise.