ERROR Message Example
ERROR 10:26:07 Fatal error parsing row: org.codehaus.jackson.JsonGenerationException: Can not start an object, expecting field name
Typically this produces a stack trace similar to the one below:
org.codehaus.jackson.JsonGenerationException: Can not start an object, expecting field name
    at org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480) ~[jackson-core-asl-1.9.2.jar:1.9.2]
    at org.codehaus.jackson.impl.WriterBasedGenerator._verifyValueWrite(WriterBasedGenerator.java:836) ~[jackson-core-asl-1.9.2.jar:1.9.2]
    at org.codehaus.jackson.impl.WriterBasedGenerator.writeStartObject(WriterBasedGenerator.java:273) ~[jackson-core-asl-1.9.2.jar:1.9.2]
    at org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:181) ~[main/:na]
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) ~[na:1.8.0_77]
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[na:1.8.0_77]
    at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[na:1.8.0_77]
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) ~[na:1.8.0_77]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[na:1.8.0_77]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_77]
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[na:1.8.0_77]
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[na:1.8.0_77]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_77]
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[na:1.8.0_77]
    at org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:99) ~[main/:na]
    at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:237) ~[main/:na]
What does this ERROR message mean?
This error means that Cassandra encountered an IOException while serializing a row from an SSTable to JSON (for example, when running the sstabledump / SSTableExport tool). The ERROR message reports the row that failed to serialize along with the IOException itself.
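Jackson's generator enforces well-formed JSON: while an object is open, it expects the next write to be a field name, so calling writeStartObject() at that point raises exactly this exception. The snippet below is a minimal, standalone reproduction against the same Jackson 1.x API shown in the stack trace (a hypothetical demo class, not Cassandra code):

    import java.io.StringWriter;
    import org.codehaus.jackson.JsonFactory;
    import org.codehaus.jackson.JsonGenerator;

    public class ExpectingFieldNameDemo
    {
        public static void main(String[] args) throws Exception
        {
            JsonGenerator json = new JsonFactory().createJsonGenerator(new StringWriter());
            json.writeStartObject(); // open an object; a field name is expected next
            // Starting another object here mimics what happens when a previous
            // object was left unclosed and serialization of the next one begins:
            json.writeStartObject(); // throws JsonGenerationException:
                                     // "Can not start an object, expecting field name"
        }
    }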
Why does this ERROR occur
This error is caused by a bug in older versions of Cassandra that is triggered when a serialized row contains a row-level deletion. The issue is tracked in https://issues.apache.org/jira/browse/CASSANDRA-12418 : on the branch that calls serializeDeletion(row.deletion().time()), the matching call to json.writeEndObject() is never reached.
In the problematic code:

    private void serializeRow(Row row)
    {
        try
        {
            json.writeStartObject();
            String rowType = row.isStatic() ? "static_block" : "row";
            json.writeFieldName("type");
            json.writeString(rowType);
            json.writeNumberField("position", this.currentPosition);

            // Only print clustering information for non-static rows.
            if (!row.isStatic())
            {
                serializeClustering(row.clustering());
            }

            LivenessInfo liveInfo = row.primaryKeyLivenessInfo();
            if (!liveInfo.isEmpty())
            {
                objectIndenter.setCompact(false);
                json.writeFieldName("liveness_info");
                objectIndenter.setCompact(true);
                json.writeStartObject();
                json.writeFieldName("tstamp");
                json.writeString(dateString(TimeUnit.MICROSECONDS, liveInfo.timestamp()));
                if (liveInfo.isExpiring())
                {
                    json.writeNumberField("ttl", liveInfo.ttl());
                    json.writeFieldName("expires_at");
                    json.writeString(dateString(TimeUnit.SECONDS, liveInfo.localExpirationTime()));
                    json.writeFieldName("expired");
                    json.writeBoolean(liveInfo.localExpirationTime() < (System.currentTimeMillis() / 1000));
                }
                json.writeEndObject(); // closes the "liveness_info" object
                objectIndenter.setCompact(false);
            }

            // If this is a deletion, indicate that, otherwise write cells.
            if (!row.deletion().isLive())
            {
                serializeDeletion(row.deletion().time());
                // BUG: the "row" object opened above is never closed on this path.
            }
            else
            {
                json.writeFieldName("cells");
                json.writeStartArray();
                for (ColumnData cd : row)
                {
                    serializeColumnData(cd, liveInfo);
                }
                json.writeEndArray();
                json.writeEndObject(); // closes the "row" object -- only reached via else
            }
        }
        catch (IOException e)
        {
            logger.error("Fatal error parsing row.", e);
        }
    }
There are two calls to json.writeStartObject() (one for the row object itself and one for the nested liveness_info object), so there must be two matching calls to json.writeEndObject(). However, the json.writeEndObject() that closes the row object sits inside the else branch. When if (!row.deletion().isLive()) is true, the else branch is skipped, that json.writeEndObject() never runs, and the row object is left open. The next attempt to start an object then fails with the JsonGenerationException shown above, because the generator is still inside the unclosed row object and expects a field name.
In the code fix, the else branch is removed, so the cells array is written regardless of whether the row carries a deletion, and the closing json.writeEndObject() is placed after the if so that it executes on every path. Both json.writeEndObject() calls are then always balanced against their json.writeStartObject() counterparts.
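Putting that together, the corrected tail of serializeRow() looks roughly like the sketch below. This is a reconstruction based on the description above, not a verbatim copy of the committed patch; consult the CASSANDRA-12418 diff for the exact change.

    // Fixed (sketch): deletion info and cells are serialized independently,
    // and the closing writeEndObject() is reached on every path.
    if (!row.deletion().isLive())
    {
        serializeDeletion(row.deletion().time());
    }
    json.writeFieldName("cells");
    json.writeStartArray();
    for (ColumnData cd : row)
    {
        serializeColumnData(cd, liveInfo);
    }
    json.writeEndArray();
    json.writeEndObject(); // always executed: closes the "row" object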
How to fix this ERROR
This issue is fixed in Cassandra 3.0.9 and 3.11.1 and in later releases, so you need to upgrade to one of those versions (or newer) to resolve it.
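To confirm which release a node is running before and after the upgrade, you can check with nodetool (the version shown here is just an example):

    $ nodetool version
    ReleaseVersion: 3.0.8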