Cassandra Tombstoning warning and failure thresholds breached

Rohit · Mar 10, 2015 · Viewed 9.1k times

We are running a Titan Graph DB server backed by Cassandra as a persistent store and are hitting Cassandra's tombstone thresholds, which causes our queries to fail or time out periodically as data accumulates. Compaction seems unable to keep up with the rate at which tombstones are being added.

Our use case supports:

  1. High read/write throughput.
  2. High sensitivity to read latency.
  3. Frequent updates to node values in Titan, causing rows to be updated in Cassandra.

Given the above use case, we are already optimizing Cassandra aggressively in the following ways (see the configuration sketch after this list):

  1. Aggressive compaction, using the leveled compaction strategy.
  2. Setting tombstone_compaction_interval to 60 seconds.
  3. Setting tombstone_threshold to 0.01.
  4. Setting gc_grace_seconds to 1800.
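
For concreteness, the table-level portion of this tuning corresponds to CQL options roughly like the following (a minimal sketch: the titan keyspace name is an assumption, and graphindex is the column family from the warning quoted below):

```sql
-- Sketch only: 'titan' is a hypothetical keyspace name; the option
-- values mirror the tuning listed above.
ALTER TABLE titan.graphindex
  WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'tombstone_compaction_interval': '60',
    'tombstone_threshold': '0.01'
  }
  AND gc_grace_seconds = 1800;
```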

Despite the above optimizations, we are still seeing warnings in the Cassandra logs similar to:

[WARN] (ReadStage:7510) org.apache.cassandra.db.filter.SliceQueryFilter: Read 0 live and 10350 tombstoned cells in .graphindex (see tombstone_warn_threshold). 8001 columns was requested, slices=[00-ff], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}

Occasionally, as time progresses, we also see the failure threshold breached, which causes errors.

Our cassandra.yaml file sets tombstone_warn_threshold to 10000 and tombstone_failure_threshold much higher than recommended, at 250000, with no real noticeable benefit.
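
For reference, that corresponds to the following cassandra.yaml excerpt (using the values quoted above):

```yaml
# cassandra.yaml excerpt, with the values described above
tombstone_warn_threshold: 10000
tombstone_failure_threshold: 250000
```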

If there is room for further optimization, any pointers toward the correct configuration would be greatly appreciated. Thanks in advance for your time and help.

Answer

Curtis Allen · Mar 10, 2015

Sounds like the root of your problem is your data model. You've done everything you can to mitigate TombstoneOverwhelmingException. Since your data model requires such frequent updates, causing tombstone creation, an eventually consistent store like Cassandra may not be a good fit for your use case. When we've experienced these kinds of issues, we had to change our data model to better fit Cassandra's strengths.
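
By way of illustration (a hypothetical sketch, not the schema from this question): one common remodel is to make updates append-only, writing each new value as a new clustered row rather than deleting and rewriting the old one, so reads never have to skip over tombstones.

```sql
-- Hypothetical append-only layout: an update becomes an INSERT of a
-- new version row, so no tombstone is written on the hot path.
CREATE TABLE kv_versions (
    key        text,
    updated_at timeuuid,
    value      text,
    PRIMARY KEY (key, updated_at)
) WITH CLUSTERING ORDER BY (updated_at DESC);

-- The latest value is the first row in clustering order; stale
-- versions can be pruned in bulk, off the latency-sensitive read path.
SELECT value FROM kv_versions WHERE key = 'some-key' LIMIT 1;
```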

More about deletes: http://www.slideshare.net/planetcassandra/8-axel-liljencrantz-23204252 (slides 34-39)