How to do basic aggregation with DynamoDB?

prem kumar · May 24, 2017 · Viewed 11.7k times

How is aggregation achieved with DynamoDB? MongoDB and Couchbase have map-reduce support.

Let's say we are building a tech blog where users can post articles, and articles can be tagged.

user
{
    id : 12345,
    name : "John",
    ...
}

article
{
    id : 789,
    title: "dynamodb use cases",
    author : 12345, // user id
    tags : ["dynamodb","aws","nosql","document database"]
}

In the user interface we want to show, for the current user, each tag and its article count.

How to achieve the following aggregation?

{
    userid : 12345,
    tag_stats:{
        "dynamodb" : 3,
        "nosql" : 8
    }
}

We will provide this data through a REST API, and it will be called frequently, since this information is shown on the app's main page.

  • I can think of extracting all documents and doing the aggregation at the application level, but I worry that this would exhaust my read capacity units.
  • I could use tools like EMR, Redshift, BigQuery, or AWS Lambda, but I think those are meant for data-warehousing purposes.

I would like to know of other, better ways to achieve this. How do people run simple dynamic queries like these, with DynamoDB as the primary data store, while keeping cost and response time in check?

Answer

Ivan Mushketyk · Jul 29, 2017

Long story short: DynamoDB does not support this. It's not built for this use case. It's intended for quick, low-latency data access, and it simply does not offer any aggregation functionality.

You have three main options:

  • Export DynamoDB data to Redshift or EMR Hive. Then you can execute SQL queries against the exported data. The benefit of this approach is that it consumes RCUs only once, during the export, but you will be working with stale data.

  • Use the DynamoDB connector for Hive and query DynamoDB directly. Again, you can write arbitrary SQL queries, but in this case they access the live data in DynamoDB. The downside is that every query consumes read capacity.

  • Maintain aggregated data in a separate table using DynamoDB Streams. For example, you can have a table with UserId as the partition key and a nested map of tags and counts as an attribute. On every update to your original data, DynamoDB Streams will trigger a Lambda function (or some code on your own hosts) to update the aggregate table. This is the most cost-efficient method, but you will need to implement additional code for each new query.
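To make the third option concrete, here is a minimal sketch of the stream-processing side in Python. It assumes an article table shaped like the question's example (an `author` number and a `tags` list), and that stream records arrive in DynamoDB's typed JSON format with `NewImage`/`OldImage`. The function name `tag_deltas` and the aggregate-table layout are illustrative, not part of any AWS API:

```python
from collections import Counter

def tag_deltas(records):
    """Compute per-user tag count changes from a batch of DynamoDB
    stream records for the article table.

    Returns {user_id: Counter({tag: +/-n})}. A real Lambda handler
    would then apply each delta to the aggregate table, e.g. with an
    ADD update expression (boto3 sketch, not executed here):

        table.update_item(
            Key={"userid": user_id},
            UpdateExpression="ADD tag_stats.#t :d",
            ExpressionAttributeNames={"#t": tag},
            ExpressionAttributeValues={":d": delta})
    """
    deltas = {}
    for rec in records:
        image = rec["dynamodb"]
        new = image.get("NewImage", {})  # absent on REMOVE events
        old = image.get("OldImage", {})  # absent on INSERT events
        # Attributes are in DynamoDB's typed JSON: {"N": "..."}, {"L": [...]}
        user = (new or old)["author"]["N"]
        new_tags = {t["S"] for t in new.get("tags", {}).get("L", [])}
        old_tags = {t["S"] for t in old.get("tags", {}).get("L", [])}
        c = deltas.setdefault(user, Counter())
        for tag in new_tags - old_tags:
            c[tag] += 1  # tag added to an article
        for tag in old_tags - new_tags:
            c[tag] -= 1  # tag removed from an article
    return deltas
```

Because the handler only applies increments and decrements, the aggregate table stays current without ever rescanning the article table.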

Of course, you can extract the data at the application level and aggregate it there, but I would not recommend it. Unless you have a small table, you will need to think about throttling, using only part of your provisioned capacity (you want aggregation to consume, say, 20% of your RCUs, not 100%), and distributing the work among multiple workers.
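If you do go the application-level route anyway, the aggregation itself is trivial; the hard part is the throttled scan. A minimal sketch (the `scan_all` pagination helper in the comment assumes a boto3 `Table` resource; the page size and pause are illustrative, and sleeping is only a crude way to cap RCU consumption):

```python
from collections import Counter

def aggregate_tags(items):
    """Count tags per author across article items (plain dicts,
    i.e. already deserialized from DynamoDB's typed JSON)."""
    stats = {}
    for item in items:
        c = stats.setdefault(item["author"], Counter())
        c.update(item.get("tags", []))
    return stats

# Throttled full-table scan sketch (boto3, not executed here):
#
# def scan_all(table, page_size=100, pause=0.5):
#     kwargs = {"Limit": page_size}
#     while True:
#         page = table.scan(**kwargs)
#         yield from page["Items"]
#         if "LastEvaluatedKey" not in page:
#             break
#         kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
#         time.sleep(pause)  # spread reads out over time
#
# stats = aggregate_tags(scan_all(table))
```

Note that this rereads the whole table on every refresh, which is exactly why the streams-based aggregate table above is the cheaper option for a frequently called API.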

Both Redshift and Hive already know how to do this. Redshift relies on multiple worker nodes when it executes a query, while Hive is built on top of MapReduce. Also, both Redshift and Hive can use a predefined percentage of your RCU throughput.