What is the partition key in AWS Kinesis all about?

Suhail Gupta · Jan 23, 2018 · Viewed 21k times

I was reading about AWS Kinesis. In the following program, I write data into the stream named TestStream. I ran this piece of code 10 times, inserting 10 records into the stream.

var AWS = require('aws-sdk');       // AWS SDK for JavaScript (v2)
var kinesis = new AWS.Kinesis();    // uses the default region and credentials

var params = {
    Data: 'More Sample data into the test stream ...',
    PartitionKey: 'TestKey_1',
    StreamName: 'TestStream'
};

kinesis.putRecord(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else     console.log(data);           // successful response
});

All the records were inserted successfully. What does the partition key really mean here? What is it doing in the background? I read its documentation but did not understand it.

Answer

kdgregory · Jan 23, 2018

Partition keys only matter when you have multiple shards in a stream (but they're always required). Kinesis computes the MD5 hash of the partition key to decide which shard to store the record on (if you describe the stream, you'll see the hash range as part of the shard description).
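You can check this mapping yourself. Here's a rough sketch of the routing arithmetic in Node.js, matched against the shard list that describeStream returns (the HashKeyRange bounds come back as decimal strings, hence the BigInt conversions); it mirrors what Kinesis does internally, not something you'd normally write:

var crypto = require('crypto');

// MD5 of the partition key, read as a 128-bit unsigned integer
function hashKeyFor(partitionKey) {
    return BigInt('0x' + crypto.createHash('md5').update(partitionKey).digest('hex'));
}

// Pick the shard (from describeStream's output) whose HashKeyRange
// contains that integer
function shardFor(partitionKey, shards) {
    var hashKey = hashKeyFor(partitionKey);
    return shards.find(function(shard) {
        return hashKey >= BigInt(shard.HashKeyRange.StartingHashKey) &&
               hashKey <= BigInt(shard.HashKeyRange.EndingHashKey);
    });
}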

So why does this matter?

Each shard can only accept 1,000 records and/or 1 MB per second (see the PutRecord doc). If you write to a single shard faster than this rate, you'll get a ProvisionedThroughputExceededException.
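If you do hit that exception, the usual response is to back off and retry. A minimal sketch, assuming the same SDK v2 client as in the question (the helper name, retry budget, and fixed delay are my own, for illustration):

// retry a single putRecord when the target shard is throttled
function putWithRetry(kinesis, params, attemptsLeft) {
    kinesis.putRecord(params, function(err, data) {
        if (err && err.code === 'ProvisionedThroughputExceededException' && attemptsLeft > 0) {
            // the shard was over its per-second limit: back off and try again
            setTimeout(function() {
                putWithRetry(kinesis, params, attemptsLeft - 1);
            }, 500);
        } else if (err) {
            console.log(err, err.stack);
        } else {
            console.log(data);
        }
    });
}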

With multiple shards, you scale this limit: 4 shards give you 4,000 records and/or 4 MB per second. Of course, there are caveats.

The biggest caveat is that you must use different partition keys. If all of your records use the same partition key, then you're still writing to a single shard, because they'll all have the same hash value. How you solve this depends on your application: if you're writing from multiple processes, it might be sufficient to use the process ID, the server's IP address, or the hostname. If you're writing from a single process, you can either use information that's in the record (for example, a unique record ID) or generate a random string.
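As a sketch of that last option, crypto.randomBytes gives you a cheap random partition key (note that this also gives up any ordering between related records, since they can land on different shards):

var crypto = require('crypto');

// a random 16-character key spreads records evenly across shards
var params = {
    Data: 'More Sample data into the test stream ...',
    PartitionKey: crypto.randomBytes(8).toString('hex'),
    StreamName: 'TestStream'
};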

The second caveat is that the partition key counts against the total write size and is stored in the stream. So while you could probably get good randomness by using some long textual component of the record, you'd be wasting space. Instead, if you have such a random textual component, you can calculate your own hash from it and then stringify that for the partition key.
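For example (the record and its field names are invented for illustration), hashing a long unique field keeps the partition key down to 32 characters:

var crypto = require('crypto');

// hypothetical record with a long, unique text field
var record = { orderId: 'order-2018-01-23-0000000001', body: 'payload' };

// a 32-character MD5 hex string stands in for the long field;
// it still counts against the record's write size, but far less
var partitionKey = crypto.createHash('md5').update(record.orderId).digest('hex');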

Lastly, if you're using PutRecords (which you should be, if you're writing a lot of data), individual records in the request may be rejected while others are accepted. This happens when those records go to a shard that's already at its write limit; you have to re-send them (after a delay).
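A minimal sketch of that resend loop, again using SDK v2 (the helper name and the fixed one-second delay are assumptions; real code would use exponential backoff and a retry cap):

function putBatch(kinesis, streamName, entries, callback) {
    kinesis.putRecords({ StreamName: streamName, Records: entries }, function(err, data) {
        if (err) return callback(err);
        if (data.FailedRecordCount === 0) return callback(null);
        // data.Records[i] corresponds to entries[i]; keep only the rejected ones
        var failed = entries.filter(function(entry, i) {
            return data.Records[i].ErrorCode; // e.g. ProvisionedThroughputExceededException
        });
        // wait, then re-send just the records that were rejected
        setTimeout(function() {
            putBatch(kinesis, streamName, failed, callback);
        }, 1000);
    });
}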