I have created a Cosmos DB collection with a partition key. Since it is a dev environment, I have reduced the throughput to 1,000 RU/s. Now I'm getting the error below.
Message:
"Errors":["Partition key reached maximum size of 10 GB"]
Azure Cosmos DB containers can be created as fixed or unlimited. Fixed-size containers have a maximum limit of 10 GB and 10,000 RU/s throughput. To create a container as unlimited, you must specify a minimum throughput of 2,500 RU/s.
Now I have increased the throughput to 2,500 RU/s, but I'm still getting the same error.
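For reference, this is roughly how I'm changing the throughput (a minimal sketch using the azure-cosmos Python SDK; the endpoint, key, and database/container names are placeholders):

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint, key, and names for illustration only.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("logs-db").get_container_client("app-logs")

# Raise the provisioned throughput on the container back to 2,500 RU/s.
container.replace_throughput(2500)
print(container.get_throughput().offer_throughput)
```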
UPDATE - 11 May, 2020
Microsoft has recently increased the capacity of a logical partition from 10 GB to 20 GB. Please see this for more details: https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits
I emailed Aravind Krishna, who is an engineer on the Azure Cosmos DB team and asked for clarification on this point. This is a summary of his answer:
In Cosmos DB, there are physical and logical partitions. Within a Collection, all documents that share the same value for the partition key live within the same logical partition. One or more logical partitions occupy a physical partition. As developers, we have no control over physical partitioning; we only control which documents belong to a logical partition, via the partition key we choose.
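To make that concrete, here is a minimal sketch (using the azure-cosmos Python SDK; the account details, names, and the /logSource partition key path are assumptions, not anything from your setup): every document whose partition key value is "debug" lands in the same logical partition, and that logical partition is what the size limit applies to.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint, key, and names for illustration only.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.create_database_if_not_exists(id="logs-db")

# Documents are grouped into logical partitions by the value of /logSource.
container = database.create_container_if_not_exists(
    id="app-logs",
    partition_key=PartitionKey(path="/logSource"),
    offer_throughput=2500,
)

# Both of these documents share logSource == "debug", so they live in the
# same logical partition regardless of provisioned throughput.
container.upsert_item({"id": "1", "logSource": "debug", "message": "first entry"})
container.upsert_item({"id": "2", "logSource": "debug", "message": "second entry"})
```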
Regardless of whether a Collection is Fixed (10GB) or Unlimited, the 10GB limit applies to a logical partition. Period.
So Sarva, you will need to either rethink your partition key or implement rolling logs to ensure that data within your debug log partition doesn't exceed the 10GB partition limit.
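One way to implement the rolling-log idea is to fold a time bucket into the partition key value, so no single logical partition ever accumulates 10 GB of log data. This is only a sketch of the pattern; the month-based bucket, the /logPartition path, and the property names are assumptions you would adapt to your own schema.

```python
from datetime import datetime, timezone
import uuid

def make_log_item(source: str, message: str) -> dict:
    """Build a log document whose partition key value rolls over each month,
    so each month of debug logs lives in its own logical partition."""
    bucket = datetime.now(timezone.utc).strftime("%Y-%m")  # e.g. "2020-05"
    return {
        "id": str(uuid.uuid4()),
        "logPartition": f"{source}-{bucket}",  # partition key value, e.g. "debug-2020-05"
        "logSource": source,
        "message": message,
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }

# The container would be created with PartitionKey(path="/logPartition"), and
# each write then goes to the current month's logical partition:
# container.upsert_item(make_log_item("debug", "something happened"))
```

Older buckets can then be deleted (or archived) wholesale once they are no longer needed, which keeps any single logical partition well under the limit.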