We experience throttling (429) due to bursts of high traffic over short periods of time. To mitigate this, we currently increase the RUs in the Azure portal and decrease them later.
I want to scale up/down based on metrics, but the metrics do not expose the number of physical partitions created for the DocumentDB container.
I would not go down to the physical partition level at all, as the load probably does not distribute evenly across partitions anyway. I assume you don't care about average partition throughput but need to take care of the worst one.
So, if you need full auto-scale, I would concentrate on tracking throttling events (which occur after the fact) or monitoring total RU usage (keeping in mind that the total is split across partitions behind the scenes). Both paths can get really complex before you reach true auto-scale, and a combination of the two would probably be needed. While scaling up seems achievable, deciding when to come back down, and to what level, is trickier.
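For the throttling-events path, a minimal sketch of how the events could be captured with the DocumentDB .NET SDK is below. RecordThrottleEvent is a hypothetical hook; plug in whatever telemetry you actually use (Application Insights, a queue, a counter document, ...).

    using System;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    public static class ThrottleTracking
    {
        // Wraps a write so that 429 responses are recorded for the scaling logic.
        public static async Task CreateWithTrackingAsync(
            DocumentClient client, Uri collectionUri, object document)
        {
            try
            {
                ResourceResponse<Document> response =
                    await client.CreateDocumentAsync(collectionUri, document);
                // RequestCharge is the RU cost of this single call.
                Console.WriteLine($"Consumed {response.RequestCharge} RU");
            }
            catch (DocumentClientException ex) when (ex.StatusCode == (HttpStatusCode)429)
            {
                // 429 = throttled; RetryAfter is how long the service asks us to back off.
                RecordThrottleEvent(ex.RetryAfter);
                throw;
            }
        }

        // Hypothetical: persist/emit the event so the scaler can react to it.
        private static void RecordThrottleEvent(TimeSpan retryAfter) { /* ... */ }
    }

The same RequestCharge value is what you would aggregate if you go for the RU-usage-monitoring path instead.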
It is hard to expect the unexpected and reliably react to things before they happen. Definitely consider if it's worth it in your scenario compared to simpler solutions.
An even simpler solution would be to just set the RU limit from a prepared schedule (e.g. weekday + time of day) following your average peak-load trends.
This will not autoscale for unexpected peaks or fall-offs and would still require some monitoring to adjust to the unexpected, but you have that anyway, right? What such a simple solution gives you is a flexible throughput limit and a predictable cost for the average day, with minimal effort.
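As an illustration, such a schedule can be as small as a lookup from weekday and hour to a target RU value. The numbers below are made up; derive yours from your own load trends.

    using System;

    public static class ThroughputSchedule
    {
        // Returns the RU limit we want for the given point in time.
        public static int GetTargetRu(DateTime utcNow)
        {
            bool weekday = utcNow.DayOfWeek != DayOfWeek.Saturday
                        && utcNow.DayOfWeek != DayOfWeek.Sunday;
            bool businessHours = utcNow.Hour >= 7 && utcNow.Hour < 19;

            if (weekday && businessHours) return 10000; // expected peak window
            if (weekday) return 4000;                    // weekday off-hours
            return 1000;                                 // weekend baseline
        }
    }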
Once you know WHAT RU limit you want at any given time, executing the change is easy. The increasing/decreasing of the RU limit can be programmed and, for example, run through Azure Functions. A C# example for actually changing the limit would be along the lines of:
    // assumes the DocumentDB .NET SDK: client is a DocumentClient, collection is the target DocumentCollection
    Offer offer = client.CreateOfferQuery()
        .Where(r => r.ResourceLink == collection.SelfLink)
        .AsEnumerable()
        .Single();
    offer = new OfferV2(offer, newThroughput);
    await client.ReplaceOfferAsync(offer);
Your Azure Function could tick periodically and, depending on your configured schedule or gathered events, adjust the newThroughput accordingly.
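A sketch of such a function, assuming the timer-trigger model of Azure Functions and the ThroughputSchedule lookup from above; the CosmosEndpoint/CosmosKey settings, database and collection names, and the 15-minute cadence are all placeholders:

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class ThroughputScaler
    {
        private static readonly DocumentClient Client = new DocumentClient(
            new Uri(Environment.GetEnvironmentVariable("CosmosEndpoint")),
            Environment.GetEnvironmentVariable("CosmosKey"));

        [FunctionName("ThroughputScaler")]
        public static async Task Run([TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger log)
        {
            // Look up the collection whose offer (throughput) we want to change.
            ResourceResponse<DocumentCollection> collection = await Client.ReadDocumentCollectionAsync(
                UriFactory.CreateDocumentCollectionUri("mydatabase", "mycollection"));

            // Decide the target RU from the prepared schedule (or from gathered throttling events).
            int newThroughput = ThroughputSchedule.GetTargetRu(DateTime.UtcNow);

            Offer offer = Client.CreateOfferQuery()
                .Where(o => o.ResourceLink == collection.Resource.SelfLink)
                .AsEnumerable()
                .Single();

            await Client.ReplaceOfferAsync(new OfferV2(offer, newThroughput));
            log.LogInformation($"Throughput set to {newThroughput} RU/s");
        }
    }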
Whatever autoscale solution you implement, do think about setting reasonable hard limits for how high you are willing to go. Otherwise you could get unexpected bills from Azure in the case of mishaps or malicious activity (DDoS). At some point, accepting throttling is the better option.
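Such a hard ceiling (and floor) can just be a clamp applied to whatever value the scaler computed before it is written to the offer; the limits below are placeholders, keep yours in configuration:

    using System;

    public static class ThroughputGuard
    {
        private const int MinRu = 1000;   // never scale below this
        private const int MaxRu = 20000;  // never scale above this, whatever the scaler asks for

        public static int Clamp(int requested) =>
            Math.Max(MinRu, Math.Min(MaxRu, requested));
    }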