What kind of architecture is needed to store 100 TB of data and query it with aggregations? How many nodes? How much disk per node? What would best practice be?
About 240 GB will be written every day, but the total size will stay the same because the same amount of data will be deleted.
Or do you have any other thoughts on how to store this data and run fast group/aggregation queries over it?
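For context, here is a minimal sketch (Python/pymongo) of the kind of setup I am considering: a hashed shard key to spread the data and writes across shards, and a TTL index so old documents expire automatically and the total size stays roughly constant. The host, database/collection names, and the 30-day retention are placeholders I made up, not requirements.

```python
from pymongo import MongoClient, ASCENDING

# Connect through a mongos router of the sharded cluster (hostname is a placeholder).
client = MongoClient("mongodb://mongos-host:27017")

# Spread the collection across shards with a hashed shard key.
client.admin.command("enableSharding", "metrics")
client.admin.command(
    "shardCollection", "metrics.events",
    key={"_id": "hashed"},
)

# Let MongoDB expire old documents automatically instead of running bulk
# deletes, matching the "240 GB in / 240 GB out per day" pattern above.
events = client["metrics"]["events"]
events.create_index(
    [("created_at", ASCENDING)],
    expireAfterSeconds=60 * 60 * 24 * 30,  # keep ~30 days (assumption)
)
```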
Please also refer to this related question; quoting from its top answer:
The "production deployments" page on MongoDB's site may be of interest to you. Lots of presentations listed with infrastructure information. For example:
http://blog.wordnik.com/12-months-with-mongodb says they're storing 3 TB per node.
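If that ~3 TB-per-node figure is representative, a rough back-of-envelope for my case would look like this (the 3-member replica set size is my assumption):

```python
import math

# Rough sizing, assuming ~3 TB of data per node as in the Wordnik post above.
total_tb = 100
tb_per_node = 3
shards = math.ceil(total_tb / tb_per_node)   # ~34 shards
replica_members = 3                          # primary + 2 secondaries (assumption)
data_nodes = shards * replica_members        # ~102 data-bearing nodes
print(shards, data_nodes)
```

That would suggest tens of shards rather than a handful, before even counting config servers and mongos routers.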