Bulk insert performance in MongoDB for large collections

mich8bsp · Jun 9, 2015

I'm using BulkWriteOperation (Java driver) to store data in large chunks. At first it seems to work fine, but as the collection grows in size, the inserts start taking quite a long time.
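Roughly, the write path looks like this (a simplified sketch; the database, collection, and field names are placeholders, not my real schema):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.BulkWriteResult;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class BulkInsertExample {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydb");                          // placeholder name
        DBCollection collection = db.getCollection("events");  // placeholder name

        // Unordered bulk: the server is free to apply inserts in parallel
        // and does not stop at the first failure.
        BulkWriteOperation bulk = collection.initializeUnorderedBulkOperation();
        for (int i = 0; i < 1000; i++) {
            bulk.insert(new BasicDBObject("time", System.currentTimeMillis())
                    .append("value", i));
        }
        BulkWriteResult result = bulk.execute();
        System.out.println("Inserted: " + result.getInsertedCount());
        client.close();
    }
}
```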

Currently, for a collection of 20M documents, a bulk insert of 1,000 documents can take about 10 seconds.

Is there a way to make inserts independent of collection size? I don't have any updates or upserts; it's always new data I'm inserting.

Judging from the log, there doesn't seem to be any issue with locks. Each document has a time field which is indexed, but its values grow monotonically, so I don't see any need for Mongo to spend time reorganizing the index.

I'd love to hear some ideas for improving the performance.

Thanks

Answer

glytching · Jun 23, 2017

You believe that the indexing does not require any document reorganisation, and the way you described the index suggests that a right-handed (ever-increasing) index is fine. So indexing seems to be ruled out as an issue. You could of course, as suggested above, definitively rule it out by dropping the index and re-running your bulk writes; a sketch of that follows.
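With the legacy Java driver, that test could look roughly like this (a sketch only; `time_1` is the default name the server gives an ascending index on `time`, so adjust it if the index was created with a custom name):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;

public class IndexToggle {
    // Drop the index, run the (timed) bulk load, then recreate the index.
    // If inserts are fast without the index, index maintenance is the culprit.
    static void benchmarkWithoutIndex(DBCollection collection, Runnable bulkLoad) {
        collection.dropIndex("time_1");
        try {
            bulkLoad.run();
        } finally {
            collection.createIndex(new BasicDBObject("time", 1));
        }
    }
}
```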

Aside from indexing, I would …

  • Consider whether your disk can keep up with the volume of data you are persisting. There are more details on this in the MongoDB docs.
  • Use profiling to understand what's happening with your writes (see the sketch after this list).
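For the profiling suggestion, here is a rough sketch using the legacy Java driver (the database name and the threshold are assumptions; profiling level 1 records only operations slower than `slowms`):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;

public class ProfilingExample {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydb"); // placeholder database name

        // Enable profiling at level 1: capture operations slower than 100 ms.
        db.command(new BasicDBObject("profile", 1).append("slowms", 100));

        // ... run the bulk inserts here ...

        // Inspect the slowest captured operations, worst first.
        DBCursor cursor = db.getCollection("system.profile")
                .find()
                .sort(new BasicDBObject("millis", -1))
                .limit(5);
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }
        client.close();
    }
}
```

Each `system.profile` document includes a `millis` field with the operation's total time, which makes the slow inserts easy to spot.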