Write-through RAM disk, or massive caching of file system?

Will · Feb 17, 2010 · Viewed 37.8k times

I have a program that hits the file system very heavily, reading and writing a set of working files. The files are several gigabytes in total, but small enough to fit on a RAM disk. The machines this program runs on are typically Ubuntu Linux boxes.

Is there a way to configure the file system to use a very, very large cache, and even to cache writes so they hit the disk later?

Or is there a way to create a RAM disk that writes-through to real disk?

Answer

Thomas Pornin · Feb 17, 2010

By default, Linux will use free RAM (almost all of it) to cache disk accesses, and will delay writes. The heuristics the kernel uses to decide its caching strategy are not perfect, but beating them in a specific situation is not easy. Also, on journalling filesystems (i.e. all the default filesystems nowadays), actual writes to the disk are performed in a way that is resilient to crashes; this implies a bit of overhead. You may want to fiddle with filesystem options. E.g., for ext3, try mounting with data=writeback or even async (these options may improve filesystem performance, at the expense of reduced resilience against crashes). Also, use noatime to reduce filesystem activity.
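As a sketch, the mount options above would go in /etc/fstab, and the kernel's write-delay behavior can be tuned through the vm.dirty_* sysctls. The device, mount point, and values below are illustrative, not recommendations:

```shell
# /etc/fstab entry (illustrative device and mount point): ext3 with
# metadata-only journaling and no access-time updates
/dev/sda2  /scratch  ext3  defaults,data=writeback,noatime  0  2

# Let dirty pages accumulate longer in RAM before being flushed:
sysctl -w vm.dirty_ratio=60               # % of RAM that may be dirty
sysctl -w vm.dirty_background_ratio=30    # background flush threshold
sysctl -w vm.dirty_expire_centisecs=6000  # max age of dirty data: 60 s
```

Raising these limits trades crash safety for throughput: more of your working set stays in RAM, but more data is lost if the machine goes down before a flush.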

Programmatically, you might also want to perform disk accesses through memory mappings (with mmap). This is a bit more hands-on, but it gives you more control over data management and optimization.