I have a program which uses mmap() and shared memory to efficiently access a large database file. I would like to experiment with huge pages to see if it speeds things up.
I thought that a quick and easy way would be to copy the database file into a hugetlbfs mount and make a symlink to it in the old location.
However, this does not work because the cp command cannot write to the file. I suspect that files on hugetlbfs can only be populated by sizing them with the ftruncate() system call and then writing into them through an mmap() mapping. I will probably try writing a copy tool that does this, unless I get an answer describing an existing tool.
I'm looking for any other good methods to do shared memory maps using huge pages in Linux.
An old question now, but since no one has answered and I'm actually wanting to experiment with huge page support too (for different reasons), I'll provide an answer.
Although huge pages are now handled transparently by modern kernels (transparent huge pages), you can still get more explicit control.
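For the transparent route, a quick sketch: a program can hint that a region is a good candidate for huge pages with madvise(MADV_HUGEPAGE). The mapping size below is purely illustrative.

    /* Hedged sketch of the transparent-huge-page route: mark an anonymous
     * mapping as a candidate for THP. Requires a kernel built with
     * transparent huge page support. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64UL * 1024 * 1024;   /* 64 MB, illustrative */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* ask the kernel to consider this range for huge pages */
        if (madvise(p, len, MADV_HUGEPAGE) < 0)
            perror("madvise");             /* non-fatal if THP is unavailable */

        munmap(p, len);
        return 0;
    }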
For explicit control, these libhugetlbfs functions may be what you're looking for:
get_huge_pages(), free_huge_pages(), get_hugepage_region(), free_hugepage_region()
You'll need to install libhugetlbfs, which is a wrapper library around hugetlbfs.
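A minimal sketch of how those calls fit together, assuming the libhugetlbfs development headers are installed and you link with -lhugetlbfs (the sizes are illustrative):

    /* Hedged sketch using libhugetlbfs allocation helpers.
     * Build: gcc demo.c -lhugetlbfs */
    #include <stdio.h>
    #include <string.h>
    #include <hugetlbfs.h>

    int main(void)
    {
        long hpage = gethugepagesize();    /* default huge page size */
        if (hpage <= 0) {
            fprintf(stderr, "no huge pages configured\n");
            return 1;
        }

        /* get_huge_pages(): length must be a multiple of the huge page size */
        void *buf = get_huge_pages((size_t)hpage, GHP_DEFAULT);
        if (!buf) { perror("get_huge_pages"); return 1; }
        memset(buf, 0, (size_t)hpage);     /* touch it to fault pages in */
        free_huge_pages(buf);

        /* get_hugepage_region(): arbitrary length, may fall back to
         * normal pages depending on flags */
        void *region = get_hugepage_region(1000000, GHR_DEFAULT);
        if (!region) { perror("get_hugepage_region"); return 1; }
        free_hugepage_region(region);
        return 0;
    }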
Here's a Linux Weekly News (LWN) article that you may find helpful: Huge pages - part 1 (Introduction)