I have a program that saves many large files (>1 GB) using fwrite().
It works fine, but unfortunately, due to the nature of the data, each call to fwrite()
only writes 1–4 bytes, with the result that a write can take over an hour, with most of this time seemingly due to the syscall overhead (or at least in the library function of fwrite()). I have a similar problem with fread().

Does anyone know of any existing library functions that will buffer these writes and reads with inline functions, or is this another roll-your-own?
First of all, fwrite() is a library function, not a system call. Secondly, it already buffers the data.
You might want to experiment with increasing the size of that buffer. This is done using setvbuf(). On my system this only helps a tiny bit, but YMMV.
If setvbuf() does not help, you could do your own buffering and only call fwrite() once you've accumulated enough data. This involves more work, but will almost certainly speed up the writing, as your own buffering can be made much more lightweight than fwrite()'s.
edit: If anyone tells you that it's the sheer number of fwrite() calls that is the problem, demand to see evidence. Better still, do your own performance tests. On my computer, 500,000,000 two-byte writes using fwrite() take 11 seconds. This equates to a throughput of about 90 MB/s.
Last but not least, the huge discrepancy between the 11 seconds in my test and the hour mentioned in your question hints at the possibility that there's something else going on in your code that's causing the very poor performance.