Consider an application that is CPU bound, but also has high-performance I/O requirements.
I'm comparing Linux file I/O to Windows, and I can't see how epoll will help a Linux program at all. The kernel will tell me that the file descriptor is "ready for reading," but I still have to call blocking read() to get my data, and if I want to read megabytes, it's pretty clear that that will block.
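For contrast, the readiness model I'm describing looks like the sketch below (assuming a socket descriptor; notably, epoll_ctl() rejects plain regular files with EPERM, so disk files don't even get this far):

```c
// Readiness-based I/O with epoll: the kernel says "ready", but the
// actual data transfer still happens in a read() call you make yourself.
#include <sys/epoll.h>
#include <unistd.h>

void reactor_loop(int sock_fd) {            // assumes a socket; epoll_ctl()
    int ep = epoll_create1(0);              // fails with EPERM on regular files
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);

    char buf[65536];
    for (;;) {
        struct epoll_event out;
        epoll_wait(ep, &out, 1, -1);        // blocks until the fd is "ready"
        // This copy is synchronous: for megabytes of data, this is where
        // the wall-clock time goes, readiness notification or not.
        ssize_t n = read(out.data.fd, buf, sizeof buf);
        if (n <= 0) break;
        /* ... consume n bytes ... */
    }
    close(ep);
}
```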
On Windows, I can create a file handle with OVERLAPPED set, and then use non-blocking I/O, and get notified when the I/O completes, and use the data from that completion function. I need to spend no application-level wall-clock time waiting for data, which means I can precisely tune my number of threads to my number of cores, and get 100% efficient CPU utilization.
If I have to emulate asynchronous I/O on Linux, then I have to allocate some number of threads to do this, and those threads will spend a little bit of time doing CPU things, and a lot of time blocking for I/O, plus there will be overhead in the messaging to/from those threads. Thus, I will either over-subscribe or under-utilize my CPU cores.
I looked at mmap() + madvise(MADV_WILLNEED) as a "poor man's async I/O", but it still doesn't get all the way there, because I can't get a notification when the read-ahead is done -- I have to "guess", and if I guess "wrong" I will end up blocking on memory access, waiting for data to come from disk.
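That approach looks roughly like the following sketch (the path and lengths are illustrative):

```c
// "Poor man's async I/O": ask the kernel to start readahead, then hope
// the pages have arrived by the time you touch them.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

const char *map_and_prefetch(const char *path, size_t *len_out) {
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                          // the mapping keeps the file alive

    // Hint: start reading these pages in the background. There is no
    // completion notification -- touching a page that hasn't arrived
    // yet simply blocks in a page fault.
    madvise(p, st.st_size, MADV_WILLNEED);

    *len_out = st.st_size;
    return p;
}
```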
Linux seems to have the starts of async I/O in io_submit, and it seems to also have a user-space POSIX aio implementation, but it's been that way for a while, and I know of nobody who would vouch for these systems for critical, high-performance applications.
The Windows model works roughly like this:

1. Issue an asynchronous operation.
2. Tie the asynchronous operation to a particular I/O completion port.
3. Wait on operations to complete on that port.
4. When the I/O is complete, the thread waiting on the port unblocks and returns a reference to the pending I/O operation.
Steps 1/2 are typically done as a single thing. Steps 3/4 are typically done with a pool of worker threads, not (necessarily) the same thread that issues the I/O. This model is somewhat similar to the model provided by boost::asio, except boost::asio doesn't actually give you asynchronous block-based (disk) I/O.
The difference to epoll in Linux is that in step 4, no I/O has yet happened -- it hoists step 1 to come after step 4, which is "backwards" if you know exactly what you need already.
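To make the four steps concrete, here's a minimal sketch of the Windows flow (the file name "data.bin" and the buffer size are illustrative; error handling is trimmed):

```c
// Windows completion model: issue the read now, hear about it later.
#include <windows.h>
#include <stdio.h>

int main(void) {
    // Steps 1/2: open for overlapped I/O and tie the handle to a port.
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    HANDLE port = CreateIoCompletionPort(file, NULL, /*key*/ 1, 0);

    static char buf[1 << 20];
    OVERLAPPED ov = {0};                         // read from offset 0
    ReadFile(file, buf, sizeof buf, NULL, &ov);  // returns immediately;
                                                 // GetLastError()==ERROR_IO_PENDING

    /* ... CPU-bound work proceeds here while the disk works ... */

    // Steps 3/4: a (worker) thread blocks on the port; when it wakes,
    // the data is already sitting in buf.
    DWORD bytes; ULONG_PTR key; OVERLAPPED *done;
    GetQueuedCompletionStatus(port, &bytes, &key, &done, INFINITE);
    printf("read %lu bytes\n", (unsigned long) bytes);

    CloseHandle(port);
    CloseHandle(file);
    return 0;
}
```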
Having programmed a large number of embedded, desktop, and server operating systems, I can say that this model of asynchronous I/O is very natural for certain kinds of programs. It is also very high-throughput and low-overhead. I think this is one of the remaining real shortcomings of the Linux I/O model, at the API level.
(2020) If you're using a 5.1 or above kernel, you can use the io_uring interface for file-like I/O and get excellent asynchronous operation.

Compared to the existing libaio/KAIO interface, io_uring has the following advantages:

- It is easier to use (especially with the liburing helper library)
- It can work with socket-based I/O (recvmsg()/sendmsg() are supported from >=5.3, see messages mentioning the word "support" in io_uring.c's git history)
- It supports a growing set of asynchronous operations beyond read/write (e.g. fsync (>=5.1), fallocate (>=5.6), splice (>=5.7) and more)

Compared to glibc's POSIX AIO, io_uring has the following advantages:

- It is backed by the kernel rather than by a userspace thread pool, so asynchronous behaviour is retained even for buffered I/O
- glibc's POSIX AIO can't keep multiple I/Os in flight on the same file descriptor (requests to the same descriptor are serialized behind the scenes), whereas io_uring most certainly can!

The "Efficient IO with io_uring" document is periodically updated and goes into far more detail as to io_uring's benefits and usage. The "What's new with io_uring" document describes new features added to io_uring since its inception, while the "The rapid growth of io_uring" LWN article describes which features were available in each of the 5.1 to 5.5 kernels, with a forward glance at what was going to be in 5.6. There's also a "Faster IO through io_uring" videoed presentation (slides) from late 2019 by io_uring author Jens Axboe. Finally, the "Lord of the io_uring" guide gives an introductory tutorial on io_uring usage.
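For a feel of what this looks like in practice, here is a minimal single-read sketch using liburing ("data.bin" and the sizes are illustrative; error handling is trimmed):

```c
// Submit a read, do other work, then reap the completion -- the same
// issue/complete shape as the Windows model described in the question.
// Build with: gcc -o demo demo.c -luring   (needs liburing installed)
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

int main(void) {
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);          // 8-entry submission queue

    int fd = open("data.bin", O_RDONLY);       // illustrative input file
    char *buf = malloc(1 << 20);
    struct iovec iov = { .iov_base = buf, .iov_len = 1 << 20 };

    // Describe and submit the read (readv works on all >=5.1 kernels).
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov, 1, /*offset*/ 0);
    io_uring_submit(&ring);

    /* ... CPU-bound work proceeds here while the kernel does the I/O ... */

    // Reap the completion; res is the byte count (or -errno on failure).
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read returned %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    free(buf);
    return 0;
}
```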
Re "support partial I/O in the sense of recv()
vs read()
": a patch went into the 5.3 kernel that will automatically retry io_uring
short reads and a further commit went into the 5.4 kernel that tweaks the behaviour to only automatically take care of short reads when working with "regular" files on requests that haven't set the REQ_F_NOWAIT
flag (it looks like you can request REQ_F_NOWAIT
via IOCB_NOWAIT
or by opening the file with O_NONBLOCK
). Thus you can get recv()
style- "short" I/O behaviour from io_uring
too.
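If you do opt into recv()-style short reads (or target an older kernel without the automatic retry), handling them is the usual resubmit-the-remainder loop. A sketch, assuming a ring set up as in the earlier example:

```c
// Handle a short read by resubmitting the unread tail at the advanced
// offset. cqe->res < len can legitimately happen in "short" mode.
#include <liburing.h>
#include <sys/uio.h>

int read_fully(struct io_uring *ring, int fd, char *buf, unsigned len) {
    unsigned done = 0;
    while (done < len) {
        struct iovec iov = { .iov_base = buf + done, .iov_len = len - done };
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        io_uring_prep_readv(sqe, fd, &iov, 1, done);   // file offset = done
        io_uring_submit(ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(ring, &cqe);
        int res = cqe->res;              // bytes read, 0 at EOF, -errno on error
        io_uring_cqe_seen(ring, cqe);

        if (res <= 0) return res;        // error, or EOF before len bytes
        done += (unsigned) res;
    }
    return (int) done;
}
```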
Software publicly using io_uring

Though the interface is still new (its first incarnation arrived in May 2019), some open-source software is using io_uring "in the wild":
- fio (whose author is also Jens Axboe) has an io_uring engine, with benchmarks comparing the io_uring engine to the libaio engine on an Optane device.
- RocksDB gained an io_uring backend for MultiRead in Dec 2019 and it was part of its 6.7.3 release. Jens states io_uring helped to dramatically cut latency.
- libev gained an io_uring backend in Dec 2019, but the libev author is waiting for 5.6+ kernels before improving it further (the libev author's notes make it sound like all of the kernel issues/concerns will have been addressed by 5.7).
- QEMU gained an io_uring backend, which has been shown outperforming the threads and aio backends on one workload of random 16K blocks.
- Samba gained an io_uring VFS backend in Feb 2020 (and it was part of the Samba 4.12 release). In the "Linux io_uring VFS backend." Samba mailing list thread, Stefan Metzmacher (the commit author) says the io_uring module was able to push roughly 19% more throughput (compared to some unspecified backend) in a synthetic test. You can also read the "Async VFS Future" PDF presentation by Stefan for some of the motivation behind the changes.
- In the Rust world there are libraries trying to make io_uring more accessible to pure Rust. rio is one library talked about a bit, and the author says they achieved higher throughput compared to using sync calls wrapped in threads. The author gave a presentation about his database and library at FOSDEM 2020 which included a section extolling the virtues of io_uring.
- PostgreSQL developer Andres Freund has been driving io_uring improvements (e.g. the workaround to reduce filesystem inode contention). There is a presentation, "Asynchronous IO for PostgreSQL" (be aware the video is broken until the 5 minute mark) (PDF), motivating the need for PostgreSQL changes and demonstrating some experimental results. He has expressed hope of getting his optional io_uring support into PostgreSQL 14 and seems acutely aware of what does and doesn't work, even down to the kernel level.
- Several other projects have io_uring work in flight: one is already using io_uring (although they claim to be hitting a bottleneck with it), another has announced io_uring support, and a third gained io_uring support via a GSoC project and has an active branch aimed at finishing off that work.
- Seastar gained an io_uring backend, but due to lack of time its development is on hold (plus it looks like the person who was developing the io_uring integration now works for a different company on a different project). Former ScyllaDB developer Glauber Costa mentions early results using the Seastar io_uring backend showed faster performance (than the libaio backend) with a very specific workload, but in a different article he notes "[on] closer inspection, that made clear that this is because our implementation of linux-aio was not as good as it could be" and fixing their linux-aio implementation made the difference disappear.

On the distribution side: at least one distribution doesn't package the liburing helper library (but you can easily build it for yourself), while others do package liburing. Red Hat has been backporting io_uring to RHEL 8 (link is behind a paywall); the RHEL 8.3 beta release notes mention "Added support for io-uring [sic] related system calls" under new features.

Hopefully io_uring will usher in a better asynchronous file-like I/O story for Linux.
(To add a thin veneer of credibility to this answer, apparently at some point in the past Jens Axboe (Linux kernel block layer maintainer and inventor of io_uring) thought this answer might be worth upvoting :-)