Is there really no asynchronous block I/O on Linux?

Jon Watte · Nov 15, 2012

Consider an application that is CPU bound, but also has high-performance I/O requirements.

I'm comparing Linux file I/O to Windows, and I can't see how epoll will help a Linux program at all. The kernel will tell me that the file descriptor is "ready for reading," but I still have to call a blocking read() to get my data, and if I want to read megabytes, it's pretty clear the read will block.
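For reference, the readiness-based pattern being described looks roughly like the sketch below (my illustration, not code from the question). It assumes the descriptor is a pipe or socket; regular files can't even be registered with epoll (epoll_ctl() fails with EPERM), which underlines the point that readiness notification doesn't solve disk I/O:

```c
/* Readiness-based I/O sketch: epoll reports "ready for reading", but the
 * data is still moved by a separate read() call that can block.
 * Assumes stdin is a pipe or socket -- epoll_ctl() on a regular file
 * fails with EPERM, which is exactly the problem for disk I/O. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    int ep = epoll_create1(0);
    if (ep < 0) { perror("epoll_create1"); return 1; }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
        perror("epoll_ctl");               /* EPERM if stdin is a regular file */
        return 1;
    }

    char buf[1 << 16];
    for (;;) {
        struct epoll_event out;
        if (epoll_wait(ep, &out, 1, -1) <= 0)          /* readiness only */
            break;
        ssize_t got = read(out.data.fd, buf, sizeof buf);  /* the actual I/O */
        if (got <= 0)
            break;
        /* ... consume buf[0..got) ... */
    }
    close(ep);
    return 0;
}
```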

On Windows, I can create a file handle with OVERLAPPED set, and then use non-blocking I/O, and get notified when the I/O completes, and use the data from that completion function. I need to spend no application-level wall-clock time waiting for data, which means I can precisely tune my number of threads to my number of cores, and get 100% efficient CPU utilization.

If I have to emulate asynchronous I/O on Linux, then I have to allocate some number of threads to do this, and those threads will spend a little bit of time doing CPU things, and a lot of time blocking for I/O, plus there will be overhead in the messaging to/from those threads. Thus, I will either over-subscribe or under-utilize my CPU cores.

I looked at mmap() + madvise(MADV_WILLNEED) as a "poor man's async I/O", but it still doesn't get all the way there, because I can't get a notification when the read-ahead is done -- I have to "guess", and if I guess "wrong" I will end up blocking on the memory access, waiting for the data to come from disk.
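For concreteness, that "poor man's async I/O" pattern looks roughly like this (a sketch only; the file name is a placeholder and error handling is minimal):

```c
/* "Poor man's async I/O": hint the kernel to read the pages ahead, then
 * hope they are resident by the time they are touched.  Sketch only. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);            /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ask the kernel to start reading the pages in -- but there is no
     * notification when it has finished. */
    madvise(p, st.st_size, MADV_WILLNEED);

    /* ... do other CPU work here, guessing how long the read-ahead takes ... */

    /* If the guess was wrong, this access page-faults and blocks. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += (unsigned char)p[i];
    printf("checksum: %lu\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```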

Linux seems to have the beginnings of async I/O in io_submit, and it seems to also have a user-space POSIX aio implementation, but they have been that way for a while, and I know of nobody who would vouch for these systems for critical, high-performance applications.

The Windows model works roughly like this:

  1. Issue an asynchronous operation.
  2. Tie the asynchronous operation to a particular I/O completion port.
  3. Wait on operations to complete on that port.
  4. When the I/O is complete, the thread waiting on the port unblocks, and returns a reference to the pending I/O operation.

Steps 1/2 are typically done as a single call. Steps 3/4 are typically done with a pool of worker threads, not (necessarily) the same thread that issues the I/O. This model is somewhat similar to the model provided by boost::asio, except boost::asio doesn't actually give you asynchronous block-based (disk) I/O.
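As a rough illustration of steps 1-4 (my sketch, not from the question; the file name is a placeholder, and error handling, the worker-thread pool, and cancellation of pending I/O are all omitted):

```c
/* Windows sketch: one overlapped read tied to an I/O completion port. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the file for overlapped (asynchronous) I/O. */
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Step 2: tie the handle to a completion port. */
    HANDLE port = CreateIoCompletionPort(file, NULL, /* CompletionKey */ 1, 0);

    /* Step 1: issue the asynchronous read; it returns immediately with
       GetLastError() == ERROR_IO_PENDING while the I/O proceeds. */
    static char buf[1 << 20];
    OVERLAPPED ov = {0};                    /* read from offset 0 */
    ReadFile(file, buf, sizeof buf, NULL, &ov);

    /* Steps 3/4: a worker thread blocks on the port until the I/O completes. */
    DWORD bytes; ULONG_PTR key; OVERLAPPED *done;
    if (GetQueuedCompletionStatus(port, &bytes, &key, &done, INFINITE))
        printf("read %lu bytes\n", (unsigned long)bytes);

    CloseHandle(port);
    CloseHandle(file);
    return 0;
}
```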

The difference from epoll on Linux is that in step 4, no I/O has happened yet -- the actual read is deferred until after the notification, which effectively hoists step 1 to come after step 4. That's "backwards" if you already know exactly what you need.

Having programmed a large number of embedded, desktop, and server operating systems, I can say that this model of asynchronous I/O is very natural for certain kinds of programs. It is also very high-throughput and low-overhead. I think this is one of the remaining real shortcomings of the Linux I/O model, at the API level.

Answer

Anon · Aug 11, 2019

(2020) If you're using a 5.1 or above kernel you can use the io_uring interface for file-like I/O and get excellent asynchronous operation.
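To give a flavour of the interface, here is a minimal sketch using the liburing helper library (the file name, queue depth and buffer size are arbitrary placeholders): a read is submitted asynchronously and its completion is reaped later, with no blocking read() call in between.

```c
/* Minimal io_uring read using liburing (sketch; link with -luring). */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) {        /* queue depth 8 */
        perror("io_uring_queue_init"); return 1;
    }

    int fd = open("data.bin", O_RDONLY);                /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    static char buf[1 << 16];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);    /* async read at offset 0 */
    io_uring_submit(&ring);                             /* hand it to the kernel */

    /* ... do CPU work here while the read proceeds ... */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                     /* completion, not readiness */
    printf("read returned %d\n", cqe->res);             /* bytes read or -errno */
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```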

Compared to the existing libaio/KAIO interface, io_uring has the following advantages:

  • Retains asynchronous behaviour when doing buffered I/O (and not just when doing direct I/O)
  • Easier to use (especially when using the liburing helper library)
  • Can optionally work in a polled manner (but you'll need higher privileges to enable this mode)
  • Less bookkeeping space overhead per I/O
  • Lower CPU overhead due to fewer userspace/kernel syscall transitions (a big deal these days due to the impact of the Spectre/Meltdown mitigations)
  • File descriptors and buffers can be pre-registered to save mapping/unmapping time
  • Faster (can achieve higher aggregate throughput and lower per-I/O latency)
  • "Linked mode" that can be used to express dependencies between groups of I/Os (>=5.3 kernel)
  • Rapidly improving support for socket based I/O (recvmsg()/sendmsg() are supported from >=5.3, see messages mentioning the word support in io_uring.c's git history)
  • Supports attempted cancellation of queued I/O (>=5.5)
  • Growing support for performing asynchronous operations beyond read/write (e.g. fsync (>=5.1), fallocate (>=5.6), splice (>=5.7) and more)
  • Doesn't become blocking each time the stars aren't perfectly aligned

io_uring also compares favourably to glibc's POSIX AIO, which is implemented in user space with a pool of threads and thus suffers from exactly the thread-management overhead described in the question.

The "Efficient IO with io_uring" document is periodically updated and goes into far more detail as to io_uring's benefits and usage. The "What's new with io_uring" document describes new features added to io_uring since its inception, while The rapid growth of io_uring LWN article describes which features were available in each of the 5.1 - 5.5 kernels with a forward glance to what was going to be in 5.6. There's also a "Faster IO through io_uring" videoed presentation (slides) from late 2019 by io_uring author Jens Axboe. Finally, the Lord of the io_uring guide gives a introductory tutorial on io_uring usage.

Re "support partial I/O in the sense of recv() vs read()": a patch went into the 5.3 kernel that will automatically retry io_uring short reads and a further commit went into the 5.4 kernel that tweaks the behaviour to only automatically take care of short reads when working with "regular" files on requests that haven't set the REQ_F_NOWAIT flag (it looks like you can request REQ_F_NOWAIT via IOCB_NOWAIT or by opening the file with O_NONBLOCK). Thus you can get recv() style- "short" I/O behaviour from io_uring too.

Software using io_uring

Though the interface is still new (its first incarnation arrived in May 2019), some open-source software is using io_uring "in the wild":

Software investigating using io_uring

Linux distributions shipping a new enough kernel

  • Ubuntu 18.04's latest HWE (hardware enablement) kernel is 5.4. This distro doesn't pre-package the liburing helper library, but you can easily build it for yourself.
  • Ubuntu 20.04's initial kernel is 5.4. As above, the distro doesn't pre-package liburing.
  • Fedora 32's initial kernel is 5.6. It does have a packaged liburing.
  • It looks like someone has asked Red Hat to backport io_uring to RHEL 8 (link is behind a paywall). The RHEL 8.3 beta release notes mention "Added support for io-uring [sic] related system calls" under new features.

Hopefully io_uring will usher in a better asynchronous file-like I/O story for Linux.

(To add a thin veneer of credibility to this answer, apparently at some point in the past Jens Axboe (Linux kernel block layer maintainer and inventor of io_uring) thought this answer might be worth upvoting :-)