Thursday, August 6, 2020

Why QEMU should move from C to Rust

My KVM Forum 2018 presentation titled Security in QEMU: How Virtual Machines provide Isolation (pdf) (video) reviewed security bugs in QEMU and found that the most common causes were C programming bugs, including buffer overflows, use-after-free, uninitialized memory, and more. In this post I will argue for using Rust as a safer language that prevents these classes of bugs.

In 2018 the choice of a safer language was not clear. C++ offered safer abstractions but no effective way to prohibit unsafe language features. Go also offered safety but raised concerns about runtime costs. Rust looked promising but few people had deep experience with it. In 2018 I was not able to argue confidently for moving away from C in QEMU.

Now in 2020 the situation is clearer. C programming bugs are still the main cause of CVEs in QEMU. Rust has matured, its ecosystem is growing and healthy, and virtualization projects like crosvm, Firecracker, and cloud-hypervisor prove that Rust is an effective language for writing Virtual Machine Monitors (VMMs). In the QEMU community, Paolo Bonzini and Sergio Lopez's work on rust-vmm and vhost-user code inspired me to look more closely at moving away from C.

Do we need to change programming language?

Most security bugs in QEMU are C programming bugs. This is easy to verify by looking through the CVE listings. Although I have only reviewed CVEs it seems likely that non-security bugs are also mostly C programming bugs.

Eliminating C programming bugs does not necessarily require switching programming languages. Other approaches to reducing bug rates in software include:

  • Coding style rules that forbid unsafe language features.
  • Building safe abstractions and prohibiting unsafe language features or library APIs.
  • Static checkers that scan source code for bugs.
  • Dynamic sanitizers that run software with instrumentation to identify bugs.
  • Unit testing and fuzzing.

The problem is that the QEMU community has been doing these things for years, yet new bugs are still introduced. It is certainly possible to spend more energy on these efforts, but the evidence shows that bugs continue to slip through.

There are two issues with these approaches to reducing bugs. First, although these approaches help find existing bugs, eliminating classes of bugs so they cannot exist in the first place is a stronger approach. This is hard to do with C since the language is unsafe, placing the burden of safety on the programmer.

Second, much of the ability to write safe C code comes with experience. Custom conventions, APIs, tooling, and processes to reduce bugs are a hurdle for one-time contributors and newcomers. They make the codebase inaccessible unless we accept lower standards for some contributors. Code quality should depend as little on experience as possible, but C is notorious for being a programming language that requires a lot of practice before you can write production-quality code.

Why Rust?

Safe languages eliminate memory safety bugs (and other classes of bugs, like data races). Rust made this a priority in its design:

  • Use-after-free, double-free, and other lifetime bugs are prevented at compile time by the borrow checker, which verifies ownership of data.
  • Buffer overflows and other memory corruptions are prevented by compile-time and runtime bounds checking.
  • Pointer dereference bugs are prevented by the absence of NULL pointers and strict ownership rules.
  • Uninitialized memory is prevented because all variables and fields must be initialized before use.

Rust programs can still "panic" at runtime when safety cannot be proven at compile time but this does not result in undefined behavior as seen in C programs. The program simply aborts with a backtrace. Bugs that could have resulted in arbitrary code execution in C become at most denial-of-service bugs in Rust. This reduces the severity of bugs.
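
For example, an out-of-bounds access that is undefined behavior in C becomes a clean panic in Rust. This tiny program compiles but terminates safely at runtime:

fn main() {
    let v = vec![1, 2, 3];
    let i = 7; // out-of-bounds index

    // In C an equivalent access could silently corrupt memory. Rust's
    // bounds check fails and the program panics with a backtrace instead.
    println!("{}", v[i]);
}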

As a result of this language design, most of the C programming bugs that plague QEMU today are either caught by the compiler or turn into safe program termination. It is reasonable to expect CVEs to decrease in number and severity when switching to Rust.

At the same time Rust eliminates the need for many of the measures that the QEMU community added onto C because the Rust programming language and its compiler already enforce safety. This means newcomers and one-time contributors will not need QEMU-specific experience, can write production-quality code more easily, and can get their code merged more quickly. It also means reviewers will have to spend less time pointing out C programming bugs or asking for changes that comply with QEMU's way of doing things.

That said, Rust has a reputation for being a scary language due to the borrow checker. Most programmers have not thought about object lifetimes and ownership as systematically and explicitly as required by Rust. This raises the bar to learning the language, but I look at it this way: learning Rust is humanly possible, writing bug-free C code is not.

How can we change programming language?

When I checked in 2018, QEMU was 1.5 million lines of code. It has grown since then. Moving a large codebase to a new programming language is extremely difficult. If people want to convert QEMU to Rust that would be great, but I personally don't have the appetite to do it because I think the integration will be messy and produce a lot of duplication, and there is too much un(der)maintained code that is hard to convert.

The reason I am writing this post is because device emulation, the main security attack surface for VMMs, can be done in a separate program. That program can be written in any language and this is where Rust comes in. For vhost devices it is possible to write Rust device backends today and I hope this will become the default approach to writing new devices.

For non-vhost devices the vfio-user project is working on an interface for out-of-process device emulation. It will be possible to implement devices in Rust there too.

If you are implementing new device emulation code please consider doing it in Rust!

Conclusion

Most security bugs in QEMU today are C programming bugs. Switching to a safer programming language will significantly reduce security bugs in QEMU. Rust is now mature and proven enough to use as the language for device emulation code. Thanks to vhost-user and vfio-user, using Rust for device emulation does not require a big conversion of QEMU code; it can simply be done in a separate program. This way attack surfaces can be written in Rust to make them less susceptible to security bugs going forward.

Saturday, July 18, 2020

Rethinking event loop integration for libraries

APIs for operations that take a long time are often asynchronous so that applications can continue with other tasks while an operation is running. Asynchronous APIs initiate an operation and then return immediately. The application is notified when the operation completes through a callback or by monitoring a file descriptor for activity (for example, when data arrives on a TCP socket).

Asynchronous applications are usually built around an event loop that waits for the next event and invokes a function to handle the event. Since the details of event loops differ between applications, libraries need to be designed carefully to integrate well with a variety of event loops.

The current model

A popular library with asynchronous APIs is the libcurl file transfer library that is used for making HTTP requests. It has the following (slightly simplified) event loop integration API:

#define CURL_WAIT_POLLIN    0x0001   /* Ready to read? */
#define CURL_WAIT_POLLOUT   0x0004   /* Ready to write? */

int socket_callback(CURL *easy,      /* easy handle */
                    int fd,          /* socket */
                    int what,        /* describes the socket */
                    void *userp,     /* private callback pointer */
                    void *socketp);  /* private socket pointer */

libcurl invokes the application's socket_callback() to start or stop monitoring file descriptors. When the application's event loop detects file descriptor activity, the application invokes libcurl's curl_multi_socket_action() API to let the library process the file descriptor.

There are variations on this theme but generally libraries expose file descriptors and event flags (read/write/error) so the application can monitor file descriptors from its own event loop. The library then performs the read(2) or write(2) call when the file descriptor becomes ready.
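
For example, an epoll(7)-based application might implement the callback along these lines. This is only a sketch against the simplified API shown above: error handling and the EPOLL_CTL_MOD case for already-registered file descriptors are omitted.

#include <sys/epoll.h>
#include <curl/curl.h>

static int epoll_fd; /* created with epoll_create1(0) during startup */

int socket_callback(CURL *easy, int fd, int what, void *userp, void *socketp)
{
    struct epoll_event ev = {
        .events = ((what & CURL_WAIT_POLLIN) ? EPOLLIN : 0) |
                  ((what & CURL_WAIT_POLLOUT) ? EPOLLOUT : 0),
        .data.fd = fd,
    };

    if (ev.events) {
        epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev); /* start monitoring */
    } else {
        epoll_ctl(epoll_fd, EPOLL_CTL_DEL, fd, NULL); /* stop monitoring */
    }
    return 0;
}

When epoll_wait(2) later reports activity on the file descriptor, the application calls curl_multi_socket_action() so that libcurl can perform the actual I/O.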

How io_uring changes the picture

The Linux io_uring API (pdf) can be used to implement traditional event loops that monitor file descriptors. But it also supports asynchronous system calls like read(2) and write(2) (best used when IORING_FEAT_FAST_POLL is available). The latter is interesting because it combines two syscalls into a single efficient syscall:

  1. Waiting for file descriptor activity.
  2. Reading/writing the file descriptor.

Existing applications use syscalls like epoll_wait(2), poll(2), or the old select(2) to wait for file descriptor activity. They can also use io_uring's IORING_OP_POLL_ADD to achieve the same effect.

After the file descriptor becomes ready, a second syscall like read(2) or write(2) is required to actually perform I/O.

io_uring's asynchronous IORING_OP_READ or IORING_OP_WRITE (including variants for vectored I/O or sockets) only requires a single io_uring_enter(2) call. If io_uring sqpoll is enabled then a syscall may not even be required to submit these operations!

To summarize, it's more efficient to perform a single asynchronous read/write instead of first monitoring file descriptor activity and then performing a read(2) or write(2).
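
Here is a minimal liburing sketch of a single asynchronous read. The function name is made up and error handling is omitted; a real application would submit many operations and reap completions from its event loop instead of waiting synchronously.

#include <liburing.h>

ssize_t read_async(int fd, void *buf, size_t len)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    ssize_t ret;

    io_uring_queue_init(8, &ring, 0);

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, len, 0); /* IORING_OP_READ */
    io_uring_submit(&ring);                   /* one io_uring_enter(2) syscall */

    io_uring_wait_cqe(&ring, &cqe);           /* completion, no separate read(2) */
    ret = cqe->res;                           /* bytes read or negative errno */
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return ret;
}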

A new model

Existing library APIs do not fit the asynchronous read/write approach because they expect the application to wait for file descriptor activity and then for the library to invoke a syscall to perform I/O. A new model is needed where the library tells the application about I/O instead of asking the application to monitor file descriptors for activity.

The library can use a new callback API that lets the application perform asynchronous I/O:

/*
 * The application invokes this callback when an aio operation has completed.
 *
 * @cb_arg: the cb_arg passed to a struct aio_operations function by the library
 * @ret: the return value of the aio operation (negative errno for failure)
 */
typedef void aio_completion_fn(void *cb_arg, ssize_t ret);

/*
 * Asynchronous I/O operation callbacks provided to the library by the
 * application.
 *
 * These functions initiate an I/O operation and then return immediately. When
 * the operation completes the @cb callback is invoked with @cb_arg. Note that
 * @cb may be invoked before the function returns (typically in the case of an
 * early error).
 */
struct aio_operations {
    void (*read)(int fd, void *data, size_t len, aio_completion_fn *cb,
                 void *cb_arg);
    void (*write)(int fd, void *data, size_t len, aio_completion_fn *cb,
                  void *cb_arg);
    ...
};

The concept of monitoring file descriptor activity is gone. Instead the API focuses on asynchronous I/O operations that the application can implement however it sees fit.

Applications using io_uring can use IORING_OP_READ and IORING_OP_WRITE to implement asynchronous operations efficiently. Traditional applications can still use their event loops but now also perform the read(2), write(2), etc syscalls on behalf of the library.
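
A sketch of what the io_uring case could look like on the application side. The struct pending type, the global ring, and the function names are assumptions made for this example; queue-full handling and errors are omitted.

#include <liburing.h>
#include <stdlib.h>

static struct io_uring ring; /* initialized with io_uring_queue_init() at startup */

struct pending {
    aio_completion_fn *cb;
    void *cb_arg;
};

static void app_read(int fd, void *data, size_t len, aio_completion_fn *cb,
                     void *cb_arg)
{
    struct pending *p = malloc(sizeof(*p));
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring); /* assumed non-NULL */

    p->cb = cb;
    p->cb_arg = cb_arg;
    io_uring_prep_read(sqe, fd, data, len, 0);
    io_uring_sqe_set_data(sqe, p); /* recovered at completion time */
    io_uring_submit(&ring);
}

/* Called from the application's event loop when completions are available */
static void app_process_completions(void)
{
    struct io_uring_cqe *cqe;

    while (io_uring_peek_cqe(&ring, &cqe) == 0) {
        struct pending *p = io_uring_cqe_get_data(cqe);

        p->cb(p->cb_arg, cqe->res); /* negative errno on failure */
        free(p);
        io_uring_cqe_seen(&ring, cqe);
    }
}

The application fills in struct aio_operations with functions like app_read() before handing the callbacks to the library.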

Some libraries don't need a full set of struct aio_operations callbacks because they only perform I/O in limited ways. For example, a library that only has a Linux eventfd can instead present this simplified API:

/*
 * Return an eventfd(2) file descriptor that the application must read from,
 * calling lib_eventfd_fired() when a non-zero value is read.
 */
int lib_get_eventfd(struct libobject *obj);

/*
 * The application must call this function after reading a non-zero value
 * from the eventfd returned by lib_get_eventfd().
 */
void lib_eventfd_fired(struct libobject *obj);

Although this simplified API is similar to traditional event loop integration APIs it is now the application's responsibility to perform the eventfd read(2), not the library's. This way applications using io_uring can implement the read efficiently.
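
As a sketch, a traditional poll(2)-based application could handle the simplified API like this (an io_uring application would submit IORING_OP_READ on the same file descriptor instead):

#include <poll.h>
#include <unistd.h>
#include <stdint.h>

/* struct libobject and the lib_*() functions come from the library API above */
void wait_and_dispatch(struct libobject *obj)
{
    struct pollfd pfd = {
        .fd = lib_get_eventfd(obj),
        .events = POLLIN,
    };
    uint64_t val;

    poll(&pfd, 1, -1); /* wait for the eventfd to become readable */

    /* The application, not the library, performs the read(2)... */
    if (read(pfd.fd, &val, sizeof(val)) == sizeof(val) && val > 0) {
        lib_eventfd_fired(obj); /* ...and then notifies the library */
    }
}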

Does an extra syscall matter?

Whether it is worth eliminating the extra syscall depends on one's performance requirements. When I/O is relatively infrequent, the overhead of the additional syscall may not matter.

While working on QEMU I found that the extra read(2) on eventfds causes a measurable overhead.

Conclusion

Splitting file descriptor monitoring from I/O is suboptimal for Linux io_uring applications. Unfortunately, existing library APIs are often designed in this way. Letting the application perform asynchronous I/O on behalf of the library allows a more efficient implementation with io_uring while still supporting applications that use older event loops.

Thursday, July 2, 2020

Avoiding bitrot in C macros

A common approach to debug messages that can be toggled at compile-time in C programs is:

#ifdef ENABLE_DEBUG
#define DPRINTF(fmt, ...) do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...)
#endif

Usually the ENABLE_DEBUG macro is not defined in normal builds, so the C preprocessor expands the debug printfs to nothing. No messages are printed at runtime and the program's binary size is smaller since no instructions are generated for the debug printfs.

This approach has the disadvantage that it suffers from bitrot, the tendency for source code to break over time when it is not actively built and used. Consider what happens when one of the variables used in the debug printf is not updated after being renamed:

- int r;
+ int radius;
  ...
  DPRINTF("radius %d\n", r);

The code continues to compile after r is renamed to radius because the DPRINTF() macro expands to nothing. The compiler does not syntax check the debug printf and misses that the outdated variable name r is still in use. When someone defines ENABLE_DEBUG months or years later, the compiler error becomes apparent and that person is confronted with fixing a new bug on top of whatever they were trying to debug when they enabled the debug printf!

It's actually easy to avoid this problem by writing the macro differently:

#ifndef ENABLE_DEBUG
#define ENABLE_DEBUG 0
#endif
#define DPRINTF(fmt, ...) do { \
        if (ENABLE_DEBUG) { \
            fprintf(stderr, fmt, ## __VA_ARGS__); \
        } \
    } while (0)

When ENABLE_DEBUG is not defined the macro expands to:

do {
    if (0) {
        fprintf(stderr, fmt, ...);
    }
} while (0)

What is the difference? This time the compiler parses and syntax checks the debug printf even when it is disabled. Luckily compilers are smart enough to eliminate dead code, code that cannot be executed, so the binary size remains small.

This applies not just to debug printfs. More generally, all preprocessor conditionals suffer from bitrot. If an #if ... #else ... #endif can be replaced with equivalent unconditional code then it's often worth doing.
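
For example, the same trick turns a feature conditional into ordinary code that is always compiled. HAVE_FANCY_API and the function names here are hypothetical; note that both branches must at least compile in every configuration:

#ifndef HAVE_FANCY_API
#define HAVE_FANCY_API 0
#endif

void setup(void)
{
    if (HAVE_FANCY_API) {
        fancy_setup();  /* syntax checked even when HAVE_FANCY_API is 0 */
    } else {
        legacy_setup(); /* the dead branch is eliminated by the compiler */
    }
}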

Friday, May 22, 2020

How to check VIRTIO feature bits inside Linux guests

VIRTIO devices have feature bits that indicate the presence of optional features. The feature bit space is divided into core VIRTIO features (e.g. notify on empty), transport-specific features (PCI, MMIO, CCW), and device-specific features (e.g. virtio-net checksum offloading). This article shows how to check whether a feature is enabled inside Linux guests.

The feature bits are used during VIRTIO device initialization to negotiate features between the device and the driver. The device reports a fixed set of features, typically all the features its implementors chose to offer from the VIRTIO specification version they developed against. The driver likewise reports the features it supports.

Feature bit negotiation determines the subset of features supported by both the device and the driver. A new driver might not be able to enable all the features it supports if the device is too old. The same is true vice versa. This offers compatibility between devices and drivers. It also means that you don't know which features are enabled until the device and driver have negotiated them at runtime.

Where to find feature bit definitions

VIRTIO feature bits are listed in the VIRTIO specification. You can also grep the linux/virtio_*.h header files:

$ grep VIRTIO.*_F_ /usr/include/linux/virtio_*.h
virtio_ring.h:#define VIRTIO_RING_F_INDIRECT_DESC 28
virtio_ring.h:#define VIRTIO_RING_F_EVENT_IDX  29
virtio_scsi.h:#define VIRTIO_SCSI_F_INOUT                    0
virtio_scsi.h:#define VIRTIO_SCSI_F_HOTPLUG                  1
virtio_scsi.h:#define VIRTIO_SCSI_F_CHANGE                   2
...

Here the VIRTIO_SCSI_F_INOUT (0) constant is for the 1st bit (1ull << 0). Bit numbering can be confusing because different standards, vendors, and languages express it differently; it helps to think of a bit shift operation like 1 << BIT.
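
In C a feature bit in a 64-bit feature mask can be tested like this (the helper is an illustration, not a kernel API; Linux guest drivers use virtio_has_feature() for the same purpose):

#include <stdbool.h>
#include <stdint.h>

static bool has_feature(uint64_t features, unsigned int bit)
{
    return features & (1ull << bit);
}

/* has_feature(features, 0) tests VIRTIO_SCSI_F_INOUT, the 1st bit */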

How to check feature bits inside the guest

The Linux virtio.ko driver that is used for all VIRTIO devices has a sysfs file called features. This file contains the feature bits in binary representation starting with the 1st bit on the left and more significant bits to the right. The reported bits are the subset that both the device and the driver support.

To check if the virtio-blk device /dev/vda has the VIRTIO_RING_F_EVENT_IDX (29) bit set:

$ cat /sys/block/vda/device/driver/virtio*/features
01100010011101100000000000100010100
$ python -c "print('$(</sys/block/vda/device/driver/virtio*/features)'[29])"
0

Here the character at index 29 is 0, so VIRTIO_RING_F_EVENT_IDX was not negotiated for this device; a 1 means the feature is enabled.

Other device types can be found through similar sysfs paths.

Thursday, April 30, 2020

How the Linux VFS, block layer, and device drivers fit together

The Linux kernel storage stack consists of several components including the Virtual File System (VFS) layer, the block layer, and device drivers. This article gives an overview of the main objects that a device driver interacts with and their relationships to each other. Actual I/O requests are not covered, instead the focus is on the objects representing the disk.

Let's start with a diagram of the key data structures and then an explanation of how they work together.

The Virtual File System (VFS) layer

The VFS layer is where file system concepts like files and directories are handled. The VFS provides an interface that file systems like ext4, XFS, and NFS implement to register themselves with the kernel and participate in file system operations. The struct file_operations interface is the most interesting for device drivers as we are about to see.

System calls like open(2), read(2), etc are handled by the VFS and dispatched to the appropriate struct file_operations functions.

Block device nodes like /dev/sda are implemented in fs/block_dev.c, which forms a bridge between the VFS and the Linux block layer. The block layer handles the actual I/O requests and is aware of disk-specific information like capacity and block size.

The main VFS concept that device drivers need to be aware of is struct block_device_operations and the struct block_device instances that represent block devices in Linux. A struct block_device connects the VFS inode and struct file_operations interface with the block layer struct gendisk and struct request_queue.

In Linux there are separate device nodes for the whole device (/dev/sda) and its partitions (/dev/sda1, /dev/sda2, etc). This is handled by struct block_device so that a partition has a pointer to its parent in bd_contains.

The block layer

The block layer handles I/O request queues, disk partitions, and other disk-specific functionality. Each disk is represented by a struct gendisk and may have multiple struct hd_struct partitions. There is always part0, a special "partition" covering the entire block device.

I/O requests are placed into queues for processing. Requests can be merged and scheduled by the block layer. Ultimately a device driver receives a request for submission to the physical device. Queues are represented by struct request_queue.

The device driver

The disk device driver registers a struct gendisk with the block layer and sets up the struct request_queue to receive requests that need to be submitted to the physical device.

There is one struct gendisk for the entire device even though userspace may open struct block_device instances for multiple partitions on the disk. Disk partitions are not visible at the driver level because I/O requests have already had their Logical Block Address (LBA) adjusted by the partition start offset.
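
A sketch of how a driver ties these objects together at probe time. The "mydrv" names and struct mydrv are hypothetical and error handling is omitted (APIs as of Linux ~5.x):

#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/genhd.h>

struct mydrv {
    int major;                   /* obtained with register_blkdev() */
    sector_t nr_sectors;
    struct request_queue *queue; /* set up beforehand, e.g. via blk-mq */
};

static const struct block_device_operations mydrv_fops = {
    .owner = THIS_MODULE,
    /* .open/.release/etc are reached through struct block_device */
};

static void mydrv_add_disk(struct mydrv *drv)
{
    struct gendisk *disk = alloc_disk(16); /* whole device + 15 partitions */

    disk->major = drv->major;
    disk->first_minor = 0;
    disk->fops = &mydrv_fops;   /* invoked by the VFS via fs/block_dev.c */
    disk->queue = drv->queue;   /* where I/O requests arrive */
    disk->private_data = drv;
    snprintf(disk->disk_name, sizeof(disk->disk_name), "mydrv0");
    set_capacity(disk, drv->nr_sectors);
    add_disk(disk);             /* hand the disk to the block layer */
}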

How it all fits together

The VFS is aware of the block layer struct gendisk. The device driver is aware of both the block layer and the VFS struct block_device. The block layer does not have direct connections to the other components but the device driver provides callbacks.

One of the interesting aspects is that a device driver may drop its reference to struct gendisk but struct block_device instances may still have their references. In this case no I/O can occur anymore because the driver has stopped the disk and the struct request_queue, but userspace processes can still call into the VFS and struct block_device_operations callbacks in the device driver can still be invoked.

Thinking about this case is why I drew the diagram and ended up writing about this topic!

Monday, April 20, 2020

virtio-fs has landed in QEMU 5.0!

The virtio-fs shared host<->guest file system has landed in QEMU 5.0! It consists of two parts: the QEMU -device vhost-user-fs-pci and the actual file server called virtiofsd. Guests need to have a virtio-fs driver in order to access shared file systems. In Linux the driver is called virtiofs.ko and has been upstream since Linux v5.4.

Using virtio-fs

Thanks to libvirt virtio-fs support, it's possible to share directory trees from the host with the guest like this:

<filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <binary xattr='on'>
       <lock posix='on' flock='on'/>
    </binary>
    <source dir='/path/on/host'/>
    <target dir='mount_tag'/>
</filesystem>

The host /path/on/host directory tree can be mounted inside the guest like this:

# mount -t virtiofs mount_tag /mnt

Applications inside the guest can then access the files as if they were local files. For more information about virtio-fs, see the project website.

How it works

For the most part, -device vhost-user-fs-pci just facilitates the connection to virtiofsd where the real work happens. When guests submit file system requests they are handled directly by the virtiofsd process on the host and don't need to go via the QEMU process.

virtiofsd is a FUSE file system daemon with virtio-fs extensions. virtio-fs is built on top of the FUSE protocol and therefore supports the POSIX file system semantics that applications expect from a native Linux file system. The Linux guest driver shares a lot of code with the traditional fuse.ko kernel module.

Resources on virtio-fs

I have given a few presentations on virtio-fs.

Future features

A key feature of virtio-fs is the ability to directly access the host page cache, eliminating the need to copy file contents into guest RAM. This so-called DAX support is not upstream yet.

Live migration is not yet implemented. It is a little challenging to transfer all file system state to the destination host and seamlessly continue file system operation without remounting, but it should be doable.

There is a Rust implementation of virtiofsd that is close to reaching maturity and will replace the C implementation. The advantage is that Rust has better memory and thread safety than C so entire classes of bugs can be eliminated. Also, the codebase is written from scratch whereas the C implementation was a combination of several existing pieces of software that were not designed together.

Saturday, February 15, 2020

An introduction to GDB scripting in Python

Sometimes it's not humanly possible to inspect or modify data structures manually in a debugger because they are too large or complex to navigate. Think of a linked list with hundreds of elements, one of which you need to locate. Finding the needle in the haystack is only possible by scripting the debugger to automate repetitive steps.

This article gives an overview of the GNU Debugger's Python scripting support so that you can tackle debugging tasks that are not possible manually.

What scripting GDB in Python can do

GDB can load Python scripts to automate debugging tasks and to extend debugger functionality. I will focus mostly on automating debugging tasks; extending the debugger is also very powerful, though more rarely used.

Say you want to search a linked list for a particular node:

(gdb) p node.next
...
(gdb) p node.next.next
...
(gdb) p node.next.next.next

Doing this manually can be impossible for lists with too many elements. GDB scripting support allows this task to be automated by writing a script that executes debugger commands and interprets the results.

Loading Python scripts

The source GDB command executes files ending with the .py extension in a Python interpreter. The interpreter has access to the gdb Python module that exposes debugging APIs so your script can control GDB.

$ cat my-script.py
print('Hi from Python, this is GDB {}'.format(gdb.VERSION))
$ gdb
(gdb) source my-script.py
Hi from Python, this is GDB Fedora 8.3.50.20190824-28.fc31

Notice that the gdb module is already imported. See the GDB Python API documentation for full details of this module.

It's also possible to run ad-hoc Python commands from the GDB prompt:

(gdb) py print('Hi')
Hi

Executing commands

GDB commands are executed using gdb.execute(command, from_tty, to_string). For example, gdb.execute('step') runs the step command. Output can be collected as a Python string by setting to_string to True. By default output goes to the interactive GDB session.

Although gdb.execute() is fundamental to GDB scripting, at best it allows screen-scraping (interpreting the output string) rather than a Pythonic way of controlling GDB. There is actually a full Python API that represents the debugged program's types and values in Python. Most scripts will use this API instead of simply executing GDB commands as if simulating an interactive shell session.
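
For example, a script can capture a command's output and inspect it (the function name being searched for is made up):

# Capture the backtrace as a string instead of printing it to the console
backtrace = gdb.execute('bt', from_tty=False, to_string=True)
if 'my_function' in backtrace:
    print('Stopped inside my_function')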

Navigating program variables

The entry point to navigating program variables is gdb.parse_and_eval(expression). It returns a gdb.Value.

When a gdb.Value is a struct its fields can be indexed using value['field1']['child_field1'] syntax. The following example iterates a linked list:

elem = gdb.parse_and_eval('block_backends.tqh_first')
while elem:
    name = elem['name'].string()
    if name == 'drive2':
        print('Found {}'.format(elem['dev']))
        break
    elem = elem['link']['tqe_next']

This script iterates the block_backends linked list and checks the name field of each element against "drive2". When it finds "drive2" it prints the dev field of that element.

There is a lot more that GDB Python scripts can do but you'll have to check out the API documentation to learn about it.

Conclusion

Python scripts can automate tedious debugging tasks in GDB. Having the full power of Python and access to file I/O, HTTP requests, etc means pretty much any debugging task can be turned into a full-blown program. A subset of this was possible in the past through GDB command scripts, but Python is a much more flexible programming language and familiar to many developers (more so than GDB's own looping and logic commands!).