Thursday, September 16, 2021

KVM Forum 2021 Highlights

Here are highlights from this year's KVM Forum conference, the yearly event around QEMU, KVM, and related communities like VIRTIO and rust-vmm.

Video recordings will be posted soon. In the meantime, here are short summaries of what I learnt. You can find slides for many of these talks in the links below.

vDPA

vDPA is a Linux driver framework for developing hybrid hardware/software VIRTIO devices.

In Hyperscale vDPA, Jason Wang covered ways to create fine-grained virtual devices for virtual machines and containers using vDPA. This means offering a way to slice up a physical device into many virtual devices with some aspects of the virtual device handled in hardware and others in software. This is the direction that networking and accelerator devices are heading in and is actively being discussed in the VIRTIO community. Many different approaches are possible and Jason's talk enumerates some of them:

  • An interface for management commands (a virtio-pci capability, a management virtqueue)
  • DMA isolation (PCIe PASID, a platform-independent device MMU)
  • More than 2048 MSI-X interrupts (virtio-pci capability for VIRTIO-specific MSI-X tables)

Another new vDPA project was presented by Yongji Xie. VDUSE - vDPA Device in Userspace showed how vDPA devices can be implemented in userspace. Although this is roughly the same use case as vhost-user, it has the unique advantage of allowing containers and bare metal to attach devices. An untrusted userspace process emulates the device and the host kernel can either present a vhost device to virtual machines or attach to the userspace device. This is a neat way to develop software devices that can also benefit container workloads.

Stefano Garzarella covered the new unified virtio-blk storage stack in vdpa-blk: Unified Hardware and Software Offload for virtio-blk. The goal is to support hardware virtio-blk devices, an optimized host kernel software device, and still offer QEMU block layer features like qcow2 images. This allows the fast path to go directly to hardware or an optimized in-kernel device while software storage features can still be used when desired via a slow path.

vfio-user

VFIO User - Using VFIO as the IPC Protocol in Multi-process QEMU focussed on the new out-of-process device interface that John Johnson, Jagannathan Raman, and Elena Ufimtseva have been working on together with others. This new protocol allows PCI (and perhaps other busses in the future) devices to be implemented as separate processes. QEMU communicates with the device over a UNIX domain socket. The approach is similar to vhost-user except the protocol messages are based on the Linux VFIO ioctl interface instead of the vhost ioctls.

While vhost-user has been in use for a number of years for VIRTIO-based devices, vfio-user now makes it possible to implement non-VIRTIO devices as separate processes. There were several other talks about vfio-user at KVM Forum 2021 that you can also check out.

virtiofs

In Towards High-availability for Virtio-fs, Jiachen Zheng and Yongji Xie explained how they extended virtiofs to handle crash recovery and live updates. These features are challenging for any program with a lot of state because care must be taken to maintain a consistent snapshot to resume from in the case of a restart. They tackled this by storing Linux file handles and a journal in a shm file. This required changes to QEMU's virtiofsd data structures to make them suitable for storing in shm, as well as a journal that provides idempotency for operations like mkdir, which would otherwise fail if replayed.

Virtual IOMMUs

Suravee Suthikulpanit and Wei Huang gave a talk titled Analysis of AMD HW-assisted vIOMMU Implementation and Performance. AMD is working on a hardware implementation of a virtual IOMMU that allows guests to specify DMA permissions for guest memory. This functionality is important for VFIO device assignment within guests, for example. Although it can be done in software via emulation of real IOMMUs or the virtio-iommu device that was designed specifically for virtual machines, implementing the vIOMMU in real hardware has performance advantages. One interesting feature of the hardware-assisted vIOMMU is that it natively supports encrypted memory for AMD SEV-SNP guests, something that is slow and clumsy to do in software.

Friday, July 2, 2021

Slides available for "Bring Your Own Virtual Devices: Frameworks for Software and Hardware Device Virtualization"

The PDF slides for my "Bring Your Own Virtual Devices: Frameworks for Software and Hardware Device Virtualization" talk from the 16th Workshop on Virtualization in High-Performance Cloud Computing are now available.

This talk covers out-of-process device interfaces including vhost (kernel), vhost-user, Linux VFIO, mdev, vfio-user, vDPA, and VDUSE. It gives a brief overview of each interface, how it works, and how to develop your own devices.

The growing number of out-of-process device interfaces available in QEMU/KVM can make it hard to understand and compare them. Each of these interfaces is designed for different use cases: for example, whether you want to pass through hardware or implement the device in software, and whether the device should run in the host kernel or in host userspace. This talk will give you the necessary knowledge to compare these interfaces yourself so you can decide which one is most appropriate for your use case.

For more information about the design of out-of-process device interfaces, see also my previous blog post about requirements for out-of-process devices.

Sunday, June 20, 2021

My performance benchmarking workflow (2021)

Benchmarking computer systems is time-consuming because setting up the necessary environment involves a lot of work. Over time I have built a workflow that mitigates the cost of setting up benchmarks and allows me to analyze performance more effectively. This blog post covers my workflow as of 2021.

Performance investigations often follow these steps:

  1. Set up hardware and software.
  2. Run initial benchmarks to verify that the bottleneck under investigation is being triggered.
  3. Collect a full set of benchmark results and monitoring data.
  4. Analyze results and form a hypothesis about the bottleneck.
  5. Implement a proof-of-concept optimization to test the hypothesis.
  6. Go to Step 3 until the desired benchmark results are reached, keeping those optimizations that helped.

This is a long process that is costly to pause/resume or replicate again in the future. Setting up hardware and software manually is both time-consuming and error-prone. Therefore we don't want to do it more than once. There is a risk that replicating the benchmark on another machine will fail to produce identical results due to differences in environments.

The consequence of such a high-overhead process is that we minimize its use because we cannot afford to run through it as often as we'd like. This means we cannot answer all the performance questions we'd like to, and therefore our understanding is limited. We cannot discover all the truths that would enable us to make performance improvements.

A more lightweight process would encourage experimentation and lead to higher productivity.

An ideal workflow

In a low-overhead world I would like to do the following:

  1. Set up hardware and software once only and be able to return to that state again in the future at the press of a button.
  2. Capture the full benchmarking environment so the configuration can be inspected and modified easily.
  3. Store benchmark results so that each run is available for further analysis in the future.

The workflow is actually quite similar to developing code with git:

  1. Create a topic branch for this performance investigation.
  2. Add an environment definition to produce the desired hardware and software state.
  3. Run the benchmark and collect the results.
  4. Commit the environment and results.
  5. Go to Step 2 to modify the environment (e.g. apply proof-of-concept patches to software) and repeat.

Since the performance investigation is captured in a git branch it's easy to switch to another investigation without losing history.

This is actually what I do! Git provides the storage and time machine functionality for easily pausing/resuming or replicating performance investigations.
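
For concreteness, here is a minimal sketch of how the branches behave (the branch names are made up for illustration):

$ git checkout -b virtio-blk-latency      # topic branch for a new investigation
...commit environment definitions and benchmark results as you go...
$ git checkout nvme-scaling               # pause it and resume a different investigation
$ git log --oneline                       # each commit is an environment + results snapshot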

Ansible

Ansible provides the automation system necessary to put hardware and software into the desired state for benchmarking. Ansible's killer feature is the large ecosystem of modules for handling tasks like installing packages, configuring virtual machines and containers, etc. I find Ansible more productive than Python or shell scripting thanks to Ansible's modules collection.

I've begun collecting Ansible tasks for Linux KVM development in virt-tasks. If you're wondering what the configuration for running a benchmark looks like, here is an Ansible playbook that builds QEMU and a guest kernel, creates a Fedora 34 virtual machine, runs the fio disk I/O benchmark, and collects the results:

---
- hosts: hosts
  tasks:
    - include_tasks: tasks/build-qemu.yml
      vars:
        - repo: https://gitlab.com/qemu-project/qemu.git
        - version: v6.0.0

    - name: create disk image
      include_tasks: tasks/virt-builder-create-image.yml
      vars:
        - os_version: fedora-34
        - size: 32G
        - output: /var/lib/libvirt/images/test.img
        - format: raw

    - name: build guest kernel
      include_tasks: tasks/build-kernel.yml
      vars:
        - repo: https://gitlab.com/stefanha/linux.git
        - version: cpuidle-haltpoll-virtqueue
        - config_src_path: files/.config

    - name: stop vm
      virt:
        name: test
        state: shutdown
      ignore_errors: yes

    - name: start vm
      include_tasks: tasks/start-vm.yml
      vars:
        - xml: "{{ lookup('file', 'files/test.xml') }}"
        - host: 192.168.122.192

- hosts: vms
  tasks:
    - name: install fio and rsync
      dnf:
        state: present
        name:
          - fio
          - rsync

    - name: run fio with poll_source disabled
      script: files/fio.sh

    - name: fetch fio output files
      synchronize:
        src: fio-output/
        dest: notebook/fio-output/poll_source-off
        mode: pull

    - name: run fio with poll_source enabled
      script: files/fio.sh --enable

    - name: fetch fio output files
      synchronize:
        src: fio-output/
        dest: notebook/fio-output/poll_source-on
        mode: pull

- hosts: hosts
  tasks:
    - name: stop vm
      virt:
        name: test
        state: shutdown

An important point is that the Ansible playbook sets up the full environment, runs all benchmarks, and collects the results. Unlike a playbook that runs a single benchmark, this one runs the full suite so that the playbook captures the entire environment that produced the results. This distinction is important when tweaking the benchmark configuration or trying out proof-of-concept optimizations. Each git commit needs to encompass the full environment so that the performance investigation is reproducible and can be resumed in the future.
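
As a rough sketch, one iteration might look like this, assuming the playbook above is saved as playbook.yml and the benchmark hosts are listed in an inventory file:

$ ansible-playbook -i inventory playbook.yml        # set up hosts, run fio, fetch results
$ git add playbook.yml files/ notebook/fio-output/  # environment definition plus results
$ git commit -m 'fio results with poll_source on/off'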

Jupyter

I recently started using JupyterLab notebooks for data analysis. JupyterLab provides a convenient environment for graphing results and organizing them in documents.

Thanks to the official Jupyter container images you can get a full Python data analysis environment running with just one command:

$ podman run --userns=keep-id -e JUPYTER_ENABLE_LAB=yes -p 8888:8888 --rm -v "$PWD":/home/jovyan/work:z jupyter/scipy-notebook

So far I have only scratched the surface of JupyterLab. It works well for visualizing data although the way I currently use it is not much different from writing a Python matplotlib script and running it from the command-line. In time I'll get a better appreciation for its strengths and weaknesses.

Conclusion

A git-based workflow that automates benchmark setup and stores the results goes a long way towards mitigating the high overhead of performance investigations. If I'm interrupted or need to switch to a different machine it's easy to resume the investigation. Having the results and details of the environment stored together makes it possible to revisit benchmark runs in the future to reproduce or tweak them. The combination of git, Ansible, and Jupyter achieves this workflow quite well, but if you're familiar with other tools I'd love to hear!

Monday, April 5, 2021

Learning programming languages

You may be productive and comfortable in one programming language but find the idea of learning a new programming language daunting. Or you may know and use multiple programming languages but haven't learnt a new one in a while. Or you might be a programming language geek who is just curious about how others dive into new programming languages and get productive quickly. No matter how easy or difficult picking up a new programming language is for you, this article explains how I like to learn new programming languages. Although people learn best in different ways, I hope you'll find my thought process interesting even if you decide to take a different approach.

Background

Language N+1

This article isn't aimed at learning to program. Learning your first programming language is much harder than learning an additional one. The reason is that many abstract concepts are involved in computer programming. When you first encounter programming, most languages require you to understand concepts like iteration, scopes, (im)mutability, arrays, modules, functions, and much more. The good news is that when you learn an additional language you'll already be familiar with common concepts and can therefore take a more streamlined approach in order to get up to speed quickly.

Courses, videos, exercises

There is a lot of educational material online that teaches various programming languages, but I don't find structured courses, videos, or exercises efficient. If you already know common programming concepts and have an idea of what you want to build in the new programming language, then it's more efficient to chart your own course. Materials that let you jump/skip around allow you to focus on information that is novel and that you actually need. Working through a series of exercises that someone else designed may mean practicing the wrong things, since you usually have the best idea of what to practice. Courses, videos, and exercises tend to be an "on the rails" experience where you are exposed to information in a linear fashion whether it's useful at the moment or not.

1. Understanding the computational model

The first question about a new programming language is "what is its computational model?". Sadly, many language manuals and websites do not describe the computational model beyond what programming paradigms are supported (object-oriented, concatenative, functional, logic programming, etc). The actual computational model may only become fully apparent later. Or it might be expressed in too much detail in a language standards document to be of use early on. In any case, it's worthwhile reading the programming language's website for information on the computational model to grasp the big picture.

It's the computational model that you need to understand in order to write programs. Often we think about syntax and language features too much when learning a new language. The computational model informs us how to break down requirements into programs. We approach logic programming differently from object-oriented programming in how we organize data and code. The syntax and to an extent even the language features don't matter.

Understanding the computational model also helps you situate the new programming language relative to others, especially programming languages that you already know. It will give you an idea of how different programming will be and where you'll need to learn new concepts.

2. The language tutorial

After familiarizing yourself with the computational model of the programming language, the next step is to learn the basic syntax and concepts. Most modern programming languages have an official tutorial available online. The tutorial introduces the language elements, usually with short examples, and its table of contents gives an overview of what the language consists of. The tutorial can be completed in a few hours or days. Unlike full courses, official programming language tutorials often lend themselves to non-linear reading, which is helpful when certain aspects of the language are already familiar or will not be relevant to you.

I remember reading the Python tutorial in an afternoon years ago, but watch out: at this point you might be able to write valid syntax but you won't be writing idiomatic code yet. There's that saying "you can write FORTRAN in any language". In order to write programs that are expressed naturally and take advantage of the language effectively, more effort will be necessary.

3. Writing toy programs

After becoming aware of the language elements the next step is to explore how the language works. This can be done by writing small programs. Often these toy programs are familiar tasks you've already solved in other languages. If you want to write games, maybe it's Pong. If you write web applications, it could be a todo list. There are lots of different well-known programs to write.

During the course of writing toy programs you'll encounter syntax errors or issues with the program. Learning to interpret common error messages is important because they will come up in more complicated scenarios later where it can be harder to resolve them if you haven't seen them before.

You'll also hit common tasks for which you need to find solutions in the standard library or language reference manual. Whether it's parsing command-line options, regular expression matching, HTTP requests, or error handling, the language probably has a way of doing it. Toy programs present a simple environment in which to explore the basic facilities of a programming language.

4. Gaining a deeper appreciation for the language

Once you have written some toy programs you'll be able to start writing your own programs that solve new problems. At this stage you start being productive but there is still more to learn. In particular, the language's idioms and patterns must be studied in order to write natural code. Once I have experience with the basics of a language I like to read the source code to the standard library, popular libraries, and popular applications. In the beginning this is hard because they use unfamiliar language features or library dependencies, but after following up on unknown parts of one program, you'll find it becomes easier to read other programs because your knowledge of the language has expanded.

At this point it is also worth looking for style guides, manuals on language idioms, and documentation on common gotchas or anti-patterns. These provide the information you need to think natively in the new programming language, which is what it takes to become fluent and capable of reading and writing real programs confidently.

Although I have presented steps in a linear order, learning complex subjects is often an iterative process. Sometimes I find myself jumping back and forth between steps as my understanding evolves.

Conclusion

Learning a new programming language is time-consuming no matter how you do it. However, it doesn't all need to happen upfront: after a few days of reading the documentation and experimenting with toy programs, it's possible to perform basic tasks. Learning how to use a language effectively by studying popular programs and reading guides is the quickest way I've found to reach fluency. Finally, it just takes practice!

Friday, March 12, 2021

Overcoming fear of public communication in open source

When given the choice between communicating in private or in public, many people opt for private communication. They send questions or unfinished patches to individuals instead of posting on mailing lists, forums, or chat rooms. This post explains why public communication is a faster and more efficient way for developers to communicate in open source communities than private communication. A big factor in the private vs public decision is psychological and I've provided a checklist to give you confidence when communicating in public.

Time and time again I find people initiating discussions through private channels when I know it would be advantageous to have them in public instead. I even keep an email reply template handy asking the sender to reach out to the public mailing list instead of communicating with me in private. Communicating in public is a good default unless you need confidentiality (security bugs, business reasons, etc). So why do people prefer to ask questions or send patches off-list?

Why we fear public communication

I'm interested in understanding why people avoid public communications channels in open source communities. The purpose of mailing lists, chat rooms, and forums is to engage in discussions and share knowledge. Members of these communities are interested in the topic and want to engage in discussion. But there are some common reasons I've found why people don't make use of public communications channels:

  • I don't want to create noise. Mailing lists are home to important discussions between experienced members of the community. Sending a question that an expert would never need to ask, or a patch that is still unfinished, can make you doubt whether it deserves to be sent at all. This is a fallacy. As long as the question or patches are relevant to the community and you have done your homework (see the checklist below), there is no need to worry about creating noise.
  • I will look dumb or seem like a bad programmer. When you need help it's likely that your understanding is incomplete. We often hold ourselves to artificially high standards when asking for help in public, yet when we observe others interacting in public we don't hold their questions or unfinished code against them. Only if they show a repeated pattern of sloppy work does it damage their reputation. Asking for help in public won't harm your image.
  • Too much traffic. Big open source communities are often so active that no single person can keep track of everything that is going on. Even core members of the community rely on filtering only topics relevant to them. Since filtering becomes necessary at scale anyway, there's no need to worry about sending too many messages.

A note about tone: in the past people were more likely to be put off by the unfriendly tone in chat rooms and mailing lists. Over time the tone seems to have improved in general, probably due to multiple factors like open source becoming more professional, codes of conduct making participants aware of their behavior, etc. I don't want to cover the pros and cons of these factors, but suffice to say that nowadays tone is less of a problem and that's a good thing for everyone.

Why public communication is faster and more efficient

Next let's look at the reasons why public communication is beneficial:

Including the community from the start avoids waste

If you ask for help in private it's possible that the answer you get will not end up being consensus in the community. When you eventually send patches to the community they might disagree with the approach you settled on in private and you'll have to redo your work to get the patches merged. This can be avoided by including the community from the start.

Similarly, you can avoid duplicating work when multiple people are investigating the same problem in private without knowing about each other. If you discuss what you are doing in public then you can collaborate and avoid spending time creating multiple solutions, only one of which can be merged.

Public discussions create a searchable knowledge base

If you find a solution to your problem in private, that doesn't help others who have the same question. Public discussions are typically archived and searchable on the web. When the next person has the same question they will find the answer online and won't need to ask at all!

Additionally, everyone benefits and learns from each other when questions are asked in public. We soak up knowledge by following these discussions. If they are private then we don't have this opportunity.

Get a reply even if the person you asked is unavailable

The person you intended to reach may be away due to timezones, holidays, etc. A private discussion is blocked until they respond to your message. Public discussions, on the other hand, can progress even when the original participants are unavailable. You can get an answer to your question faster by asking in public.

Public activity gives visibility to your work

Private discussions are invisible to your teammates, managers, and other people affected by your progress. When you communicate in public they will be able to plan better, help out when needed, and give you credit for the effort you are putting in.

A checklist before you press Send

Hopefully I have encouraged you to go ahead and ask questions or send unfinished patches in public. Here is a checklist that will help you feel confident about communicating in public:

Before asking questions...

  1. Have you searched the web, documentation, and code?
  2. Have you added printfs, GDB breakpoints, or enabled tracing to understand the behavior of the system?

Before sending patches...

  1. Is the code formatted according to the coding standard?
  2. Are the error cases handled, memory allocation/ownership correctly implemented, and thread safety addressed? Most of the time you can figure these out yourself and forgetting to do so may distract from the topics you want help with.
  3. Did you add todo comments pointing out unfinished aspects of the code? Being upfront about what is missing helps readers understand the status of the code and saves them time trying to distinguish requirements you forgot from things that are simply not yet implemented.
  4. Do the commit descriptions explain the purpose of the code changes and does the cover letter give an overview of the patch series?
  5. Are you using git-format-patch --subject-prefix RFC (same for git-publish) to mark your patches as a "request for comment"? This tells people you are seeking input on unfinished code.

Finally, do you know who to CC on emails or mention in comments/chats? Look up the relevant maintainers or active developers using git-log(1), scripts/get_maintainer.pl, etc.
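
For example, sending an RFC series might look roughly like this (the branch name and email addresses are placeholders):

$ git format-patch --subject-prefix=RFC -o rfc/ origin/master..my-feature
$ ./scripts/get_maintainer.pl rfc/*.patch    # find who to CC (Linux and QEMU trees ship this script)
$ git send-email --to=devel-list@example.org --cc=maintainer@example.org rfc/*.patch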

Conclusion

Private communication can be slower, less efficient, and adds less value to an open source community. For many people public communication feels a little scary and they prefer to avoid it. I hope that by understanding the advantages of public communication you will be motivated to use public communication channels more.

Wednesday, February 24, 2021

Milestone Systems: Software that changes how things are done

Every few years a project comes out with a new approach that becomes influential. Often it involves combining existing concepts in a novel way. People argue about whether the project is actually novel or whether it was just in the right place at the right time and popularized existing technology. Regardless, I find these projects fascinating and try to learn about them because they are milestones that future systems are based on.

Here is a short list of projects that I think fall into this category. I hope you enjoy them (if you haven't already explored them). Send me your picks!

Tor

Tor is an onion router. It enables (mostly) anonymous communication by tunneling encrypted connections. The client does not know the IP address of the server (when connecting to so-called hidden services), the server does not know the IP address of the client, and the intermediate hops only know about their immediate predecessor and successor.

The design of Tor is described in a paper.

BitTorrent

BitTorrent is a decentralized peer-to-peer file sharing protocol that can be used to reduce load on file hosting servers and improve download times. It's commonly used to share copyrighted material, but is also used by Linux distributions to publish ISO images and by software update systems.

A central aspect of BitTorrent is that peers exchange pieces of the file amongst themselves. Pieces received from untrusted peers are checked against cryptographic hashes from the torrent metadata (a Merkle tree in newer versions of the protocol) to ensure that data has not been corrupted or manipulated.

A paper about the economics of BitTorrent described some of the ideas behind it. The actual protocol is described by the protocol specification.

git

Git is the most popular version control system as of 2020. It replaced the older CVS and Subversion systems that were widely used before it. Other systems like Mercurial, Darcs, Perforce, and BitKeeper had similar use cases and ideas.

Git is a content-addressable object store with a convention for representing trees of files as well as commits and tags. I wrote about how the object store is implemented here if you want to learn about pack files and deltas.
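
If you want to see the content-addressable store in action, a quick experiment in any git repository looks something like this:

$ echo 'hello, world' | git hash-object -w --stdin   # store a blob and print its object ID
$ git cat-file -p <object-id-from-above>             # retrieve the content by hash
$ git cat-file -p HEAD^{tree}                        # a tree object maps filenames to object IDs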

Bitcoin

Bitcoin is a decentralized currency, also known as cryptocurrency. A network of mutually untrusted nodes maintains a ledger called the blockchain that records transactions. Bitcoin is famous for mining where nodes compete to solve a computationally-expensive problem in order to extend the ledger.

What is interesting about Bitcoin is that the blockchain prevents abuse as long as more than half of the mining power is not controlled by an attacker or colluding. In other words, it achieves decentralized consensus, although there can be short-lived splits where not all nodes agree on the current state.

The Bitcoin paper gives an overview of how the system works.

Conclusion

I hope this was a fun post that motivated you to look at a system you haven't studied yet or made you think about systems that you consider milestone systems. Please get in touch if you want to share yours!

Tuesday, February 16, 2021

Video and slides available for "The Evolution of File Descriptor Monitoring in Linux"

My FOSDEM 2021 talk "The Evolution of File Descriptor Monitoring in Linux: From select(2) to io_uring" is now available:

The talk compares the file descriptor monitoring system calls available in Linux and discusses their design. Benchmark results show how well they scale when there are many file descriptors. I hope this is a useful overview of this important kernel feature that GUI applications, network services, and many other programs rely on.

If you are interested in API design and performance, this talk highlights how different approaches like stateless vs stateful APIs can affect performance and how to minimize the number of API calls through careful design.

Enjoy!