Wednesday, 19 August 2015

virtio-vsock: Zero-configuration host/guest communication

Slides are available for my talk at KVM Forum 2015 about virtio-vsock: Zero-configuration host/guest communication.

virtio-vsock is a new host/guest communications mechanism that allows applications to use the Sockets API to communicate between the hypervisor and virtual machines. It uses the AF_VSOCK address family which was introduced in Linux in 2013.
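
For example, here is a minimal sketch of a guest application connecting to a host service with the Sockets API. The port number 1234 is arbitrary and chosen only for illustration; it assumes a listener on the host side.

    /* Guest side: connect to the host over AF_VSOCK (minimal sketch). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        struct sockaddr_vm addr = {
            .svm_family = AF_VSOCK,
            .svm_cid    = VMADDR_CID_HOST,  /* well-known CID of the host */
            .svm_port   = 1234,             /* illustrative port number */
        };
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        const char msg[] = "hello from the guest\n";
        write(fd, msg, sizeof(msg) - 1);  /* send a greeting to the host */
        close(fd);
        return 0;
    }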

There are several advantages over virtio-serial. The main advantage is the familiar Sockets API semantics, which are more convenient than serial ports. See the slides for full details on what virtio-vsock offers.

8 comments:

  1. Very interesting, I've tested it and it works very well! I have a few questions on how it works:
    1) Basically, is it a new virtio device which uses vhost to speed up packet processing?
    2) Have you tested the performance of the communication?

    I'm also looking at other kinds of communication, such as guest-to-guest. Is there a way to achieve that? Maybe with virtio-vsock or another mechanism?

  2. 1. No, vhost is used in order to hook into the host network stack, not for performance. It allows applications on the host to use socket(AF_VSOCK, ...) natively (see the sketch at the end of this reply). A userspace program like QEMU cannot register new socket address families, so host kernel code is needed for this.

    2. No serious benchmarking has been done. Performance should be comparable to virtio-serial.

    Regarding guest-to-guest communication, networking is the best way to achieve that today. The exception is low-latency use cases where exitless VM-to-VM communication is desirable (e.g. DPDK between 2 VMs on the same host) - the solutions for these use cases aren't very mature yet but there is work on vhost-user in that area.
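
    To illustrate the host side, here is a minimal sketch of a host application accepting a guest connection natively over AF_VSOCK, which is what vhost_vsock makes possible. The port number 1234 is arbitrary and only for illustration.

        /* Host side: accept one AF_VSOCK connection from a guest (minimal sketch). */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/vm_sockets.h>

        int main(void)
        {
            struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid    = VMADDR_CID_ANY,  /* accept from any guest CID */
                .svm_port   = 1234,            /* illustrative port number */
            };
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
                listen(fd, 1) < 0) {
                perror("bind/listen");
                return 1;
            }

            int conn = accept(fd, NULL, NULL);  /* blocks until a guest connects */
            if (conn < 0) {
                perror("accept");
                return 1;
            }

            char buf[128];
            ssize_t n = read(conn, buf, sizeof(buf));
            if (n > 0) {
                fwrite(buf, 1, n, stdout);  /* print whatever the guest sent */
            }
            close(conn);
            close(fd);
            return 0;
        }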

    Replies
    1. So tell me if I'm wrong, but basically vhost_vsock is the "in-kernel" vhost device, right?

      I was more interested, as you said, in low-latency use cases where the communication doesn't go through the hypervisor but goes directly to the other guest (the communication could be just a signal or an interrupt between two guests). Is vhost-user the only option for now? Do you know if there will be something new in the near future?

      Thank you for your time. Best regards,

      Vincent

    2. Yes, vhost_vsock is an in-kernel vhost device on the host.

      For low-latency VM-to-VM communication, please take a look at the discussion here:
      https://lists.nongnu.org/archive/html/qemu-devel/2015-08/msg03993.html

  3. Hello, does this work on ARM?

    Replies
    1. It will eventually work with ARM KVM, but I have not tested it yet. For non-KVM use cases (QEMU TCG), the vhost_vsock module isn't used, but there is currently some work to integrate virtio-vsock into QEMU userspace as well.

  4. Hello, I did a few tests of the vsock code. Stream sockets (SOCK_STREAM) work without problems, but when I tried datagram sockets (SOCK_DGRAM), after a certain number of packets have been sent the program (guest side) blocks. I can see in the kernel log that messages such as "GOT CREDIT REQUEST" are received on the host and "GOT CREDIT UPDATE" on the guest.

    I took a quick look at the code and saw that in virtio_transport_recv_pkt(), when a credit update is received, nothing seems to happen. Is that right?

    Thank you and regards,

    Vincent

    Replies
    1. Hi Vincent,
      The credit-based flow control for SOCK_DGRAM is broken and will be removed soon. SOCK_DGRAM is best-effort delivery so the flow control mechanism is unnecessary.

      If you follow netdev@vger.kernel.org and kvm@vger.kernel.org mailing lists, you'll see when I send the patches.

      Stefan
