KubeVirt makes it possible to run virtual machines on Kubernetes alongside container workloads. Virtual machines are configured using VirtualMachineInstance YAML. But under the hood of KubeVirt lies the same libvirt tooling that is commonly used to run KVM virtual machines on Linux. Accessing libvirt can be convenient for development and troubleshooting.
Note that bypassing KubeVirt must be done carefully: doing this in production may interfere with running VMs. If a feature is missing from KubeVirt, please request it instead of relying on these workarounds.
In short, when a VirtualMachineInstance is created, KubeVirt's virt-controller schedules a virt-launcher Pod for it. Inside that Pod, virt-launcher generates libvirt domain XML from the VirtualMachineInstance specification and hands it to the libvirt instance running in the same Pod, which defines and starts the KVM domain.
Accessing virsh
Libvirt's virsh command-line tool is available inside the virt-launcher Pod that runs a virtual machine. First determine vm1's virt-launcher Pod name by filtering on its label (thanks to Alice Frosi for this trick!):
$ kubectl get pod -l vm.kubevirt.io/name=vm1
NAME                      READY   STATUS    RESTARTS   AGE
virt-launcher-vm1-5gxvg   2/2     Running   0          8m13s
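If you script this, the Pod name can be captured directly. A small convenience, assuming exactly one virt-launcher Pod matches the label:

$ POD=$(kubectl get pod -l vm.kubevirt.io/name=vm1 \
        -o jsonpath='{.items[0].metadata.name}')
$ echo $POD
virt-launcher-vm1-5gxvg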
Find the name of the libvirt domain (this is guessable but it doesn't hurt to check):
$ kubectl exec virt-launcher-vm1-5gxvg -- virsh list
 Id   Name          State
-----------------------------
 1    default_vm1   running
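The domain is named <namespace>_<vmi-name>, so vm1 in the default namespace becomes default_vm1. You can also resolve the name from the domain id shown above instead of guessing:

$ kubectl exec virt-launcher-vm1-5gxvg -- virsh domname 1
default_vm1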
Arbitrary virsh commands can be invoked. Here is an example of dumping the libvirt domain XML:
$ kubectl exec virt-launcher-vm1-5gxvg -- virsh dumpxml default_vm1
<domain type='kvm' id='1'>
  <name>default_vm1</name>
...
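Other read-only commands work the same way. For instance, virsh domstats reports state, vCPU, and I/O statistics (output abbreviated here):

$ kubectl exec virt-launcher-vm1-5gxvg -- virsh domstats default_vm1
Domain: 'default_vm1'
  state.state=1
  state.reason=1
  ...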
Viewing libvirt logs and the full QEMU command-line
The libvirt logs are captured by Kubernetes, so you can view them with kubectl logs <virt-launcher-pod-name>. If you don't know the virt-launcher Pod name, check with kubectl get pod and look for your virtual machine's name.
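For example, to pull the QEMU-related lines out of the logs (the main container in a virt-launcher Pod is typically named compute, though this may vary across KubeVirt versions):

$ kubectl logs <virt-launcher-pod-name> -c compute | grep -i qemu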
The full QEMU command-line is part of the libvirt logs, but unescaping the JSON string is inconvenient. Here is another way to get the full QEMU command-line:
$ kubectl exec <virt-launcher-pod-name> -- ps aux | grep qemu
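If the ps output is hard to read, printing one argument per line may help. This is a sketch assuming the QEMU binary inside the container is named qemu-kvm and that pidof is available; adjust the binary name (e.g. qemu-system-x86_64) if needed:

$ kubectl exec <virt-launcher-pod-name> -- \
    sh -c 'tr "\0" "\n" < /proc/$(pidof qemu-kvm)/cmdline'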
Customizing KubeVirt's libvirt domain XML
KubeVirt has a feature for customizing libvirt domain XML called hook sidecars. After the libvirt XML is generated, it is sent to a user-defined container that processes the XML and returns it back. The libvirt domain is defined using this processed XML. To learn more about how it works, check out the documentation.
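As a minimal sketch of the interface: with the generic sidecar-shim image described below, an onDefineDomain script receives the domain XML as a command-line argument and prints the (possibly modified) XML on stdout. In the Python example later in this post the XML arrives as the fourth argument, so a do-nothing hook could be as simple as:

#!/bin/sh
# Pass-through onDefineDomain hook: print the domain XML unchanged.
echo "$4"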
Hook sidecars are available when the Sidecar feature gate is enabled in the kubevirt/kubevirt custom resource. Normally only the cluster administrator can modify the kubevirt CR, so check that you are allowed to update it before trying this feature:
$ kubectl auth can-i update kubevirt/kubevirt -n kubevirt
yes
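If you are the cluster administrator, one way to enable the feature gate is a JSON patch. This is a sketch, assuming spec.configuration.developerConfiguration.featureGates already exists in your kubevirt CR; the KubeVirt resource in the YAML below achieves the same thing declaratively:

$ kubectl patch kubevirt kubevirt -n kubevirt --type=json \
    -p '[{"op": "add", "path": "/spec/configuration/developerConfiguration/featureGates/-", "value": "Sidecar"}]'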
Although you can provide a complete container image for the hook sidecar, there is a shortcut if you just want to run a script. A generic hook sidecar image is available that launches a script which can be provided as a ConfigMap. Here is example YAML including a ConfigMap that I've used to test the libvirt IOThread Virtqueue Mapping feature:
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - Sidecar
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fedora"
spec:
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
  source:
    http:
      url: "https://download.fedoraproject.org/pub/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.raw.xz"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sidecar-script
data:
  my_script.sh: |
    #!/usr/bin/env python3
    import xml.etree.ElementTree as ET
    import os.path
    import sys

    NUM_IOTHREADS = 4
    VOLUME_NAME = 'data'  # VirtualMachine volume name

    def main(xml):
        domain = ET.fromstring(xml)

        domain.find('iothreads').text = str(NUM_IOTHREADS)

        disk = domain.find(f"./devices/disk/alias[@name='ua-{VOLUME_NAME}']..")
        driver = disk.find('driver')
        del driver.attrib['iothread']
        iothreads = ET.SubElement(driver, 'iothreads')
        for i in range(NUM_IOTHREADS):
            iothread = ET.SubElement(iothreads, 'iothread')
            iothread.set('id', str(i + 1))

        ET.dump(domain)

    if __name__ == "__main__":
        # Workaround for https://github.com/kubevirt/kubevirt/issues/11276
        if os.path.exists('/tmp/ran-once'):
            main(sys.argv[4])
        else:
            open('/tmp/ran-once', 'wb')
            print(sys.argv[4])
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: vm1
  annotations:
    hooks.kubevirt.io/hookSidecars: '[{"args": ["--version", "v1alpha3"], "image": "kubevirt/sidecar-shim:20240108_99b6c4bdb", "configMap": {"name": "sidecar-script", "key": "my_script.sh", "hookPath": "/usr/bin/onDefineDomain"}}]'
spec:
  domain:
    ioThreadsPolicy: auto
    cpu:
      cores: 8
    devices:
      blockMultiQueue: true
      disks:
      - disk:
          bus: virtio
        name: disk0
      - disk:
          bus: virtio
        name: data
    machine:
      type: q35
    resources:
      requests:
        memory: 1024M
  volumes:
  - name: disk0
    persistentVolumeClaim:
      claimName: fedora
  - name: data
    emptyDisk:
      capacity: 8Gi
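Once the VM is running, you can confirm that the hook took effect by dumping the domain XML again and looking for the iothread elements (the pod lookup follows the earlier examples; your names will differ):

$ kubectl exec $(kubectl get pod -l vm.kubevirt.io/name=vm1 \
      -o jsonpath='{.items[0].metadata.name}') -- \
    virsh dumpxml default_vm1 | grep -i iothread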
If you need to go down one level further and customize the QEMU command-line, see my post on passing QEMU command-line options in libvirt domain XML.
More KubeVirt debugging tricks
The official KubeVirt documentation has a Virtualization Debugging section with more tricks for customizing libvirt logging, launching QEMU with strace or gdb, etc. Thanks to Alice Frosi for sharing the link!
Conclusion
It is possible to get libvirt access in KubeVirt for development and testing. This can make troubleshooting easier and it gives you the full range of libvirt domain XML if you want to experiment with features that are not yet exposed by KubeVirt.