OpenBSD FAQ - Virtualization

Introduction

OpenBSD comes with the vmm(4) hypervisor and vmd(8) daemon. Virtual machines can be orchestrated with the vmctl(8) control utility, using configuration settings stored in the vm.conf(5) file.

Supported guest operating systems are currently limited to OpenBSD and Linux. Graphics (VGA) support is not available at this time, so the guest OS must support a serial console.

Prerequisites

A CPU with nested paging support is required to use vmm(4). Support can be checked by looking at the processor feature flags: RVI (with SVM) for AMD or EPT (with VMX) for Intel. In some cases, virtualization capabilities must be manually enabled in the system's BIOS. Be sure to run the fw_update(8) command after doing so to get the required vmm-firmware package.
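Running fw_update(8) without arguments fetches any firmware the system needs, including vmm-firmware on capable hardware:

# fw_update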

Processor compatibility can be checked with the following command:

$ dmesg | egrep '(VMX/EPT|SVM/RVI)'
Before going further, enable and start the vmd(8) service.
# rcctl enable vmd
# rcctl start vmd
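rcctl(8) can also confirm that the daemon is running:

# rcctl check vmd
vmd(ok)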

Starting a VM

In the following example, a VM will be created with 50GB of disk space and 1GB of RAM. It will boot from the install73.iso image file.
# vmctl create -s 50G disk.qcow2
vmctl: qcow2 imagefile created
# vmctl start -m 1G -L -i 1 -r install73.iso -d disk.qcow2 example
vmctl: started vm 1 successfully, tty /dev/ttyp8
# vmctl show
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
    1 72118     1    1.0G   88.1M   ttyp8         root example
To view the console of the newly created VM, attach to its serial console:
# vmctl console example
Connected to /dev/ttyp8 (speed 115200)
The escape sequence ~. is needed to leave the serial console. See the cu(1) man page for more info. When using a vmctl serial console over SSH, the ~ (tilde) character must be escaped to prevent ssh(1) from dropping the connection. To exit a serial console over SSH, use ~~. instead.

The VM can be stopped using vmctl(8).

# vmctl stop example
stopping vm: requested to shutdown vm 1
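A guest that does not respond to the shutdown request can be forcefully powered off with the -f flag:

# vmctl stop -f example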
Virtual machines can be started with or without a vm.conf(5) file in place. The following /etc/vm.conf example would replicate the above configuration:
vm "example" {
    memory 1G
    enable
    disk /home/user/disk.qcow2
    local interface
}
Some configuration properties in vm.conf(5) can be reloaded by vmd(8) on the fly. Other changes, like adjusting the amount of RAM or disk space, require the VM to be restarted.
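After editing /etc/vm.conf, the running daemon can be asked to re-read it:

# vmctl reload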

Networking

Network access to vmm(4) guests can be configured in a number of different ways, four of which are detailed in this section.

In the examples below, two IPv4 address ranges will be used for different use cases:

    100.64.0.0/10: shared address space (RFC 6598), used by vmd(8) for the local interfaces it manages
    10.0.0.0/8: private addresses (RFC 1918), used here for virtual networks managed by the host

Option 1 - VMs only need to talk to the host and each other

For this setup, vmm uses local interfaces: interfaces that use the shared address space defined above.

Using vmctl(8)'s -L flag creates a local interface in the guest which will receive an address from vmd via DHCP. This essentially creates two interfaces: one for the host and the other for the VM.
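Inside an OpenBSD guest, the local interface appears as vio0 and can simply request its lease from vmd; on a recent guest, for example:

# echo 'inet autoconf' > /etc/hostname.vio0
# sh /etc/netstart vio0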

Option 2 - NAT for the VMs

This setup builds upon the previous one and allows VMs to connect beyond the host. IP forwarding is required for it to work, as shown below.
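IP forwarding can be enabled immediately and made persistent across reboots:

# sysctl net.inet.ip.forwarding=1
# echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf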

The following line in /etc/pf.conf will enable Network Address Translation and redirect DNS requests to the specified server:

match out on egress from 100.64.0.0/10 to any nat-to (egress)
pass in proto { udp tcp } from 100.64.0.0/10 to any port domain \
	rdr-to $dns_server port domain
Reload the pf ruleset and the VM(s) can now connect to the internet.
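The ruleset can be reloaded with pfctl(8). Note that dns_server is not a predefined macro; set it near the top of /etc/pf.conf to a resolver of your choice (the address below is only a placeholder):

dns_server = "192.0.2.53"

# pfctl -f /etc/pf.conf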

Option 3 - Additional control over the VM network configuration

Sometimes you may want additional control over the virtual network for your VMs, such as being able to put certain ones on their own virtual switch. This can be done using a bridge(4) and a vether(4) interface.

Create a vether0 interface that will have a private IPv4 address as defined above. In this example, we'll use 10.0.0.1 on the 10.0.0.0/24 network.

# echo 'inet 10.0.0.1 255.255.255.0' > /etc/hostname.vether0
# sh /etc/netstart vether0
Create the bridge0 interface with the vether0 interface as a bridge port:
# echo 'add vether0' > /etc/hostname.bridge0
# sh /etc/netstart bridge0
Ensure that NAT is set up properly if the guests on the virtual network need access beyond the physical machine. An adjusted NAT line in /etc/pf.conf might look like this:
match out on egress from vether0:network to any nat-to (egress)
The following lines in vm.conf(5) can be used to ensure that a virtual switch is defined:
switch "my_switch" {
    interface bridge0
}

vm "my_vm" {
    ...
    interface { switch "my_switch" }
}
Inside the my_vm guest, it's now possible to assign vio0 an address on the 10.0.0.0/24 network and set the default route to 10.0.0.1.
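For example, a static setup inside the guest (the guest address here is only an example) could be:

# echo 'inet 10.0.0.2 255.255.255.0' > /etc/hostname.vio0
# echo '10.0.0.1' > /etc/mygate
# sh /etc/netstart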

For convenience, you may wish to set up a DHCP server on vether0.
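A minimal dhcpd(8) configuration for this network might look like the following; the address range is arbitrary, and domain-name-servers should point at a resolver the guests can actually reach:

# cat /etc/dhcpd.conf
subnet 10.0.0.0 netmask 255.255.255.0 {
        option routers 10.0.0.1;
        option domain-name-servers 10.0.0.1;
        range 10.0.0.10 10.0.0.50;
}
# rcctl enable dhcpd
# rcctl set dhcpd flags vether0
# rcctl start dhcpd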

Option 4 - VMs as real hosts on the same network

In this scenario, the VM interface will be attached to the same network as the host so it can be configured as if it were physically connected to the host network. This option only works for Ethernet-based devices, as the IEEE 802.11 standard prevents wireless interfaces from participating in network bridges.

Create the bridge0 interface with the host network interface as a bridge port. In this example, the host network interface is em0 - you should substitute the interface name that you wish to connect the VM to:

# echo 'add em0' > /etc/hostname.bridge0
# sh /etc/netstart bridge0
As done in the previous example, create or modify the vm.conf(5) file to ensure that a virtual switch is defined:
switch "my_switch" {
    interface bridge0
}

vm "my_vm" {
    ...
    interface { switch "my_switch" }
}
The my_vm guest can now participate in the host network as if it were physically connected.
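For example, the guest could take a static address on the host network (substitute addressing appropriate for your LAN):

# echo 'inet 192.168.1.50 255.255.255.0' > /etc/hostname.vio0
# echo '192.168.1.1' > /etc/mygate
# sh /etc/netstart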

Note: If the host interface (em0 in the above example) is also configured using DHCP, dhcpleased(8) running on that interface may block DHCP requests from reaching guest VMs. In this case, you should select a different host interface not using DHCP, or terminate any dhcpleased(8) processes assigned to that interface before starting VMs, or use static IP addresses for the VMs.