nested virtualization with KVM: -enable-kvm in qemu inside a KVM guest

ribamar · May 12, 2017

In my already-virtualized host, passing the options -enable-kvm -m 1024 fails:

qemu-system-x86_64  -vga std -enable-kvm -m 1024   -monitor telnet:localhost:9313,server,nowait -drive file=my_img.img,cache=none
# Could not access KVM kernel module: No such file or directory
# failed to initialize KVM: No such file or directory

If I remove the options -enable-kvm -m 1024, qemu loads (but it takes forever, because it falls back to software emulation):

qemu-system-x86_64  -vga std  -monitor telnet:localhost:9313,server,nowait -drive file=my_img.img,cache=none
# qemu running, OK, but image taking forever to load.
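
One way to confirm that qemu fell back to software emulation (TCG) is to ask its monitor, which the commands above already expose on telnet port 9313 (a sketch: info kvm is a standard monitor command, but the output shown here is the expected form, not a capture from this machine):

telnet localhost 9313
# (qemu) info kvm
# kvm support: disabled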

Surely this virtualized host of mine is capable of nesting its own virtualization. Every source I find [like here: https://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html ] tells me to check the file /sys/module/kvm_intel/parameters/nested, which simply isn't available here, because kvm-intel isn't loaded and can't be loaded from inside the guest:

sudo modprobe  kvm-intel
# modprobe: ERROR: could not insert 'kvm_intel': Operation not supported
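
For contrast, on a bare-metal Intel host where the module does load, the check from that guide is expected to look like this (a sketch; newer kernels print 1/0 instead of Y/N):

cat /sys/module/kvm_intel/parameters/nested
# Y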

That method of checking nested virtualization probably only works on bare metal. So, how do I enable (forward the support of) KVM from inside a KVM guest?

Additional info:

lscpu # from inside the virtualized host
# Architecture:          x86_64
# ...
# Vendor ID:             GenuineIntel
# CPU family:            6
# Model:                 13
# Model name:            QEMU Virtual CPU version (cpu64-rhel6)
# Stepping:              3 
# ...
# Hypervisor vendor:     KVM

ltrace of qemu:

# open64("/dev/kvm", 524290, 00)                   = -1
# __errno_location()                               = 0x7f958673c730
# __fprintf_chk(0x7f957fd81060, 1, 0x7f9586474ce0, 0Could not access KVM kernel module: No such file or directory
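
The failing open64 call targets /dev/kvm, so the device node is simply absent in the guest, which can be confirmed directly (a minimal check, assuming a standard Linux guest):

ls -l /dev/kvm
# ls: cannot access '/dev/kvm': No such file or directory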

Answer

ribamar · May 15, 2017

To test whether KVM support is enabled in the current host (i.e., whether it will work in the virtual machine), do:

grep -E "(vmx|svm)" /proc/cpuinfo 
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce tbm topoext perfctr_core perfctr_nb arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vmmcall bmi1

In the question:

grep -E "(vmx|svm)" /proc/cpuinfo | wc -l 
0

A count of 0 means the virtualization flag is not exposed to the guest, so -enable-kvm won't work there. Action on the bare-metal machine is required.
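
A sketch of that bare-metal action, assuming an Intel CPU (substitute kvm_amd and the svm flag on AMD); reloading the module requires that no VM is running:

# on the bare-metal host: reload kvm_intel with nested support enabled
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1
cat /sys/module/kvm_intel/parameters/nested
# Y
# (to persist across reboots: options kvm_intel nested=1 in /etc/modprobe.d/)

# restart the outer guest with a CPU model that exposes vmx,
# e.g. by passing the host CPU through:
qemu-system-x86_64 -cpu host -enable-kvm -m 1024 ...

After that, grep -E "(vmx|svm)" /proc/cpuinfo inside the guest should return a non-zero count, /dev/kvm should appear, and -enable-kvm should work in the nested qemu.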