Running VMware ESXi under oVirt


Getting VMware ESXi to install and run properly under oVirt was an interesting experience. However, it does require patching Qemu.

If you’re not too fussed about patching SRPMS, I have already patched and uploaded the RPMS for you.

Please note: I am assuming your oVirt host is Fedora 20. You may skip Step 1 if you have already installed my patched RPMS. However, read on if you're curious why the patches are needed.

Step 1 – Patching Qemu

Current SRPM for patching Qemu is here: qemu-1.6.2-7.fc20.src.rpm

Please note: I am using the SRPM from the Fedora repository instead of the latest source from Qemu. I prefer it this way as it reduces potential compatibility issues.

QEMU Patch1: VMware IO Port Emulation

QEMU's emulation of the VMware I/O port is incomplete; when you install ESXi it will attempt to use this port and PSOD. You need to disable the emulation in QEMU itself with the following patch:

Dagrh has submitted a patch upstream, which is a better version of the above patch, and it has been merged into all major versions of QEMU.
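For reference, if your QEMU build already includes that upstream change (I believe 2.2 and later), you should not need to rebuild at all; the port can simply be switched off per machine. The exact invocation below is an assumption on my part rather than something tested on this setup:

qemu-system-x86_64 -machine pc,vmport=off ...

Libvirt exposes the same switch as <vmport state='off'/> under <features> in the domain XML.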

QEMU Patch2: vmxnet3 Does Not Pad Short Frames

I fully installed VMware ESXi and realized networking was not working as expected. It was pretty bizarre as the ESXi guest would receive packets from a routed network without issues.

Oddly though, any Linux guests on the same bridge and host could communicate with each other, but failed to communicate with the ESXi guest itself or with the guests that ESXi hosted. I have to thank Paul Sherratt for helping me figure out this odd networking issue. The problem was that short Ethernet frames were not being padded; ESXi would simply discard these frames, including ARP requests, and this prevented communication.

So I thought the best way to fix this problem was to write a patch to make vmxnet3 pad short frames itself. This ensures frames do not get discarded by the ESXi guest.
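The idea is straightforward: Ethernet frames must be at least 60 bytes long (before the 4-byte FCS), so anything shorter leaving the vmxnet3 device gets zero-padded up to that minimum. The real patch lives in QEMU's vmxnet3 emulation and is written in C; the snippet below is only a sketch of the padding logic, not the actual implementation:

# Sketch of the padding logic only, not the QEMU patch itself.
MIN_FRAME_LEN = 60  # minimum Ethernet frame length, excluding the FCS

def pad_short_frame(frame: bytes) -> bytes:
    # Zero-pad any frame shorter than the Ethernet minimum before transmit.
    if len(frame) < MIN_FRAME_LEN:
        frame += b'\x00' * (MIN_FRAME_LEN - len(frame))
    return frame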

I have submitted this patch upstream, and it has been merged into all major versions of QEMU.

Step 2 – oVirt Host/Engine Preparation

Installing Required VDSM Hooks

The following two hooks allow nested virtualization and MAC spoofing.
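Both hooks ship as packages for the oVirt host; assuming the standard package names, they can be installed with:

yum install -y vdsm-hook-nestedvt vdsm-hook-macspoof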

Please note: You’ll need to reboot the oVirt host at some point before nested virtualization is enabled for the kvm_{intel,amd} modules. However, VMware ESXi will install without this being enabled so you can continue.

By default oVirt only allows you to select the following NIC models: rtl8139, e1000 and VirtIO; unfortunately ESXi does not work with any of them. I have created a custom VDSM hook so the NIC type can be changed at boot time.

Create the file /usr/libexec/vdsm/hooks/before_vm_start/50_vmwarehost on the oVirt host with the code below and ensure it has 755 permissions.
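A minimal sketch of such a hook, using VDSM's hooking module to rewrite the libvirt domain XML whenever the vmwarehost custom property is set to true, looks like this (treat it as an outline rather than the exact listing):

#!/usr/bin/python
# before_vm_start hook (sketch): switch VirtIO NICs over to vmxnet3 when
# the VM carries the custom property vmwarehost=true. VDSM exposes custom
# properties to hooks as environment variables.
import os
import hooking

if os.environ.get('vmwarehost', 'false').lower() == 'true':
    domxml = hooking.read_domxml()
    for iface in domxml.getElementsByTagName('interface'):
        for model in iface.getElementsByTagName('model'):
            if model.getAttribute('type') == 'virtio':
                model.setAttribute('type', 'vmxnet3')
    hooking.write_domxml(domxml)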

Now that we have finished installing the hooks we need to make sure the engine and host can use them.
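On the engine side the custom properties have to be whitelisted before they show up in the UI. Assuming nothing else already uses UserDefinedVMProperties (and noting that your engine version may also want a --cver flag), something along these lines should do it, followed by an engine restart:

engine-config -s "UserDefinedVMProperties=vmwarehost=^(true|false)$;macspoof=^(true|false)$"
systemctl restart ovirt-engine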

Disabling MSR Emulation

This will need to be disabled globally.
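The relevant knob here, assuming the usual kvm module option is what is being disabled, is ignore_msrs: it tells KVM to ignore guest accesses to MSRs it does not implement instead of injecting faults, and it applies host-wide:

# /etc/modprobe.d/kvm.conf (assumed setting)
options kvm ignore_msrs=1

Reload the kvm modules or reboot the host for it to take effect.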

This is all we need to do in the back end.

Step 3 – Create VMware ESXi VM

To get the VMware ESXi guest to install properly, ensure the following are set:

– 2 x vCPUs
– 4GB RAM
– Operating System: Other OS
– Optimized for: Server
– Custom Properties: vmwarehost = true, macspoof = true
– NIC Type: VirtIO (My hook will change this at run time to vmxnet3)
– Disk Type: IDE

(Screenshot: oVirt-ESXi)

Step 4 – Modify ESXi Host Configuration

You will have noticed the VMware installer warning that it cannot run 64-bit guests due to invalid virtualization support, even though KVM was configured to pass the required nested virtualization flags to the virtual machine. KVM does not fully implement all the required features, so VMware must emulate them itself.

Ensure you add the following options to /etc/vmware/config:
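These are the same two settings that come up again in the comments below:

vmx.allowNested = "TRUE"
hv.assumeEnabled = "TRUE"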

Once you have added those options to the configuration file, reboot your ESXi server.

Category: Virtualization

9 comments on “Running VMware ESXi under oVirt”

  1. I had spent a week in frustration, prior to coming across your article, trying to set up a virtualization harness.
    Went through the gamut of
    – Proxmox (kind of works but is not kind of commercial)
    – Mirantis 5.5.1 using virtualbox scripts (failed)
    – Vanilla virsh approach.
    – oVirt 3.5 LiveCD. However it failed in the install to HDD case.

    I have used your two articles on oVirt to successfully set up an oVirt 3.5 AllInOne install and am trying to run VMware ESXi 5.5 Update 2 on it. Thanks for such a simple explanation; I was able to get my setup up and running in about a couple of hours.

    My Setup –
    Biostar J1900NH2 MB (CPU Intel J1900), 16 GB DDR3, 2 TB HDD. Low cost and low heat mini ITX board.
    oVirt 3.5 – All in one install
    Host OS – Fedora Core 20 with all updates applied.
    QEMU/KVM patches – Installed the pre-compiled patch RPMs from your website
    All config changes and vdsm hooks also applied.

    ESXI 5.5 Update 2 installs fine and I get an initial splash screen
    – oVirt Node
    – Core i7 Nehalem
    However during the install process it did warn about Hardware virtualization not supported. Did you face any similar issue?

    Checks I have done:
    1. Biostar bios has VT-X enabled.
    2. Host OS FC20 – cat /proc/cpuinfo | egrep -i "vmx|ept" displays both ept and vmx
    3. FC20 guest VM under the same oVirt – cat /proc/cpuinfo | egrep -i "vmx|ept" displays both ept and vmx
    So to me it seems that oVirt changes are doing their job properly

    Checks I have done on the ESXI – (SSH to the ESXI shell)
    1. # esxcfg-info | grep "HV Support" – Display value of '1', indicating VT might be available but not supported for this hardware.
    0 – VT/AMD-V indicates that support is not available for this hardware.
    1 – VT/AMD-V indicates that VT or AMD-V might be available but it is not supported for this hardware.
    2 – VT/AMD-V indicates that VT or AMD-V is available but is currently not enabled in the BIOS.
    3 – VT/AMD-V indicates that VT or AMD-V is enabled in the BIOS and can be used.
    Wondering if you had any suggestions on resolving this.

    Thanks and regards
    – Ashok

  2. Ran an additional check: installed another guest VM on oVirt using the ESXI CPUINFO iso.
    The output shows “Supports 64-bit VMWare”: NO (BIOS features may enable)

    Does this mean ESXI can be fooled into thinking it is a compatible CPU by tweaking oVirt settings
    a. Changing identification of CPU to something other than Nehalem
    b. Changing BIOS from default Seabios of KVM/QEMU to something else

    Any tips much appreciated.

  3. One thing I noticed and forgot to include was the VMware host configuration for nested virtualization; could you try this on your ESXi host:

    /etc/vmware/config
    hv.assumeEnabled = "TRUE"
    vmx.allowNested = "TRUE"

    Reboot your host after making the change, this should emulate the missing features that KVM is lacking.

    Unfortunately my setup has been removed. I’ll see if I can re-create it and verify everything.

  4. Tried this out on Ovirt 3.5 – current release – in an all in one build. Setup for nesting and am running RDO Openstack Kilo all in one as another VM. So nesting is working there.

    Followed the steps except my qemu is already updated to a higher version.

    Tested 5.0 and 5.5.1 ESXi and all of them PSOD. Any other hints, or is there a new way?

    cheers

  5. Are you able to run 64-bit VMs in the nested ESXi? I’m trying to run ESXi 6.0 under Qemu 2.4.0. I got it to install, although it displayed a warning re. lack of hardware virtualization. I have the properties set in /etc/vmware/config, but when I try to start a 64-bit VM, I get an error:

    This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible.

    This host supports Intel VT-x, but the Intel VT-x implementation is incompatible with VMware ESX.

  6. Hi Gleb,

    This should be possible. The problem we have is that KVM does not fully implement all the required features for VT-x, so you must get ESXi to emulate them instead.

    If you rebooted your ESXi host after applying the changes to /etc/vmware/config that should have done it, but I have not tried this with ESXi 6.0.

    When I get time I am going to re-write this entire article to work properly with CentOS 7 and ESXi 6.0.

    Thanks,
    Ben

  7. Hi! I tried this and I am getting an error when I power up an ESXi VM. This error only occurs when I use “true” in the vmwarehost hook in the custom properties. Here is the error:

    VM ESXi_01 is down with error. Exit message: internal error: process exited while connecting to monitor: 2016-07-27T01:41:25.560159Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15
    2016-07-27T01:41:25.560287Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
    2016-07-27T01:41:25.610477Z qemu-kvm: -device vmxnet3,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:54,bus=pci.0,addr=0x3: ‘vmxnet3’ is not a valid device model name

    If I remove the vmwarehost custom property the VM will boot, but of course the ESXi installer complains of not having any NICs installed.

    I am using oVirt 4.0 on a RHEL7.2 host.

    • Hello Lynn,

      Not tried this on oVirt 4.0 and RHEL 7.2, but this looks like the qemu version you are using might not support the vmxnet3 device.

      Have you tried using version 1.6.x or higher?

      Thanks,
      Ben
