Welcome to Hancock’s VMware Half Hour! This is the Full Monty Version, the MEGA Full Movie on configuring and installing VMware vSphere Hypervisor ESXi ARM 8.0.3b on a Raspberry Pi Compute Module 4. The CM4 is installed in a Turing Pi v2 Mini ITX Clusterboard, delivering a compact and powerful platform for ARM virtualization.
In this 1 hour and 19-minute guide, I’ll take you step-by-step through every detail, covering:
- Demonstrating Raspberry Pi OS 64-bit booting on the CM4.
- Creating and installing the ESXi ARM UEFI boot image.
- Configuring iSCSI storage using a Synology NAS.
- Setting up ESXi ARM with licensing, NTP, and NFS storage.
- A full walkthrough of PXE booting and TFTP configuration.
- Netbooting the CM4 and finalizing the ESXi ARM environment.
- Flashing the Turing Pi v2 BMC firmware.
- Replacing the self-signed Turing Pi v2 SSL certificate with a certificate from Microsoft Certificate Services.
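Once ESXi ARM is up, the storage and NTP steps in the list above boil down to a handful of esxcli commands. The sketch below is illustrative only: the adapter name (vmhba64), the NAS address (192.168.0.50), the NFS export path, and the datastore name are assumptions for a lab like this one, not values taken from the video.

```shell
# Enable the software iSCSI adapter and point it at the Synology target
# (vmhba64 and 192.168.0.50 are example values - check your own host/NAS)
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.0.50:3260
esxcli iscsi adapter discovery rediscover --adapter=vmhba64
esxcli storage core adapter rescan --adapter=vmhba64

# Enable NTP so the host keeps accurate time
esxcli system ntp set --server=pool.ntp.org --enabled=true

# Mount an NFS export from the NAS as a datastore (example export path)
esxcli storage nfs add --host=192.168.0.50 --share=/volume1/esxi --volume-name=nfs-datastore01
```

After the rescan, the discovered iSCSI LUN can be formatted as a VMFS datastore from the ESXi Host Client.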
As an avid enthusiast of VMware’s innovations, I’m diving headfirst into the ESXi ARM Fling v2.0, which is built on the robust VMware vSphere Hypervisor ESXi 8.0.3b codebase. The ARM architecture has always intrigued me, and with this latest version, VMware has pushed the boundaries of what’s possible with ESXi on ARM devices. It’s a playground full of potential for anyone experimenting with lightweight, power-efficient infrastructures.
The Turing Pi Journey
After much anticipation, my Turing Pi Mini ITX boards have arrived! These boards are compatible with the Raspberry Pi Compute Module 4, offering a modular, scalable setup perfect for ARM experimentation. With a few Compute Module 4s ready to go, I’m eager to bring this setup to life. However, finding a suitable case for the Turing Pi board has proven to be a bit of a challenge.
Case Conundrum
While Turing Pi has announced an official ITX case for their boards, it’s currently on preorder and comes with a hefty price tag. For now, I’ve decided to go with a practical and versatile option: the Streamcom Mini ITX OpenBench case. Its open-frame design is functional, and it’ll keep the board accessible during testing and configuration.
I’m also considering crafting my own custom case. Using laser-cut wood or acrylic is an appealing option, offering the opportunity to create something unique and tailored to my specific requirements. But for now, the OpenBench case will do nicely as I explore the ESXi ARM Fling.
Why ESXi ARM Fling v2.0?
The ESXi ARM Fling project is an exciting venture for anyone who loves to experiment with virtualization. Running ESXi on ARM hardware offers several advantages:
Energy efficiency: ARM boards consume far less power compared to traditional x86 systems.
Cost-effectiveness: Affordable hardware like the Raspberry Pi Compute Module 4 makes it accessible to a wider audience.
Flexibility: The compact form factor of ARM devices is ideal for edge computing, IoT, or even small-scale home labs.
The v2.0 update introduces enhanced support, better performance, and bug fixes, making it an excellent choice for exploring the ARM ecosystem.
What’s Next?
With the hardware in hand and the ESXi ARM Fling v2.0 ready to install, I’m planning to dive into:
Setting up and configuring the Turing Pi board with ESXi.
Testing the system’s stability, performance, and scalability using multiple Raspberry Pi Compute Modules.
Exploring practical use cases, such as lightweight Kubernetes clusters or edge computing applications.
I’ll share updates on the build process, challenges, and performance insights in future posts. For now, I’m excited to get started and see what this setup can achieve.
Stay tuned for more! If you’ve experimented with the ESXi ARM Fling or have tips for working with the Turing Pi board, I’d love to hear from you.
This video was created in response to Experts Exchange members asking the question “have I compromised my ESXi host by adding it to AD?”
In this video presentation, which is part of Hancock’s VMware Half Hour, I will show you HOW TO: Check whether you have compromised your VMware ESXi 8.0 hosts by adding them to Microsoft Active Directory.
In this video demonstration the ESXi servers are running ESXi 8.0.3, which includes the “fix” detailed below:
Secure Default Settings for ESXi Active Directory integration
To demonstrate the differences between a compromised and non-compromised server, I have deliberately changed the default settings on esxi002.cyrus-consultants.co.uk, so the server can be compromised.
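A quick way to check whether a host is still running the permissive defaults is to inspect the Active Directory admin-group settings from an ESXi shell. This is a hedged sketch: the option paths below are the ones referenced in the public CVE-2024-37085 guidance, and the hardened value shown is an assumption you should verify against Broadcom’s advisory for your exact build.

```shell
# Inspect the AD admin-group options abused in CVE-2024-37085
esxcli system settings advanced list -o /Config/HostAgent/plugins/hostsvc/esxAdminsGroup
esxcli system settings advanced list -o /Config/HostAgent/plugins/hostsvc/esxAdminsGroupAutoAdd

# Hardened setting: stop ESXi from automatically granting admin rights
# to members of the "ESX Admins" AD group (0 = disabled)
esxcli system settings advanced set -o /Config/HostAgent/plugins/hostsvc/esxAdminsGroupAutoAdd -i 0
```

If the listed values still match the old defaults, the host behaves like the deliberately weakened esxi002 in this demonstration.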
HOW NOT TO: Compromise your VMware vSphere Hypervisor ESXi 5.1, 5.5, 6.0, 6.5, 6.7, 7.0, 8.0 by adding to Microsoft Active Directory
On the 29th July 2024, Microsoft discovered a vulnerability in ESXi hypervisors being exploited by several ransomware operators to obtain full administrative permissions on domain-joined ESXi hypervisors.
The publication is here – https://www.microsoft.com/en-us/security/blog/2024/07/29/ransomware-operators-exploit-esxi-hypervisor-vulnerability-for-mass-encryption/
VMware vExpert Christian Mohn wrote about it here – VMware vSphere CVE-2024-37085 – A Nothing Burger
and Bob Plankers goes into more detail here – Thoughts on CVE-2024-37085 & VMSA-2024-0013
Please have a read of these publications.
Broadcom have issued updates and fixes to vSphere 7.0 and 8.0, and VCF 4.x and 5.x only. There is no security update for 6.7.
In this video presentation, which is part of Hancock’s VMware Half Hour, I will show you HOW TO: Fix the Synchronous Exception at 0x00000000XXXXXXX on VMware vSphere Hypervisor 7.0 (ESXi 7.0 ARM) on a Raspberry Pi 4.
It has been well documented that the Raspberry Pi 4 UEFI firmware image can cause this fault, which renders the UEFI boot image corrupt. See here https://github.com/pftf/RPi4/issues/97
The UEFI firmware image used in the lab in this video is v1.37. It is debated as to whether this has been fixed in v1.37 or later releases; some suggest rolling back to v1.33!
For the sake of continuity I’ve included previous EE Videos and Articles I’ve created here
In this video I’m going to show you HOW TO: Update the VMware vSphere Hypervisor 7.0 ARM Edition (ESXi 7.0 ARM edition) from v1.12 Build 7.0.0-1.12.21447677 to v1.15 Build 22949429 on a Raspberry Pi 4. The method used is based on this article and video.
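For reference, an ESXi-ARM point update of this kind is typically applied as an offline bundle from the host’s shell. The depot path and bundle file name below are placeholder examples, not the actual bundle used in the video; list the profiles first and substitute the real profile name.

```shell
# List the image profiles contained in the offline bundle
# (path and file name are examples - use your own datastore and bundle)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-ARM-offline-bundle.zip

# Apply the update using the profile name reported above, then reboot
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-ARM-offline-bundle.zip -p <profile-name>
reboot
```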
The Synchronous Exception at 0x0000000037101434 in the UEFI boot firmware v1.34 is still an issue today and has not been fixed. These are messages received on Twitter from the engineers who worked on ESXi ARM. v1.35 is the latest UEFI firmware, available from here
Andrei Warkentin (@WhatAintInside)
“yeah this is a long-standing SD card corruption bug… never quite identified, maybe some command needs to be done on the way out to flush internal card buffers before the loss of power?”
Cyprien Laplace (@cypou)
I think you only need to replace the “RPI_EFI.fd” file from the boot partition. I forgot this bug existed, as all my Pis download the UEFI files using tftp.
(thus no corruption possible, but no change can be saved either)
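Following Cyprien’s tip, recovery amounts to overwriting the corrupt RPI_EFI.fd on the SD card’s FAT boot partition with a fresh copy from the firmware release. A minimal sketch, assuming the card appears as /dev/sda1 on a Linux workstation and the firmware zip has already been extracted; both paths are examples, not values from the video.

```shell
# Mount the Pi's FAT32 boot partition from a card reader
# (/dev/sda1 is an example device node - confirm with lsblk first)
mkdir -p /mnt/rpi-boot
mount /dev/sda1 /mnt/rpi-boot

# Replace only the UEFI firmware/variable file; other boot files stay untouched
cp ~/Downloads/RPi4_UEFI_Firmware_v1.35/RPI_EFI.fd /mnt/rpi-boot/RPI_EFI.fd

umount /mnt/rpi-boot
```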
In this video presentation, which is part of the Hancock’s VMware Half Hour HOW TO Video Series, I will show you HOW TO: Create new Distributed and VMkernel Portgroups on a VMware vSphere Distributed Switch for the vSphere Cluster, for use with vCenter Server HA.
I created a video here, which shows you how to create a vDS for VMware vSphere.
In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO: Use the vCenter Server 7.0.3 vCenter Server Appliance Management Interface (VAMI) to backup the database and configuration of your vCenter Server.
Once you have created a vDS, it is important to keep regular backups in case you need to restore vCenter Server.
I created a video here, which shows you how to create a vDS for VMware vSphere.
In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.
VMware vSphere Distributed Switch (VDS) provides a centralized interface from which you can configure, monitor and administer virtual machine access switching for the entire data center. The VDS provides:
Simplified virtual machine network configuration
Enhanced network monitoring and troubleshooting capabilities
Support for advanced VMware vSphere networking features
As my 10GbE switch in this VMware vSphere lab has LACP functionality, I have decided to demonstrate how to configure the vDS for an LACP LAG. Link Aggregation Control Protocol (LACP) is one element of an IEEE specification (802.3ad) that provides guidance on the practice of link aggregation for data connections; it is used on trunks, or port channels, to bond two Ethernet ports together. It is only supported on a VMware vSphere Distributed Switch (VDS); it is not supported on a VMware vSphere Standard Switch (VSS).
This video covers the following
Creation of the VMware vSphere Distributed Switch (VDS).
Creation of Portgroups with VLANs for Management, vMotion and vSAN.
Creation of the LACP LAG.
Adding vDS to hosts in the vSphere Cluster.
Migration of existing VMKernel portgroups from VSS to VDS.
Testing the VMware vSphere Distributed Switch (VDS).
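After the migration, it is worth verifying from each host that the VDS, the LAG, and the VMkernel interfaces all look healthy. A hedged sketch of host-side checks; the vmk interface number and the peer IP address are placeholders, not values from this lab.

```shell
# Confirm the host has joined the VDS and sees its uplinks
esxcli network vswitch dvs vmware list

# Confirm the LACP LAG has formed with the physical switch
esxcli network vswitch dvs vmware lacp status get

# Confirm the VMkernel ports landed on the new distributed portgroups
esxcli network ip interface list

# Test reachability over a migrated vmk port (placeholder interface and peer IP)
vmkping -I vmk1 192.168.10.12
```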
If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the VMware Cloud Foundation Design Guide 01 JUN 2023 – this should be regarded as The Bible!
In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you how to change the LBA sector size of storage media to make it compatible with VMware vSphere Hypervisor ESXi 7.0 and ESXi 8.0.
Only an LBA sector size of 512 bytes is compatible with VMware vSphere Hypervisor ESXi 7.0 and ESXi 8.0.
In this video we use an Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™, but this procedure can be used to change the LBA format of any storage media: SSD, HDD, or NVMe.
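For NVMe devices, the same reformat can be sketched on a Linux machine with the open-source nvme-cli tool; /dev/nvme0n1 and LBA format index 0 are examples, so confirm the correct index from the id-ns output before formatting. WARNING: a format destroys all data on the drive.

```shell
# List the LBA formats the namespace supports; pick the one with "Data Size: 512"
nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"

# Reformat the namespace to the 512-byte LBA format (index 0 here as an example)
nvme format /dev/nvme0n1 --lbaf=0
```

Once the drive reports 512-byte sectors, ESXi will detect it and allow a VMFS datastore to be created on it.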