Posts Tagged ‘vSphere’

Minisforum MS-A2 HOW TO: Install the NEW Realtek driver on ESXi 9.0

Wednesday, January 14th, 2026

Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 9.0

Running VMware ESXi 9.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.

In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.


What This Guide Covers

  • Why Realtek NICs are limited by default on ESXi 9.0
  • Where to download the official Broadcom Fling driver
  • Installing the driver using esxcli
  • Rebooting safely and verifying NIC availability

Supported Realtek Network Adapters

The driver demonstrated in this guide supports the following Realtek PCIe devices:

  • RTL8111 – 1GbE
  • RTL8125 – 2.5GbE
  • RTL8126 – 5GbE
  • RTL8127 – 10GbE

Driver Installation Command

Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using:

esxcli software component apply -d /vmfs/volumes/<datastore>/VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip

Note that esxcli expects the absolute path to the ZIP, so substitute <datastore> with the name of the datastore you copied the file to.

After installation, a reboot is required for the new network interfaces to become available.
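
If you want to double-check from an SSH session after the reboot, the commands below are a minimal sketch (the exact component name reported by esxcli may differ slightly from the ZIP file name):

lspci | grep -i realtek           # the Realtek PCIe devices should be listed
esxcli network nic list           # the new vmnic interfaces should now appear
esxcli software component list    # the Realtek component should show as installed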


Video Chapters

00:00 - Start
00:03 - Welcome to Hancock's VMware Half Hour
00:37 - Today's video - HOW TO: Install the Realtek driver on ESXi 9.0
00:55 - Broadcom Released the Realtek Driver fling in November 2025
01:55 - Minisforum MS-A2 - VCF 9.0 Homelab of the Year 2025!
02:26 - Available as a Broadcom Fling - Tech Preview - not for production
02:55 - I'm not a fan of Realtek let it be known!
03:11 - Go to Broadcom Fling Portal site to download - https://support.broadcom.com/group/ecx/productdownloads?subfamily=Flings&freeDownloads=true
03:22 - Download the driver - don't forget to accept the agreement!
03:51 - Enable SSH on Host, and use WinSCP to copy to local datastore
04:31 - Whoops, Zoom is broken again!
05:07 - Connect to host using SSH
05:22 - Use lspci to show PCI devices in the host
06:05 - Use grep - lspci | grep Realtek
07:01 - Install the driver using esxcli software component apply -d /VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip
07:59 - A reboot is required, reboot the server
08:36 - Reboot server
09:20 - The reason for the 10th outtake!
10:01 - Login to ESXi 9.0 using HTML Client
10:51 - Realtek driver is installed and network interfaces are available for use
11:07 - Henrychan1973, this video is for you!
12:23 - Thanks for Watching

Final Thoughts

This Broadcom Fling makes ESXi 9.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.

Huge thanks to Henrychan1973 for their contribution and support.

If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.

– Andrew Hancock
Hancock’s VMware Half Hour

Minisforum MS-A2 HOW TO: Install the NEW Realtek driver on ESXi 8.0

Sunday, January 11th, 2026


Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 8.0

Running VMware ESXi 8.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.

In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.


What This Guide Covers

  • Why Realtek NICs are limited by default on ESXi 8.0
  • Where to download the official Broadcom Fling driver
  • Installing the driver using esxcli
  • Rebooting safely and verifying NIC availability

Supported Realtek Network Adapters

The driver demonstrated in this guide supports the following Realtek PCIe devices:

  • RTL8111 – 1GbE
  • RTL8125 – 2.5GbE
  • RTL8126 – 5GbE
  • RTL8127 – 10GbE

Driver Installation Command

Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using:

esxcli software component apply -d /vmfs/volumes/<datastore>/VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip

Note that esxcli expects the absolute path to the ZIP, so substitute <datastore> with the name of the datastore you copied the file to.

After installation, a reboot is required for the new network interfaces to become available.
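
If you prefer to handle maintenance mode from the same SSH session rather than the HTML client, this is a minimal sketch (on a standalone host, power off or suspend any running VMs first):

esxcli system maintenanceMode set --enable true     # enter maintenance mode before installing
esxcli system maintenanceMode get                   # confirm the host state
esxcli system maintenanceMode set --enable false    # exit after the post-install reboot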


Video Chapters

00:00 - Intro
00:06 - Welcome to Hancock's VMware Half Hour
00:31 - Today’s Video – Minisforum MS-A2
01:01 - Installing the ESXi Realtek Driver for ESXi 8.0
01:16 - Shoutout to member Henrychan1973!
02:03 - HTML Client view of network interfaces
03:00 - Broadcom engineering compiled a driver for ESXi 8.0
04:00 - Driver is available as a Broadcom Fling
05:00 - Download the driver from Broadcom Fling portal
05:44 - WinSCP – Copy driver ZIP to ESXi datastore
06:14 - Put host into maintenance mode
07:11 - Only three interfaces supported out of the box on MS-A2
07:16 - Start an SSH session using PuTTY
07:34 - Using lspci | grep Realtek
08:22 - Supported Realtek PCIe devices
08:35 - Installing the driver using esxcli
09:59 - Whoops! Typo!
10:37 - Can you spot it?
11:08 - Driver installed – reboot required
11:27 - Nano KVM issue accepting root password?
11:41 - Reboot via the GUI
12:30 - MS-A2 restarting
13:42 - Driver installed and Realtek interfaces available
14:54 - Thanks to Henrychan1973!
15:15 - Thanks for watching

Final Thoughts

This Broadcom Fling makes ESXi 8.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.

Huge thanks to Henrychan1973!!!

If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.

– Andrew Hancock
Hancock’s VMware Half Hour

Part 7: DIY UNRAID NAS “BY THE POWER OF UNRAID” THE SECRET REVEALED

Saturday, December 6th, 2025

By The Power Of UnRAID – The Secret Reveal Of ESXi And Windows 11 VMs

For the last few episodes of Hancock’s VMware Half Hour, we have been quietly building something a little different.
On the surface it looked like a simple DIY UNRAID NAS project and a couple of Windows 11 P2V demonstrations.
In reality, everything was running inside virtual machines on an UnRAID host.

In Part 7 of the DIY UNRAID NAS series, we finally pull back the curtain and reveal what has really been powering the lab:
UnRAID running nested ESXi and Windows 11 VMs, complete with PCI passthrough.
This post walks through the idea behind the episode, how it ties back to earlier parts, and why I keep saying,
“By the power of UnRAID.”

Recap: Parts 6, 100 and 101

If you have been following along you will have seen:

  • Part 6 – Installing and testing Samsung 990 PRO NVMe drives in the Intel NUC based NAS.
  • Part 100 – Performing P2V migrations of Windows 11 systems.
  • Part 101 – Continuing the Windows 11 P2V work and refining the process.

In those episodes the star of the show appeared to be a physical Windows 11 machine and a separate ESXi host called ESXi052.
In Part 7 we reveal that this was deliberately misleading. Both the Windows 11 system and the ESXi host were in fact virtual machines.

The Secret: Everything Was A Virtual Machine

Part 7 opens by jumping back to those previous episodes and then revealing the twist:

  • The “physical” Windows 11 machine you saw on screen was actually a Windows 11 VM.
  • The ESXi host ESXi052 that we used for P2V work was also a VM.
  • The same VM was used in Part 6 when we installed and tested the NVMe drives.

In other words, the entire recent run of content has been driven by virtual machines on UnRAID.
The NVMe upgrades, the Windows 11 P2Vs, and the ESXi demonstrations were all happening inside VMs, not on bare metal.

Windows 11 With PCI Passthrough

One of the key enabling features in this setup is PCI passthrough on UnRAID.
By passing through hardware devices such as NVMe controllers or GPUs directly into a Windows 11 VM,
we can test and demonstrate “bare metal like” performance while still keeping everything virtual.

In the video we show Windows 11 running with PCI passthrough on UnRAID, giving the VM direct access to the hardware.
This is ideal for lab work, testing, and for scenarios where you want to push a homelab system without dedicating separate physical machines.
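
As a rough illustration of what UnRAID is doing under the hood: a device is identified by its [vendor:device] IDs and reserved for the vfio-pci driver at boot, so the VM can claim it directly. The IDs below are placeholders, not the actual hardware in this build; recent UnRAID releases can also bind devices for you from Tools > System Devices.

lspci -nn | grep -i nvme
# Reserve the device for vfio-pci by extending the "append" line in
# /boot/syslinux/syslinux.cfg (IDs are illustrative):
#   append vfio-pci.ids=144d:a80c initrd=/bzroot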

Nested ESXi 8.0 On UnRAID

The next part of the reveal is nested virtualization.
UnRAID is hosting a VMware vSphere Hypervisor ESXi 8.0 VM which in turn can run its own VMs.
This gives an incredibly flexible environment:

  • UnRAID manages the storage, cache, parity and core virtual machine scheduling.
  • ESXi runs nested on top for VMware specific testing and lab work.
  • Windows 11 runs as another VM on the same UnRAID host, with PCI passthrough as needed.

With this approach a single Intel NUC based NAS can simulate a much larger lab
while still being compact and power efficient.
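
For the nested ESXi layer, the key requirement is that KVM nested virtualization is enabled on the Intel NUC and that the VM exposes the host CPU, so ESXi sees VT-x. A quick sanity check from the UnRAID shell (a sketch, not the exact steps from the episode):

cat /sys/module/kvm_intel/parameters/nested    # should report Y (or 1)
# In the ESXi VM's libvirt XML, pass the host CPU straight through:
#   <cpu mode='host-passthrough' check='none'/>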

By The Power Of UnRAID

To celebrate the reveal I created a fun meme inspired by the classic “By the power of Grayskull” scene.
In our version, “By the power of UnRAID” raises ESXi and Windows 11 high above the NUC,
showing that UnRAID is the platform empowering the whole setup.

Whether you are running nested ESXi, Windows 11 with PCI passthrough, or a mixture of containers and VMs,
UnRAID makes it straightforward to combine storage flexibility with powerful virtualization features.

The Power Of UnRAID In The Homelab

The big takeaway from Part 7 is simple:

  • A single UnRAID host can consolidate multiple roles: NAS, hypervisor, and workstation.
  • You can experiment with ESXi 8.0, Windows 11, and PCI passthrough without building a large rack of servers.
  • By keeping everything virtual you gain snapshots, flexibility, and the ability to rebuild or clone systems quickly.

For homelab enthusiasts, students, and anyone who wants to learn VMware or Windows 11 in depth,
this approach offers a lot of power in a very small footprint.

Watch The Episode

If you want to see the full walkthrough, including the moment the secret is revealed,
watch Part 7 of the DIY UNRAID NAS series on Hancock’s VMware Half Hour.
You will see exactly how the Windows 11 VM, the nested ESXi host, and UnRAID all fit together.

Conclusion

Part 7 closes the loop on a long running lab story.
What looked like separate physical systems were really virtual machines,
carefully layered on top of an UnRAID powered NAS.
By the power of UnRAID, we have been able to demonstrate NVMe upgrades, Windows 11 P2Vs, and ESXi 8.0 testing
all on a single compact platform.

If you are planning a new homelab or want to refresh an existing one,
consider what UnRAID plus nested ESXi and Windows 11 VMs could do for you.

HOW TO: Synchronize Changes in a Linux P2V with VMware vCenter Converter Standalone 9.0 (Part 101)

Thursday, November 27th, 2025

If you’ve ever attempted a P2V migration using VMware vCenter Converter Standalone 9.0, you’ll know that the product can be as unpredictable as a British summer. One minute everything looks fine, the next minute you’re stuck at 91%, the Helper VM has thrown a wobbly, and the Estimated Time Remaining has declared itself fictional.

And yet… when it works, it really works.

This post is the follow-up to Part 100: HOW TO: P2V a Linux Ubuntu PC, where I walked through the seed conversion. In Part 101, I push things further and demonstrate how to synchronize changes — a feature newly introduced for Linux sources in Converter 9.0.

I won’t sugar-coat it: recording this episode took over 60 hours, spread across five days, with 22 hours of raw footage just to create a 32-minute usable video. Multiple conversion attempts failed, sequences broke, the change tracker stalled, and several recordings had to be completely redone. But I was determined to prove that the feature does work — and with enough perseverance, patience, and the power of video editing, the final demonstration shows a successful, validated P2V Sync Changes workflow.


Why Sync Changes Matters

Traditionally, a P2V conversion requires a maintenance window or downtime. After the initial seed conversion, any new data written to the source must be copied over manually, or the source must be frozen until cutover.

Converter 9.0 introduces a long-requested feature for Linux environments:

Synchronize Changes

This allows you to:

  • Perform an initial seed P2V conversion

  • Keep the source machine running

  • Replicate only the delta changes

  • Validate the final migration before cutover

It’s not quite Continuous Replication, but it’s closer than we’ve ever had from VMware’s free tooling.


Behind the Scenes: The Reality of Converter 9.0

Converter 9.0 is still fairly new, and “quirky” is an understatement.

Some observations from extensive hands-on testing:

  • The Helper VM can misbehave, especially around networking

  • At 91%, the Linux change tracker often stalls

  • The job status can report errors even though the sync completes

  • Estimated Time Remaining is not to be trusted

  • Each sync job creates a snapshot on the destination VM

  • Converter uses rsync under the hood for Linux sync

Despite all this, syncing does work — it’s just not a single-click process.


Step-by-Step Overview

Here’s the condensed version of the procedure shown in the video:

  1. Start a seed conversion (see Part 100).

  2. Once complete, use SSH on the source to prepare a 10GB test file for replication testing (see the command sketch after this list).

  3. Run an MD5 checksum on the source file.

  4. Select Synchronize Changes in Converter.

  5. Let the sync job run — and don’t panic at the 91% pause.

  6. Review any warnings or errors.

  7. Perform a final synchronization before cutover.

  8. Power off the source, power on the destination VM.

  9. Verify the replicated file using MD5 checksum on the destination.

  10. Celebrate when the checksums match — Q.E.D!
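
For reference, steps 2, 3 and 9 boil down to something like the following (the file name and path are illustrative, not the exact ones used in the video):

# On the source machine, create a 10GB test file and checksum it:
dd if=/dev/urandom of=~/sync-test.bin bs=1M count=10240
md5sum ~/sync-test.bin
# After the final synchronization, on the destination VM:
md5sum ~/sync-test.bin    # the two checksums should match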


Proof of Success

In the final verification during filming:

  • A 10GB file was replicated

  • Both source and destination MD5 checksums matched

  • The Linux VM booted cleanly

  • Snapshot consolidation completed properly

Despite five days of interruptions, failed jobs, and recording challenges, the outcome was a successful, consistent P2V migration using Sync Changes.


Watch the Full Video (Part 101)

If you want to see the whole process — the setup, the problems, the explanations, the rsync behaviour, and the final success — the full video is now live on my YouTube channel:

Part 101: HOW TO: Synchronize Changes using VMware vCenter Converter Standalone 9.0

If you missed the previous part, you can catch up here:
Part 100: HOW TO: P2V a Linux Ubuntu PC Using VMware vCenter Converter Standalone 9.0


Final Thoughts

This video was one of the most challenging pieces of content I’ve created. But the end result is something I’m genuinely proud of — a real-world demonstration of a feature that many administrators will rely on during migrations, especially in environments where downtime is limited.

Converter 9.0 may still have rough edges, but with patience, persistence, and a bit of luck, it delivers.

Thanks for reading — and as always, thank you for supporting Andysworld!
Don’t forget to like, share, or comment if you found this useful.

Part 4: DIY UnRAID NAS – Insert new 10GbE NIC

Saturday, November 22nd, 2025

DIY UnRAID NAS Build – Part 4: Installing a 10GbE Intel X710-DA NIC (Plus an Outtake!)

Welcome back to another instalment of my DIY UnRAID NAS Build series.
If you have been following along, you will know this project is built around an Intel NUC chassis that I have been carefully (and repeatedly!) taking apart to transform into a compact but powerful UnRAID server.

In Part 4, we move on to a major upgrade: installing a 10GbE Intel X710-DA network interface card. And yes, the eagle-eyed among you will notice something unusual at the beginning of the video, because this episode starts with a blooper. I left it in for your entertainment.


A Fun Outtake to Start With

Right from the intro, things get a little chaotic. There is also a mysterious soundtrack playing, and I still do not know where it came from.
If you can identify it, feel free to drop a comment on the video.


Tearing Down the Intel NUC Again

To install the X710-DA NIC, the NUC requires almost complete disassembly:

  • Remove the back plate
  • Remove the backplane retainer
  • Take off the side panels
  • Open the case
  • Remove the blanking plate
  • Prepare the internal slot area

This NUC has become surprisingly modular after taking it apart so many times, but it still puts up a fight occasionally.


Installing the Intel X710-DA 10GbE NIC

Once the case is stripped down, the NIC finally slides into place. It is a tight fit, but the X710-DA is a superb card for a NAS build:

  • Dual SFP+ ports
  • Excellent driver support
  • Great performance in VMware, Linux, and Windows
  • Ideal for high-speed file transfers and VM workloads

If you are building a NAS that needs to move data quickly between systems, this NIC is a great option.


Reassembly

Next, everything goes back together:

  • Side panels reinstalled
  • Back plate fitted
  • Case secured
  • System ready for testing

You would think after doing this several times I would be quicker at it, but the NUC still has a few surprises waiting.


Booting into Windows 11 and Driver Issues

Once everything is reassembled, the NUC boots into Windows 11, and immediately there is a warning:

Intel X710-DA: Not Present

Device Manager confirms it. Windows detects that something is installed, but it does not know what it is.

Time to visit the Intel website, download the correct driver bundle, extract it, and install the drivers manually.

After a reboot, success. The NIC appears correctly and is fully functional.


Why 10GbE

For UnRAID, 10GbE significantly improves:

  • VM migrations
  • iSCSI and NFS performance
  • File transfers
  • Backup times
  • SMB throughput for Windows and macOS clients

It also future-proofs the NAS for whatever network upgrades come next.


The Mystery Soundtrack

Towards the end of the video I ask again: what is the music playing in the background?
I genuinely have no idea, so if you recognise it, please leave a comment on the video.


Watch the Episode

You can watch the full episode, including all teardown steps, NIC installation, Windows troubleshooting, and the blooper, here:


Thank You for Watching and Reading

Thank you for following along with this NAS build.
Part 5 will continue the series, so stay tuned.

If you have built your own UnRAID NAS or have a favourite NIC for homelab projects, feel free to comment and share your experience.

Regards,
Andy

Minisforum MS-A2: Can it Run ESXi 8.0.3g? Minisforum MS-A2 Series Part 10 Ultimate #homelab

Saturday, August 9th, 2025

Can the powerful Minisforum MS-A2 run VMware vSphere 8.0?
In Part 10 of the Ultimate #homelab series, we put this compact beast to the test by installing VMware vSphere Hypervisor ESXi 8.0.3g and seeing how it performs. From BIOS setup to creating a demo virtual machine, this episode covers the full journey.

What’s Inside This Video:

  • Installing ESXi 8.0.3g on the Minisforum MS-A2
  • BIOS configuration & USB boot with Ventoy
  • Full ESXi setup walkthrough
  • Creating & running a test VM
  • Enabling NVMe Memory Tiering with NVMe namespaces (see the command sketch below)
  • Checking performance and confirming a successful install

If you’ve been wondering whether the MS-A2 can handle serious VMware workloads in a home lab, this is the episode to watch!
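
For the Memory Tiering step, the commands below are a minimal sketch based on the generally documented ESXi 8.0.3 procedure; the NVMe device path and the tiering percentage are illustrative, so check your own device name with esxcli storage core device list first:

esxcli system settings kernel set -s MemoryTiering -v TRUE
esxcli system tierdevice create -d /vmfs/devices/disks/<your_NVMe_device_id>
esxcli system settings advanced set -o /Mem/TierNvmePct -i 25    # NVMe tier sized as a % of DRAM
# A reboot is required before the tier device becomes active.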

Minisforum MS-A2 – The Ultimate #Homelab Server for VMware vSphere, VVF, and VCF?

Monday, June 30th, 2025

Lately, it feels like every VMware vExpert has been posting photos of their compact lab servers — and I’ll be honest, I was starting to feel left out.

So, I joined the club.

I picked up the new Minisforum MS-A2, and I’ve not looked back. This isn’t just another NUC alternative — it’s a serious powerhouse in a tiny chassis, perfect for VMware enthusiasts building or upgrading their vSphere, VVF, or VCF test environments.

Let’s dig into what makes this little beast a perfect addition to any #homelab setup in 2025.

Hardware Highlights – Not Your Average Mini PC
The MS-A2 isn’t just punching above its weight — it’s redefining what’s possible in a compact lab node.

Key Specs:

  • CPU: AMD Ryzen™ 9 9955HX – 16 cores / 32 threads of Zen 5 power
  • Memory: Dual DDR5-5600MHz SODIMM slots – up to 96GB officially, but…
  • Storage: 3× M.2 PCIe 4.0 slots (22110 supported), plus U.2 NVMe support – great for enterprise-grade flash
  • Networking: Dual 10Gbps SFP+ LAN, dual 2.5GbE RJ45 ports, and Wi-Fi 6E + Bluetooth 5.3 (going to replace this with more NVMe storage!)
  • Expansion: Built-in PCIe x16 slot (supports split mode – ideal for GPUs, HBAs, or NICs)

This is homelab gold. It gives you the raw compute of a full rack server, the storage flexibility of a SAN box, and the network fabric of a modern datacenter — all under 2L in size.

How I Configured Mine – still sealed in box as I write – video incoming!
I purchased mine barebones from Amazon, and — as of writing — it’s still sealed in the box. Why? I’m waiting for all the parts to arrive.

Most importantly, I’ll be upgrading it with:
128GB of Crucial DDR5-5600 SODIMMs (2×64GB) — pushing beyond the official spec to see just how much performance this little box can handle.

Once everything’s here, I’ll be unboxing and assembling it live on a future episode of Hancock’s VMware Half Hour. Stay tuned if you want a front-row seat to the full setup, testing, and VMware lab deployment.

Perfect for VMware Labs: vSphere 8/9, VVF, and VCF
Whether you’re testing ESXi on bare metal or running full nested labs, this spec ticks every box.

ESXi Bare Metal Capable
The Ryzen 9 9955HX and AMD chipset boot vSphere 8.0U2 and 9.0 Tech Preview cleanly with minimal tweaks. Use community networking drivers or USB NIC injectors if needed.

VVF / VCF in a Box
If you’re exploring VMware vSphere Foundation (VVF) or want a self-contained VCF lab for learning:

  • 16C/32T lets you run nested 3-node ESXi clusters + vCenter + NSX-T comfortably
  • 128GB RAM gives breathing room for resource-heavy components like SDDC Manager
  • PCIe 4.0 + U.2 = blazing fast vSAN storage
  • Dual 10Gb SFP+ = NSX-T overlay performance lab-ready

Community Validation – I Was Late to the Party
Fellow vExpert Daniel Krieger was ahead of the curve — writing about the MS-A2 months ago in his excellent blog post here:
sdn-warrior.org/posts/ms-a2

Then vExpert William Lam added his voice to the conversation with a guide to running VMware Cloud Foundation (VCF) on the MS-A2:
williamlam.com/2025/06/vmware-cloud-foundation-vcf-on-minisforum-ms-a2.html

Seeing both of them validate the MS-A2 pushed me over the edge — and I’m glad I jumped in.

Setup Tips (Soon!)
Once the unboxing is done, I’ll share:

  • BIOS tweaks: SVM, IOMMU, PCIe bifurcation
  • NIC setup for ESXi USB fling and 10GbE DAC
  • Storage layout for vSAN and U.2/NVMe configs
  • Full nested VCF/VVF deployment guide

Considerations
  • Still not officially on the VMware HCL — but community-tested
  • Ryzen platform lacks ECC memory — standard for most mini-PC builds
  • PCI passthrough needs thoughtful planning for IOMMU groupings (see the snippet after this list)
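
To see how devices are grouped before committing to a passthrough layout, the standard IOMMU enumeration snippet works from any Linux environment booted on the box (a sketch, assuming IOMMU is already enabled in the BIOS):

#!/bin/bash
# Print every IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done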

Ideal Use Cases
  • Nested ESXi, vSAN, vCenter, NSX labs
  • VVF deployment simulations
  • VCF lifecycle manager testing
  • Tanzu Kubernetes Grid
  • NSX-T Edge simulations on 10GbE
  • GPU or high-speed NIC via PCIe slot for advanced lab scenarios

Final Thoughts
The Minisforum MS-A2 with Ryzen 9 9955HX is a serious contender for the best compact homelab system of 2025. Whether you’re diving into vSphere 9, experimenting with VVF, or simulating a full VCF environment, this mini server brings serious firepower.

It may still be in the box for now — but soon, it’ll be front and center on Hancock’s VMware Half Hour, ready to power the next chapter of my lab.

Join the Conversation
Got an MS-A2 or similar mini-monster? Share your specs, test results, or VMware experience — and tag it:

#VMware #vSphere #VCF #VVF #homelab #MinisforumMSA2 #10GbE #vExpert

HOW TO: Configure & Install VMware ESXi ARM 8.0.3b on Raspberry Pi CM4 installed on a Turing Pi v2 Mini ITX Clusterboard | FULL MEGA GUIDE

Tuesday, December 3rd, 2024

Welcome to Hancock’s VMware Half Hour! This is the Full Monty Version, the MEGA Full Movie on configuring and installing VMware vSphere Hypervisor ESXi ARM 8.0.3b on a Raspberry Pi Compute Module 4. The CM4 is installed in a Turing Pi v2 Mini ITX Clusterboard, delivering a compact and powerful platform for ARM virtualization.

In this 1 hour and 19-minute guide, I’ll take you step-by-step through every detail, covering:

  • Demonstrating Raspberry Pi OS 64-bit booting on the CM4.
  • Creating and installing the ESXi ARM UEFI boot image.
  • Configuring iSCSI storage using a Synology NAS (see the sketch after this list).
  • Setting up ESXi ARM with licensing, NTP, and NFS storage.
  • A full walkthrough of PXE booting and TFTP configuration.
  • Netbooting the CM4 and finalizing the ESXi ARM environment.
  • Flashing the BMC firmware.
  • Replacing the self-signed Turing Pi v2 SSL certificate with a certificate from Microsoft Certificate Services.
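
As a taste of the iSCSI portion, enabling the software initiator and pointing it at the Synology target follows the usual esxcli pattern; the adapter name and IP address below are illustrative:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list                                      # note the software iSCSI vmhba name
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.0.50:3260
esxcli storage core adapter rescan --adapter vmhba64           # rescan to discover the Synology LUN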


Exploring ESXi ARM Fling v2.0 with the Turing Pi Mini ITX Board

Tuesday, November 26th, 2024

As an avid enthusiast of VMware’s innovations, I’m diving headfirst into the ESXi ARM Fling v2.0, which is built on the robust VMware vSphere Hypervisor ESXi 8.0.3b codebase. The ARM architecture has always intrigued me, and with this latest version, VMware has pushed the boundaries of what’s possible with ESXi on ARM devices. It’s a playground full of potential for anyone experimenting with lightweight, power-efficient infrastructures.

The Turing Pi Journey

After much anticipation, my Turing Pi Mini ITX boards have arrived! These boards are compatible with the Raspberry Pi Compute Module 4, offering a modular, scalable setup perfect for ARM experimentation. With a few Compute Module 4s ready to go, I’m eager to bring this setup to life. However, finding a suitable case for the Turing Pi board has proven to be a bit of a challenge.

Case Conundrum

While Turing Pi has announced an official ITX case for their boards, it’s currently on preorder and comes with a hefty price tag. For now, I’ve decided to go with a practical and versatile option: the Streacom Mini ITX OpenBench case. Its open-frame design is functional, and it’ll keep the board accessible during testing and configuration.

I’m also considering crafting my own custom case. Using laser-cut wood or acrylic is an appealing option, offering the opportunity to create something unique and tailored to my specific requirements. But for now, the OpenBench case will do nicely as I explore the ESXi ARM Fling.

Why ESXi ARM Fling v2.0?

The ESXi ARM Fling project is an exciting venture for anyone who loves to experiment with virtualization. Running ESXi on ARM hardware offers several advantages:

  • Energy efficiency: ARM boards consume far less power compared to traditional x86 systems.
  • Cost-effectiveness: Affordable hardware like the Raspberry Pi Compute Module 4 makes it accessible to a wider audience.
  • Flexibility: The compact form factor of ARM devices is ideal for edge computing, IoT, or even small-scale home labs.

The v2.0 update introduces enhanced support, better performance, and bug fixes, making it an excellent choice for exploring the ARM ecosystem.

What’s Next?

With the hardware in hand and the ESXi ARM Fling v2.0 ready to install, I’m planning to dive into:

  1. Setting up and configuring the Turing Pi board with ESXi.
  2. Testing the system’s stability, performance, and scalability using multiple Raspberry Pi Compute Modules.
  3. Exploring practical use cases, such as lightweight Kubernetes clusters or edge computing applications.

I’ll share updates on the build process, challenges, and performance insights in future posts. For now, I’m excited to get started and see what this setup can achieve.

Stay tuned for more! If you’ve experimented with the ESXi ARM Fling or have tips for working with the Turing Pi board, I’d love to hear from you.