HOW TO: Fix “Failed to update CPU#0 microcode” PSOD on ESXi 8.0.3i (Minisforum MS-A2)
In this guide from Hancock’s VMware Half Hour, I demonstrate how to fix the Purple Screen of Death (PSOD) error:
“Failed to update CPU#0 microcode”
This issue occurs when installing or booting VMware ESXi 8.0 Update 3i (ESXi 8.0.3i)
on the Minisforum MS-A2 mini workstation.
ESXi 8.0.3i, released on 2 March 2026, includes security fixes for CVE-2025-15467, an OpenSSL vulnerability. If you are running ESXi and have not
patched yet, you should update as soon as possible.
However, this update also includes new AMD CPU microcode updates, which currently
appear to trigger a PSOD during boot on the Minisforum MS-A2 platform.
The Problem
When booting the ESXi 8.0.3i installer (for example from Ventoy) on the
Minisforum MS-A2, the system may fail during boot with the following PSOD message:
The system has found a problem on your machine and cannot continue.
Failed to update CPU#0 microcode
This prevents ESXi from completing the boot process or launching the installer.
Why This Happens
The ESXi 8.0.3i update includes newer AMD microcode updates intended to improve
security and stability. Unfortunately, these updates currently appear to be incompatible with the MS-A2 platform, which results in the microcode update failing
during boot.
When the microcode update fails, ESXi halts the boot process and displays the PSOD.
The Workaround
Until VMware releases a permanent fix, the issue can be worked around by using a kernel boot option during ESXi startup.
Steps to Fix
Boot the ESXi 8.0.3i installer.
When the ESXi boot screen appears, press Shift + O.
This opens the ESXi boot options.
Add the required kernel option shown in the video.
Press Enter to continue booting.
With the boot option applied, ESXi should boot successfully on the Minisforum MS-A2.
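The exact kernel option is shown in the video. As a hedged illustration only, the boot option commonly documented for skipping CPU microcode updates in ESXi takes the form below — treat it as an assumption and confirm it against the video or VMware documentation before relying on it:

```shell
# At the ESXi boot screen, press Shift + O and append the option to the boot line.
# microcodeUpdate=FALSE tells the VMkernel to skip applying its bundled microcode.
# Assumption: verify this is the option demonstrated in the video before using it.
microcodeUpdate=FALSE
```

Note that skipping the microcode update means the CPU continues running the firmware-provided microcode, so it is worth applying any BIOS updates Minisforum releases for the MS-A2.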
Video Walkthrough
Watch the full walkthrough below where I demonstrate the issue and apply the workaround.
What You Will Learn
What changed in ESXi 8.0.3i
Why AMD microcode updates trigger a PSOD
How to reproduce the issue during boot
The Shift + O ESXi boot workaround
How to confirm the fix works successfully
Credits
Huge thanks to members of the VMware community who investigated and documented this issue:
Stephen Wagner
Patrick Kernstock
vAndu
Martin Gustafsson
Their research and testing helped identify the workaround shown in this video.
HOW TO: Create a Custom VMware ESXi 9.0 ISO for VCF 9.0 with the Realtek Driver (PowerCLI)
Author: Andy – Andysworld.org.uk
Good news for homelab enthusiasts! We finally have a working Realtek network driver for ESXi 9.0. In this guide, I’ll show you step-by-step how to create a customised VMware ESXi 9.0 ISO for VCF 9.0 with the Realtek driver injected (slipstreamed) using PowerShell 7 and VMware PowerCLI.
Why This Matters
If you’re running ESXi in a homelab environment using consumer hardware, Mini PCs, or whitebox builds, Realtek NIC support has historically been a challenge. With the latest Realtek driver available, we can now build a custom ESXi 9.0 ISO that works perfectly in lab environments.
This process allows you to:
Inject the Realtek driver into the ESXi 9.0 image profile
Create a bootable ISO installer
Export an offline ZIP bundle
Important Disclaimer
ESXi 9.0 depot files require a valid Broadcom contract. I do not distribute depot files, customised ISOs, or any Broadcom software. Please obtain required files through official channels.
Once the build completes, you should have both a customised ISO and a ZIP bundle ready for deployment in your homelab.
Method 2 – Scripted Approach
If you prefer automation, you can use a PowerShell script to perform the entire process in one go. The script automates:
Loading depot files
Cloning image profiles
Injecting the Realtek driver
Exporting ISO and ZIP bundles
This method is ideal for repeat builds or lab rebuilds.
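As a sketch of what the scripted approach automates, the core PowerCLI ImageBuilder workflow looks like the following. The depot and driver file names, the profile names, and the "r8168" package name are all examples I have substituted in — your depot must come from Broadcom, and the actual script is the authoritative version:

```powershell
# Requires PowerShell 7 with the VMware PowerCLI module installed.
# File and package names below are illustrative placeholders.
Add-EsxSoftwareDepot .\VMware-ESXi-9.0-depot.zip
Add-EsxSoftwareDepot .\Realtek-driver-offline-bundle.zip

# Clone the standard profile so the original stays untouched.
New-EsxImageProfile -CloneProfile "ESXi-9.0-standard" `
    -Name "ESXi-9.0-Realtek" -Vendor "Homelab"

# Inject the Realtek driver package into the cloned profile.
Add-EsxSoftwarePackage -ImageProfile "ESXi-9.0-Realtek" -SoftwarePackage "r8168"

# Export both a bootable ISO and an offline ZIP bundle.
Export-EsxImageProfile -ImageProfile "ESXi-9.0-Realtek" -ExportToIso -FilePath .\ESXi-9.0-Realtek.iso
Export-EsxImageProfile -ImageProfile "ESXi-9.0-Realtek" -ExportToBundle -FilePath .\ESXi-9.0-Realtek.zip
```

Cloning the profile rather than editing the standard one means a failed build never leaves you without a known-good image to fall back on.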
Who Is This For?
Homelab enthusiasts
VMware learners
Mini PC / Whitebox ESXi users
VCF 9.0 lab deployments
This guide is intended for lab and educational use only, not production environments.
Final Thoughts
Creating a custom ESXi image is a valuable skill for anyone running a VMware homelab. With the Realtek driver now available for ESXi 9.0, lab builders can continue using affordable hardware while staying current with VMware releases.
If you found this guide helpful, check out more VMware content here at Andysworld.org.uk.
HOW TO: Create a Custom VMware ESXi 8.0 ISO with the Realtek Driver (PowerCLI)
In this post, we’re going to walk through how to create a customised VMware ESXi 8.0 ISO with the Realtek network driver injected. This is particularly useful for homelab users running ESXi on consumer or whitebox hardware where Realtek NICs are common.
With the release of the ESXi 8.0 Realtek Driver Fling, Realtek-based systems can once again be used for lab and learning environments without unsupported hacks.
What You Will Learn
What ESXi depot files and image profiles are
How to inject (slipstream) the Realtek driver into ESXi 8.0
How to build a custom ESXi ISO and ZIP bundle
Manual and scripted methods using PowerShell
Prerequisites
Before starting, ensure you have the following:
PowerShell 7 (required)
VMware PowerCLI (or VMware Cloud Foundation PowerCLI)
Python 3.12
ESXi 8.0 depot ZIP file
Realtek Driver Fling ZIP file
Note: ESXi 8.0.3e and 8.0.3f depot files are publicly available. Later versions such as 8.0.3g and 8.0.3h require a valid Broadcom support contract. I do not distribute depot files or customised ISOs.
If you prefer automation, a PowerShell script called Hancocks-VMware-Half-Hour-ESXi-Customizer1.ps1 is available from GitHub and automates the entire process.
As shown in the video, even scripts can break — troubleshooting is part of the learning process.
Installation and Verification
Boot your ESXi host using the custom ISO
Complete the ESXi installation
Verify that the Realtek NIC is detected during install
If the driver has been injected correctly, the Realtek network adapter will be visible and usable within ESXi.
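Once the host is up, you can confirm the NIC from an SSH session using standard ESXi shell commands (output will vary with your hardware):

```shell
# List PCI devices and filter for Realtek adapters.
lspci | grep -i realtek

# Confirm ESXi has claimed the adapter as a usable vmnic.
esxcli network nic list
```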
Important Notes
I do not provide ESXi depot files
I do not provide customised ESXi ISOs
I have no rights to distribute Broadcom software
This guide is intended for lab and homelab use only
Final Thoughts
This is a major step forward for the homelab community. With the Realtek Driver Fling, ESXi 8.0 is once again a viable option on affordable hardware.
Thanks for reading, and as always — happy virtualising!
Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 9.0
Running VMware ESXi 9.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.
In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.
What This Guide Covers
Why Realtek NICs are limited by default on ESXi 9.0
Where to download the official Broadcom Fling driver
Installing the driver using esxcli
Rebooting safely and verifying NIC availability
Supported Realtek Network Adapters
The driver demonstrated in this guide supports the following Realtek PCIe devices:
RTL8111 – 1GbE
RTL8125 – 2.5GbE
RTL8126 – 5GbE
RTL8127 – 10GbE
Driver Installation Command
Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using:
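The command takes the form below. The ZIP file name matches the build shown in the video and will differ for newer driver releases, and the datastore path is an example — adjust both to wherever you copied the driver:

```shell
# Host must be in maintenance mode; -d points at the driver component ZIP
# on the datastore (path and file name are examples).
esxcli software component apply -d /vmfs/volumes/datastore1/VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip
```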
After installation, a reboot is required for the new network interfaces to become available.
Video Chapters
00:00 - Start
00:03 - Welcome to Hancock's VMware Half Hour
00:37 - Today's video - HOW TO Install Realtek driver on ESXi 9.0
00:55 - Broadcom Released the Realtek Driver fling in November 2025
01:55 - Minisforum MS-A2 - VCF 9.0 Homelab of the Year 2025 !
02:26 - Available as a Broadcom Fling - Tech Preview - not for production
02:55 - I'm not a fan of Realtek let it be known!
03:11 - Go to Broadcom Fling Portal site to download - https://support.broadcom.com/group/ecx/productdownloads?subfamily=Flings&freeDownloads=true
03:22 - Download the driver don't forget to Accept the Agreement!
03:51 - Enable SSH on Host, and use WinSCP to copy to local datastore
04:31 - Whoops Zoom is broke again!
05:07 - Connect to host using SSH
05:22 - Use lspci to show PCI devices in the host
06:05 - Use grep - lspci | grep Realtek
07:01 - Install the driver using esxcli software component apply -d /VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip
07:59 - A reboot is required, reboot the server
08:36 - Reboot server
09:20 - The reason for the 10th Outtake!
10:01 - Login to ESXi 9.0 using HTML Client
10:51 - Realtek driver is installed and network interfaces are available for use
11:07 - HenryChan1973 this video is for you!
12:23 - Thanks for Watching
Final Thoughts
This Broadcom Fling makes ESXi 9.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.
Huge thanks to Henrychan1973 for their contribution and support.
If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.
Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 8.0
Running VMware ESXi 8.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.
In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.
What This Guide Covers
Why Realtek NICs are limited by default on ESXi 8.0
Where to download the official Broadcom Fling driver
Installing the driver using esxcli
Rebooting safely and verifying NIC availability
Supported Realtek Network Adapters
The driver demonstrated in this guide supports the following Realtek PCIe devices:
RTL8111 – 1GbE
RTL8125 – 2.5GbE
RTL8126 – 5GbE
RTL8127 – 10GbE
Driver Installation Command
Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using:
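The install command follows the usual esxcli component pattern. The file name below is a deliberate placeholder — substitute the actual name of the ESXi 8.0 driver ZIP you downloaded from the Fling portal:

```shell
# Host in maintenance mode; replace the placeholder with the real file name.
esxcli software component apply -d /vmfs/volumes/datastore1/<realtek-driver-component>.zip
```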
After installation, a reboot is required for the new network interfaces to become available.
Video Chapters
00:00 - Intro
00:06 - Welcome to Hancock's VMware Half Hour
00:31 - Today’s Video – Minisforum MS-A2
01:01 - Installing the ESXi Realtek Driver for ESXi 8.0
01:16 - Shoutout to member Henrychan1973!
02:03 - HTML Client view of network interfaces
03:00 - Broadcom engineering compiled a driver for ESXi 8.0
04:00 - Driver is available as a Broadcom Fling
05:00 - Download the driver from Broadcom Fling portal
05:44 - WinSCP – Copy driver ZIP to ESXi datastore
06:14 - Put host into maintenance mode
07:11 - Only three interfaces supported out of the box on MS-A2
07:16 - Start an SSH session using PuTTY
07:34 - Using lspci | grep Realtek
08:22 - Supported Realtek PCIe devices
08:35 - Installing the driver using esxcli
09:59 - Whoops! Typo!
10:37 - Can you spot it?
11:08 - Driver installed – reboot required
11:27 - Nano KVM issue accepting root password?
11:41 - Reboot via the GUI
12:30 - MS-A2 restarting
13:42 - Driver installed and Realtek interfaces available
14:54 - Thanks to Henrychan1973!
15:15 - Thanks for watching
Final Thoughts
This Broadcom Fling makes ESXi 8.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.
Huge thanks to Henrychan1973!
If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.
By The Power Of UnRAID – The Secret Reveal Of ESXi And Windows 11 VMs
For the last few episodes of Hancock’s VMware Half Hour, we have been quietly building something a little different.
On the surface it looked like a simple DIY UNRAID NAS project and a couple of Windows 11 P2V demonstrations.
In reality, everything was running inside virtual machines on an UnRAID host.
In Part 7 of the DIY UNRAID NAS series, we finally pull back the curtain and reveal what has really been powering the lab:
UnRAID running nested ESXi and Windows 11 VMs, complete with PCI passthrough.
This post walks through the idea behind the episode, how it ties back to earlier parts, and why I keep saying,
“By the power of UnRAID.”
Recap: Parts 6, 100 and 101
If you have been following along you will have seen:
Part 6 – Installing and testing Samsung 990 PRO NVMe drives in the Intel NUC based NAS.
Part 100 – Performing P2V migrations of Windows 11 systems.
Part 101 – Continuing the Windows 11 P2V work and refining the process.
In those episodes the star of the show appeared to be a physical Windows 11 machine and a separate ESXi host called ESXi052.
In Part 7 we reveal that this was deliberately misleading. Both the Windows 11 system and the ESXi host were in fact virtual machines.
The Secret: Everything Was A Virtual Machine
Part 7 opens by jumping back to those previous episodes and then revealing the twist:
The “physical” Windows 11 machine you saw on screen was actually a Windows 11 VM.
The ESXi host ESXi052 that we used for P2V work was also a VM.
The same VM was used in Part 6 when we installed and tested the NVMe drives.
In other words, the entire recent run of content has been driven by virtual machines on UnRAID.
The NVMe upgrades, the Windows 11 P2Vs, and the ESXi demonstrations were all happening inside VMs, not on bare metal.
Windows 11 With PCI Passthrough
One of the key enabling features in this setup is PCI passthrough on UnRAID.
By passing through hardware devices such as NVMe controllers or GPUs directly into a Windows 11 VM,
we can test and demonstrate “bare metal like” performance while still keeping everything virtual.
In the video we show Windows 11 running with PCI passthrough on UnRAID, giving the VM direct access to the hardware.
This is ideal for lab work, testing, and for scenarios where you want to push a homelab system without dedicating separate physical machines.
Nested ESXi 8.0 On UnRAID
The next part of the reveal is nested virtualization.
UnRAID is hosting a VMware vSphere Hypervisor ESXi 8.0 VM which in turn can run its own VMs.
This gives an incredibly flexible environment:
UnRAID manages the storage, cache, parity and core virtual machine scheduling.
ESXi runs nested on top for VMware specific testing and lab work.
Windows 11 runs as another VM on the same UnRAID host, with PCI passthrough as needed.
With this approach a single Intel NUC based NAS can simulate a much larger lab
while still being compact and power efficient.
By The Power Of UnRAID
To celebrate the reveal I created a fun meme inspired by the classic “By the power of Grayskull” scene.
In our version, “By the power of UnRAID” raises ESXi and Windows 11 high above the NUC,
showing that UnRAID is the platform empowering the whole setup.
Whether you are running nested ESXi, Windows 11 with PCI passthrough, or a mixture of containers and VMs,
UnRAID makes it straightforward to combine storage flexibility with powerful virtualization features.
The Power Of UnRAID In The Homelab
The big takeaway from Part 7 is simple:
A single UnRAID host can consolidate multiple roles: NAS, hypervisor, and workstation.
You can experiment with ESXi 8.0, Windows 11, and PCI passthrough without building a large rack of servers.
By keeping everything virtual you gain snapshots, flexibility, and the ability to rebuild or clone systems quickly.
For homelab enthusiasts, students, and anyone who wants to learn VMware or Windows 11 in depth,
this approach offers a lot of power in a very small footprint.
Watch The Episode
If you want to see the full walkthrough, including the moment the secret is revealed,
watch Part 7 of the DIY UNRAID NAS series on Hancock’s VMware Half Hour.
You will see exactly how the Windows 11 VM, the nested ESXi host, and UnRAID all fit together.
Conclusion
Part 7 closes the loop on a long running lab story.
What looked like separate physical systems were really virtual machines,
carefully layered on top of an UnRAID powered NAS.
By the power of UnRAID, we have been able to demonstrate NVMe upgrades, Windows 11 P2Vs, and ESXi 8.0 testing
all on a single compact platform.
If you are planning a new homelab or want to refresh an existing one,
consider what UnRAID plus nested ESXi and Windows 11 VMs could do for you.
If you’ve ever attempted a P2V migration using VMware vCenter Converter Standalone 9.0, you’ll know that the product can be as unpredictable as a British summer. One minute everything looks fine, the next minute you’re stuck at 91%, the Helper VM has thrown a wobbly, and the Estimated Time Remaining has declared itself fictional.
And yet… when it works, it really works.
This post is the follow-up to Part 100: HOW TO: P2V a Linux Ubuntu PC, where I walked through the seed conversion. In Part 101, I push things further and demonstrate how to synchronize changes — a feature newly introduced for Linux sources in Converter 9.0.
I won’t sugar-coat it: recording this episode took over 60 hours, spread across five days, with 22 hours of raw footage just to create a 32-minute usable video. Multiple conversion attempts failed, sequences broke, the change tracker stalled, and several recordings had to be completely redone. But I was determined to prove that the feature does work — and with enough perseverance, patience, and the power of video editing, the final demonstration shows a successful, validated P2V Sync Changes workflow.
Why Sync Changes Matters
Traditionally, a P2V conversion requires a maintenance window or downtime. After the initial seed conversion, any new data written to the source must be copied over manually, or the source must be frozen until cutover.
Converter 9.0 introduces a long-requested feature for Linux environments:
Synchronize Changes
This allows you to:
Perform an initial seed P2V conversion
Keep the source machine running
Replicate only the delta changes
Validate the final migration before cutover
It’s not quite Continuous Replication, but it’s closer than we’ve ever had from VMware’s free tooling.
Behind the Scenes: The Reality of Converter 9.0
Converter 9.0 is still fairly new, and “quirky” is an understatement.
Some observations from extensive hands-on testing:
The Helper VM can misbehave, especially around networking
At 91%, the Linux change tracker often stalls
The job status can report errors even though the sync completes
Estimated Time Remaining is not to be trusted
Each sync job creates a snapshot on the destination VM
Converter uses rsync under the hood for Linux sync
Despite all this, syncing does work — it’s just not a single-click process.
Step-by-Step Overview
Here’s the condensed version of the procedure shown in the video:
Start a seed conversion (see Part 100).
Once complete, use SSH on the source to prepare a 10GB test file for replication testing.
Run an MD5 checksum on the source file.
Select Synchronize Changes in Converter.
Let the sync job run — and don’t panic at the 91% pause.
Review any warnings or errors.
Perform a final synchronization before cutover.
Power off the source, power on the destination VM.
Verify the replicated file using MD5 checksum on the destination.
Celebrate when the checksums match — Q.E.D!
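The checksum verification in steps 3 and 9 can be illustrated with a small stand-in — a 10 MB file instead of 10 GB, and a local copy standing in for Converter's sync pass (paths are examples):

```shell
# Create a test file on the "source" and checksum it.
dd if=/dev/urandom of=/tmp/p2v-test.bin bs=1M count=10 2>/dev/null
SRC_SUM=$(md5sum /tmp/p2v-test.bin | awk '{print $1}')

# In the real workflow this copy is performed by Converter's Synchronize Changes.
cp /tmp/p2v-test.bin /tmp/p2v-test-replica.bin
DST_SUM=$(md5sum /tmp/p2v-test-replica.bin | awk '{print $1}')

# Matching checksums prove the replica is byte-identical to the source.
[ "$SRC_SUM" = "$DST_SUM" ] && echo "checksums match"
```

The same comparison at full scale is what turns "the job reported success" into actual proof that the migrated data is intact.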
Proof of Success
In the final verification during filming:
A 10GB file was replicated
Both source and destination MD5 checksums matched
The Linux VM booted cleanly
Snapshot consolidation completed properly
Despite five days of interruptions, failed jobs, and recording challenges, the outcome was a successful, consistent P2V migration using Sync Changes.
Watch the Full Video (Part 101)
If you want to see the whole process — the setup, the problems, the explanations, the rsync behaviour, and the final success — the full video is now live on my YouTube channel:
This video was one of the most challenging pieces of content I’ve created. But the end result is something I’m genuinely proud of — a real-world demonstration of a feature that many administrators will rely on during migrations, especially in environments where downtime is limited.
Converter 9.0 may still have rough edges, but with patience, persistence, and a bit of luck, it delivers.
Thanks for reading — and as always, thank you for supporting Andysworld! Don’t forget to like, share, or comment if you found this useful.
DIY UnRAID NAS Build – Part 4: Installing a 10GbE Intel X710-DA NIC (Plus an Outtake!)
Welcome back to another instalment of my DIY UnRAID NAS Build series.
If you have been following along, you will know this project is built around an Intel NUC chassis that I have been carefully (and repeatedly!) taking apart to transform into a compact but powerful UnRAID server.
In Part 4, we move on to a major upgrade: installing a 10GbE Intel X710-DA network interface card. And yes, the eagle-eyed among you will notice something unusual at the beginning of the video, because this episode starts with a blooper. I left it in for your entertainment.
A Fun Outtake to Start With
Right from the intro, things get a little chaotic. There is also a mysterious soundtrack playing, and I still do not know where it came from.
If you can identify it, feel free to drop a comment on the video.
Tearing Down the Intel NUC Again
To install the X710-DA NIC, the NUC requires almost complete disassembly:
Remove the back plate
Remove the backplane retainer
Take off the side panels
Open the case
Remove the blanking plate
Prepare the internal slot area
This NUC has become surprisingly modular after taking it apart so many times, but it still puts up a fight occasionally.
Installing the Intel X710-DA 10GbE NIC
Once the case is stripped down, the NIC finally slides into place. It is a tight fit, but the X710-DA is a superb card for a NAS build:
Dual SFP+ ports
Excellent driver support
Great performance in VMware, Linux, and Windows
Ideal for high-speed file transfers and VM workloads
If you are building a NAS that needs to move data quickly between systems, this NIC is a great option.
Reassembly
Next, everything goes back together:
Side panels reinstalled
Back plate fitted
Case secured
System ready for testing
You would think after doing this several times I would be quicker at it, but the NUC still has a few surprises waiting.
Booting into Windows 11 and Driver Issues
Once everything is reassembled, the NUC boots into Windows 11, and immediately there is a warning:
Intel X710-DA: Not Present
Device Manager confirms it. Windows detects that something is installed, but it does not know what it is.
Time to visit the Intel website, download the correct driver bundle, extract it, and install the drivers manually.
After a reboot, success. The NIC appears correctly and is fully functional.
Why 10GbE?
For UnRAID, 10GbE significantly improves:
VM migrations
iSCSI and NFS performance
File transfers
Backup times
SMB throughput for Windows and macOS clients
It also future-proofs the NAS for later network upgrades.
The Mystery Soundtrack
Towards the end of the video I ask again: what is the music playing in the background?
I genuinely have no idea, so if you recognise it, please leave a comment on the video.
Watch the Episode
You can watch the full episode, including all teardown steps, NIC installation, Windows troubleshooting, and the blooper, here:
Thank You for Watching and Reading
Thank you for following along with this NAS build.
Part 5 will continue the series, so stay tuned.
If you have built your own UnRAID NAS or have a favourite NIC for homelab projects, feel free to comment and share your experience.
Minisforum MS-A2 Series Part 15: Hyper-V to Proxmox 9.0 Migration (Ultimate #homelab)
In this episode of Hancock’s VMware Half Hour, I walk you through migrating Hyper-V virtual machines to Proxmox 9.0 on the Minisforum MS-A2.
We’ll cover connecting to the Proxmox server via SSH, exploring datastores, working with VHDX files, and running migration demos—including moving a full VM in under 60 seconds! This step-by-step guide shows how easy it is to transition workloads from Hyper-V into Proxmox for your #homelab or production environment.
Whether you’re testing, learning, or planning a migration, this video gives you the tools and knowledge to make it happen smoothly.
Scripts are here on GitHub – https://github.com/einsteinagogo/Hyper-VtoProxmoxMigration.git
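At its core, a Hyper-V to Proxmox move means converting the VHDX disk and attaching it to a Proxmox VM. A typical sketch is shown below — the VM ID, storage name, and paths are examples of mine, and the GitHub scripts linked above are the authoritative version of the workflow:

```shell
# Convert the Hyper-V VHDX disk to qcow2 on the Proxmox host
# ('120' and 'local-lvm' are example VM ID and storage names).
qemu-img convert -p -O qcow2 /mnt/share/win11.vhdx /var/lib/vz/images/win11.qcow2

# Import the converted disk into an existing Proxmox VM and attach it.
qm importdisk 120 /var/lib/vz/images/win11.qcow2 local-lvm
qm set 120 --scsi0 local-lvm:vm-120-disk-0
```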
Welcome to Hancock’s VMware Half Hour! In this episode of the Minisforum MS-A2 Series – Part 12 Ultimate #homelab, we take the compact but powerful MS-A2 and push it to the limits by installing VMware vCenter Server 9.0 on ESXi 9.
From installation to configuration and performance benchmarks, I’ll walk you through every step — including DNS setup, deployment options, datastore selection, and SSO configuration. We’ll also run boot speed benchmarks to see just how fast vCenter Server 9.0 can run on the MS-A2. Spoiler: it’s blazing fast. It’s on FIRE!
If you’re thinking of building a small, efficient, and powerful #homelab capable of enterprise-level virtualization, this is the video for you.