Archive for the ‘homelab’ Category

HOW TO: Create a Custom ESXi 8.0 ISO with the Realtek Driver Fling

Wednesday, January 28th, 2026

 

In this post, we’re going to walk through how to create a customised VMware ESXi 8.0 ISO with the Realtek network driver injected. This is particularly useful for homelab users running ESXi on consumer or whitebox hardware where Realtek NICs are common.

With the release of the ESXi 8.0 Realtek Driver Fling, Realtek-based systems can once again be used for lab and learning environments without unsupported hacks.


What You Will Learn

  • What ESXi depot files and image profiles are
  • How to inject (slipstream) the Realtek driver into ESXi 8.0
  • How to build a custom ESXi ISO and ZIP bundle
  • Manual and scripted methods using PowerShell

Prerequisites

Before starting, ensure you have the following:

  • PowerShell 7 (required)
  • VMware PowerCLI (or VMware Cloud Foundation PowerCLI)
  • Python 3.12
  • ESXi 8.0 depot ZIP file
  • Realtek Driver Fling ZIP file

Note: ESXi 8.0.3e and 8.0.3f depot files are publicly available. Later versions such as 8.0.3g and 8.0.3h require a valid Broadcom support contract. I do not distribute depot files or customised ISOs.
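If PowerCLI is not yet installed, it can be pulled from the PowerShell Gallery from within PowerShell 7. A minimal sketch (standard module name; adjust the scope to taste):

```powershell
# Install VMware PowerCLI from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Confirm the Image Builder cmdlets used below are available
Get-Command Add-EsxSoftwareDepot
```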


Method 1: Manual ISO Creation

Add the ESXi Depot

Add-EsxSoftwareDepot .\VMware-ESXi-8.0U3h-25067014-depot.zip

Add the Realtek Driver Depot

Add-EsxSoftwareDepot .\VMware-Re-Driver_1.101.01-5vmw.800.1.0.20613240.zip

List Image Profiles

Get-EsxImageProfile | Select Name

Clone the Standard Profile

$newProfile = New-EsxImageProfile `
 -CloneProfile 'ESXi-8.0U3h-25067014-standard' `
 -Name 'ESXi-8.0U3h-25067014-standard-Realtek-nic' `
 -Vendor "Hancock's VMware Half Hour"

Inject the Realtek Driver

Add-EsxSoftwarePackage `
 -ImageProfile $newProfile `
 -SoftwarePackage "if-re"

Export the Custom ISO

Export-EsxImageProfile `
 -ImageProfile $newProfile `
 -ExportToIso `
 -FilePath "$($newProfile.Name).iso"

Export the ZIP Bundle (Optional)

Export-EsxImageProfile `
 -ImageProfile $newProfile `
 -ExportToBundle `
 -FilePath "$($newProfile.Name).zip"
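Before writing the ISO to USB, it is worth confirming the driver actually landed in the profile. A hedged sketch (image profile objects expose their packages via the VibList property; the name filter is an assumption based on the "if-re" package above):

```powershell
# List the packages in the cloned profile and check the Realtek driver is present
(Get-EsxImageProfile -Name 'ESXi-8.0U3h-25067014-standard-Realtek-nic').VibList |
    Where-Object { $_.Name -like '*re*' } |
    Select-Object Name, Version
```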

Method 2: Scripted Approach

If you prefer automation, a PowerShell script called Hancocks-VMware-Half-Hour-ESXi-Customizer1.ps1 is available from GitHub and automates the entire process.

As shown in the video, even scripts can break — troubleshooting is part of the learning process.


Installation and Verification

  • Boot your ESXi host using the custom ISO
  • Complete the ESXi installation
  • Verify that the Realtek NIC is detected during install

If the driver has been injected correctly, the Realtek network adapter will be visible and usable within ESXi.


Important Notes

  • I do not provide ESXi depot files
  • I do not provide customised ESXi ISOs
  • I have no rights to distribute Broadcom software
  • This guide is intended for lab and homelab use only

Final Thoughts

This is a major step forward for the homelab community. With the Realtek Driver Fling, ESXi 8.0 is once again a viable option on affordable hardware.

Thanks for reading, and as always — happy virtualising!

Andy
Andysworld.org.uk

Minisforum MS-A2 HOW TO: Install the NEW Realtek driver on ESXi 9.0

Wednesday, January 14th, 2026

Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 9.0

Running VMware ESXi 9.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.

In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.


What This Guide Covers

  • Why Realtek NICs are limited by default on ESXi 9.0
  • Where to download the official Broadcom Fling driver
  • Installing the driver using esxcli
  • Rebooting safely and verifying NIC availability

Supported Realtek Network Adapters

The driver demonstrated in this guide supports the following Realtek PCIe devices:

  • RTL8111 – 1GbE
  • RTL8125 – 2.5GbE
  • RTL8126 – 5GbE
  • RTL8127 – 10GbE

Driver Installation Command

Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using the full absolute path to the ZIP (esxcli will not accept a relative path):

esxcli software component apply -d /vmfs/volumes/<datastore>/VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip

After installation, a reboot is required for the new network interfaces to become available.
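Once the host is back up, the install can be sanity-checked from the same SSH session. A sketch (the component name filter and adapter names are assumptions and will vary per system):

```
# Confirm the Realtek component is registered
esxcli software component list | grep -i re-driver

# List network adapters; the Realtek ports should now appear as vmnicX
esxcli network nic list

# Cross-check against the PCI devices seen earlier with lspci
lspci | grep -i realtek
```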


Video Chapters

00:00 - Start
00:03 - Welcome to Hancock's VMware Half Hour
00:37 - Today's video - HOW TO: Install Realtek driver on ESXi 9.0
00:55 - Broadcom Released the Realtek Driver Fling in November 2025
01:55 - Minisforum MS-A2 - VCF 9.0 Homelab of the Year 2025!
02:26 - Available as a Broadcom Fling - Tech Preview - not for production
02:55 - I'm not a fan of Realtek, let it be known!
03:11 - Go to Broadcom Fling Portal site to download - https://support.broadcom.com/group/ecx/productdownloads?subfamily=Flings&freeDownloads=true
03:22 - Download the driver, and don't forget to Accept the Agreement!
03:51 - Enable SSH on Host, and use WinSCP to copy to local datastore
04:31 - Whoops, Zoom is broken again!
05:07 - Connect to host using SSH
05:22 - Use lspci to show PCI devices in the host
06:05 - Use grep - lspci | grep Realtek
07:01 - Install the driver using esxcli software component apply -d /VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip
07:59 - A reboot is required, reboot the server
08:36 - Reboot server
09:20 - The reason for the 10th Outtake!
10:01 - Login to ESXi 9.0 using HTML Client
10:51 - Realtek driver is installed and network interfaces are available for use
11:07 - HenryChan1973 this video is for you!
12:23 - Thanks for Watching

Final Thoughts

This Broadcom Fling makes ESXi 9.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.

Huge thanks to Henrychan1973 for their contribution and support.

If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.

– Andrew Hancock
Hancock’s VMware Half Hour

Minisforum MS-A2 HOW TO: Install the NEW Realtek driver on ESXi 8.0

Sunday, January 11th, 2026


Minisforum MS-A2: How to Install the New Realtek Driver on VMware ESXi 8.0

Running VMware ESXi 8.0 on the Minisforum MS-A2 is a fantastic option for homelabs and edge deployments, but out of the box you may notice that not all Realtek network interfaces are detected.

In this guide, based on my latest episode of Hancock’s VMware Half Hour, I walk through installing the new Broadcom-compiled Realtek driver (available as an official Broadcom Fling) to unlock additional NIC support.


What This Guide Covers

  • Why Realtek NICs are limited by default on ESXi 8.0
  • Where to download the official Broadcom Fling driver
  • Installing the driver using esxcli
  • Rebooting safely and verifying NIC availability

Supported Realtek Network Adapters

The driver demonstrated in this guide supports the following Realtek PCIe devices:

  • RTL8111 – 1GbE
  • RTL8125 – 2.5GbE
  • RTL8126 – 5GbE
  • RTL8127 – 10GbE

Driver Installation Command

Once the driver ZIP has been copied to your ESXi datastore and the host is in maintenance mode, install it using the full absolute path to the ZIP (esxcli will not accept a relative path):

esxcli software component apply -d /vmfs/volumes/<datastore>/VMware-Re-Driver_1.101.00-5vmw.800.1.0.20613240.zip

After installation, a reboot is required for the new network interfaces to become available.
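The maintenance-mode step and the post-install reboot can both be driven from the same SSH session. A minimal sketch using the standard esxcli namespaces:

```
# Enter maintenance mode before applying the component
esxcli system maintenanceMode set --enable true

# ...apply the driver component as shown above...

# Reboot the host so the new driver loads
esxcli system shutdown reboot --reason "Realtek driver install"

# Once the host returns, exit maintenance mode
esxcli system maintenanceMode set --enable false
```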


Video Chapters

00:00 - Intro
00:06 - Welcome to Hancock's VMware Half Hour
00:31 - Today’s Video – Minisforum MS-A2
01:01 - Installing the ESXi Realtek Driver for ESXi 8.0
01:16 - Shoutout to member Henrychan1973!
02:03 - HTML Client view of network interfaces
03:00 - Broadcom engineering compiled a driver for ESXi 8.0
04:00 - Driver is available as a Broadcom Fling
05:00 - Download the driver from Broadcom Fling portal
05:44 - WinSCP – Copy driver ZIP to ESXi datastore
06:14 - Put host into maintenance mode
07:11 - Only three interfaces supported out of the box on MS-A2
07:16 - Start an SSH session using PuTTY
07:34 - Using lspci | grep Realtek
08:22 - Supported Realtek PCIe devices
08:35 - Installing the driver using esxcli
09:59 - Whoops! Typo!
10:37 - Can you spot it?
11:08 - Driver installed – reboot required
11:27 - Nano KVM issue accepting root password?
11:41 - Reboot via the GUI
12:30 - MS-A2 restarting
13:42 - Driver installed and Realtek interfaces available
14:54 - Thanks to Henrychan1973!
15:15 - Thanks for watching

Final Thoughts

This Broadcom Fling makes ESXi 8.0 far more usable on modern mini PCs like the Minisforum MS-A2, especially for homelabbers who rely on multi-gig Realtek networking.

Huge thanks to Henrychan1973!

If this guide helped you, consider subscribing on YouTube and checking out more VMware content on the blog.

– Andrew Hancock
Hancock’s VMware Half Hour

Part 9: DIY UNRAID NAS – Dual 4TB NVMe Cache Upgrade with Live Btrfs RAID1 Rebalance Firmware Flash

Wednesday, January 7th, 2026

 

DIY UnRAID NAS – Part 9: NVMe Upgrades

In Part 9 of the DIY UnRAID NAS series, we finally tackle one of the most requested upgrades —
NVMe cache expansion.

This episode covers upgrading the UnRAID cache pool using Samsung 990 PRO 4TB NVMe SSDs,
walking through the hardware changes, UnRAID configuration, and the impact on performance.


What’s covered in Part 9

  • Removing NVMe devices from PCI passthrough
  • Rebooting and validating UnRAID hardware changes
  • Why UnRAID is used instead of vSAN in the homelab
  • Upgrading and rebalancing the NVMe cache pool
  • Btrfs RAID1 behaviour and live rebalance
  • Firmware considerations for Samsung 990 PRO NVMe drives

Why NVMe Matters in UnRAID

NVMe cache drives dramatically improve Docker, VM, and application performance in UnRAID.
With fast PCIe 4.0 NVMe devices, write amplification is reduced, cache flushes are faster,
and overall system responsiveness improves — especially under mixed workloads.

Unlike enterprise storage platforms, UnRAID allows flexible cache pool configurations,
making it ideal for homelab experimentation without vendor lock-in.
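For the Btrfs RAID1 behaviour and live rebalance covered in the episode, the underlying operations map to standard btrfs-progs commands. A sketch only (UnRAID normally drives this through its GUI, and the /mnt/cache mount point and device name here are illustrative assumptions):

```
# Add the second NVMe device to the existing cache pool
btrfs device add /dev/nvme1n1p1 /mnt/cache

# Live rebalance, converting data and metadata to RAID1 across both devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Verify the new profile and space usage
btrfs filesystem usage /mnt/cache
```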


Hardware Used

  • Samsung 990 PRO 4TB PCIe 4.0 NVMe SSDs
  • PCIe NVMe adapter cards
  • DIY UnRAID NAS platform

Watch Part 9 on YouTube

DIY UnRAID NAS – Part 9: NVMe Upgrades

Watch now on YouTube


Series Playlist

If you’re following the full build from the start, you can find the complete
DIY UnRAID NAS playlist here:


DIY UnRAID NAS Playlist


As always, thanks for watching, and if you’ve got questions about NVMe cache pools,
Btrfs behaviour, or UnRAID design decisions, drop them in the comments.

– Andy, Hancock’s VMware Half Hour

Part 8: DIY UNRAID NAS: Preparing the Zero-Downtime NVMe Upgrade (512GB -> 4TB)

Tuesday, December 9th, 2025

DIY UNRAID NAS Part 8: Preparing the Zero-Downtime NVMe Upgrade

Welcome back to Hancock’s VMware Half Hour and to Part 8 of the DIY UNRAID NAS build series.
In this episode, I walk through the planning and preparation for a zero-downtime NVMe cache
upgrade on my homelab UNRAID NAS, running on an Intel NUC 11 Extreme.

The goal of this two-part upgrade is to move from a single 512 GB XPG NVMe cache device
to a pair of Samsung 990 PRO 4 TB NVMe SSDs, ending up with a high-capacity Btrfs RAID 1
cache pool for VMs, Docker, and PCIe passthrough workloads. Part 8 focuses on the design,
constraints, and first hardware changes. Part 9 completes the migration and final Btrfs rebalance.


Video: DIY UNRAID NAS Part 8

You can watch the full episode on YouTube here:

DIY UNRAID NAS – Part 8: Preparing the Zero-Downtime NVMe Upgrade
 


What This Episode Covers

Part 8 is all about understanding the current environment, identifying limitations in UNRAID,
and laying the groundwork for a non-destructive storage upgrade. In the video, I cover:

  • How my UNRAID array and cache devices are currently configured.
  • The future hardware specifications for the homelab UNRAID NAS.
  • Plans for using enterprise U.2 NVMe devices in future expansions.
  • Why we cannot simply create another cache pool in UNRAID to solve this.
  • A staged plan to replace the old 512 GB XPG NVMe with 4 TB Samsung 990 PRO drives.
  • How to safely stop Docker and virtual machines before making hardware changes.
  • Using PCIe passthrough (VMDirectPath I/O) to present NVMe devices directly to a Windows 11 VM.
  • Updating Samsung 990 PRO firmware from within the passthrough VM using Samsung Magician.
  • Confirming that all Samsung NVMe drives are genuine and authenticated.
  • Reviewing the NVMe slot layout in the Intel NUC 11 Extreme (2 x Gen 3 and 2 x Gen 4).

Chapter Breakdown

Here is the chapter list from the video for quick navigation:

  • 00:00 – Intro
  • 00:05 – Welcome to Hancock’s VMware Half Hour
  • 00:47 – This is Part 8 DIY UNRAID NAS
  • 01:21 – Explanation of UNRAID and how I have set up UNRAID
  • 04:20 – Explanation of UNRAID array and cache devices
  • 04:51 – Future specifications for homelab UNRAID NAS
  • 05:54 – Future use of enterprise NVMe U.2 device
  • 09:42 – I have a cunning plan says Andy
  • 12:02 – We cannot create another cache pool
  • 12:56 – Stop Docker and VMs
  • 13:10 – Shutdown ESXi on UNRAID
  • 13:28 – Shutdown Windows 11 on UNRAID
  • 14:22 – New NVMe installed, old XPG removed
  • 15:16 – PCIe passthrough demonstration configuration for UNRAID VMs
  • 17:14 – Restart NAS
  • 17:29 – NVMe devices are enabled for PCI passthrough
  • 18:11 – VMware VM Direct I/O (PCI passthrough) explained
  • 18:46 – Configure Windows 11 VM for PCI passthrough
  • 20:00 – Samsung Magician advising firmware update available
  • 20:48 – Update firmware of Samsung 990 PRO from Windows 11
  • 23:14 – Confirmation that all Samsung NVMe are authenticated
  • 26:22 – NVMe slots in Intel NUC 11 Extreme are 2 x Gen 3 and 2 x Gen 4
  • 27:06 – Remove NVMe devices from Windows 11 VM

The Cunning Plan: A Staged, Non-Destructive NVMe Upgrade

The key challenge in this build is upgrading from a 512 GB NVMe cache to larger 4 TB devices
without wiping the array or losing data. Because UNRAID cannot create an additional cache pool
in this configuration, we need a staged process.

In Part 8, I outline and begin the following upgrade path:

  1. Review the current UNRAID array and cache configuration.
  2. Plan the future target: dual 4 TB NVMe Btrfs RAID 1 cache pool.
  3. Shut down Docker and VM services cleanly.
  4. Power down the NAS and remove the old XPG NVMe.
  5. Install the first Samsung 990 PRO 4 TB NVMe drive.
  6. Boot the system and confirm the new NVMe is detected.
  7. Use PCIe passthrough to present the NVMe to a Windows 11 VM for firmware checks and updates.
  8. Update NVMe firmware using Samsung Magician and validate that the drive is genuine.

The actual Btrfs pool expansion and final dual-drive RAID 1 configuration are completed
in Part 9, where the second 4 TB NVMe is installed and the cache pool is fully migrated.
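Step 6 above, confirming the new NVMe is detected, can be done from any Linux shell on the box. A quick sketch using common tools (nvme-cli may need installing first):

```
# List NVMe block devices with model and size
lsblk -d -o NAME,MODEL,SIZE

# Or, with nvme-cli installed, show controller and namespace details
nvme list

# PCI-level view, useful when a drive enumerates but exposes no namespace
lspci | grep -i "non-volatile"
```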


PCIe Passthrough and Firmware Updates

A significant part of the episode is dedicated to demonstrating PCIe passthrough
(VMDirectPath I/O) from VMware ESXi into UNRAID and then into a Windows 11 virtual machine.
This allows the Samsung 990 PRO NVMe to be exposed directly to Windows for:

  • Running Samsung Magician.
  • Checking for and applying firmware updates.
  • Verifying drive health and authenticity.

This approach is particularly useful in homelab environments where the hardware is
permanently installed in a server chassis, but you still want to access vendor tools
without moving drives between physical machines.


Intel NUC 11 Extreme NVMe Layout

Towards the end of the video, I review the NVMe slot layout inside the Intel NUC 11 Extreme.
This platform provides:

  • 2 x PCIe Gen 4 NVMe slots.
  • 2 x PCIe Gen 3 NVMe slots.

Understanding which slots are Gen 3 and which are Gen 4 is critical when deciding where to place
high performance NVMe devices such as the Samsung 990 PRO, especially when planning for
future workloads and potential enterprise U.2 NVMe expansion.


What Comes Next in Part 9

Part 8 ends with the new 4 TB NVMe installed, firmware updated, and the environment ready
for the next stage. In Part 9, I complete the migration by:

  • Replacing the remaining 512 GB cache device with a second 4 TB Samsung 990 PRO.
  • Rebuilding the Btrfs cache pool as a dual-drive RAID 1 configuration.
  • Verifying capacity, redundancy, and performance.

If you are interested in UNRAID, NVMe-based cache pools, or nested VMware and PCIe
passthrough in a small form factor system like the Intel NUC 11 Extreme, this two-part
upgrade is a practical, real-world example of how to approach it safely.


Related Content

  • DIY UNRAID NAS build playlist on Hancock’s VMware Half Hour (YouTube).
  • Previous parts in the series covering hardware assembly, base UNRAID configuration, and initial NVMe installation.
  • Upcoming parts focusing on performance testing, further storage expansion, and homelab workloads.

Part 6: DIY NAS – Installing Two Samsung 990 Pro Gen 4 NVMe M.2 SSD in an Intel NUC 11 Extreme

Monday, December 1st, 2025

 

Welcome back to Hancock’s VMware Half Hour and to Part 6 of the DIY UNRAID NAS build series.

In this episode I install two Samsung 990 PRO Gen 4 NVMe M.2 SSDs into the Intel NUC 11 Extreme.
The NUC 11 Extreme has a surprisingly capable NVMe layout, providing:

  • 2 × PCIe Gen 4 NVMe slots
  • 2 × PCIe Gen 3 NVMe slots

The video walks through verifying the drives, opening the NUC, accessing both NVMe bays, and installing each SSD step-by-step, including the compute board NVMe slot that is a little more awkward to reach.
The episode finishes in Windows 11 where the drives are validated using Disk Manager and Samsung Magician to confirm that both NVMe SSDs are genuine.


What Is Covered in Part 6

  • Checking the authenticity of Samsung 990 PRO NVMe SSDs
  • Accessing both the bottom and compute-board NVMe slots in the Intel NUC 11 Extreme
  • Installing and securing each NVMe stick
  • Reassembling the NUC 11 Extreme, including panels, shrouds, NIC and PCIe bracket
  • Confirming both NVMe drives in Windows 11
  • Using Samsung Magician to verify that the drives are genuine
  • Preparing the NVMe storage for use in later parts of the UNRAID NAS series

Chapters

00:00 - Intro
00:07 - Welcome to Hancock's VMware Half Hour
00:29 - In Part 6 we are going to fit Samsung 990 PRO NVMe
01:24 - Intel NUC 11 Extreme has 2 x Gen3, 2 x Gen4 slots
01:45 - Check the NVMe are genuine
04:20 - Intel NUC 11 Extreme - open NVMe bottom panel
05:23 - Install first NVMe stick
06:33 - Remove NVMe screw
07:06 - Insert and secure NVMe stick
07:30 - Secure bottom NVMe panel cover
08:40 - Remove PCIe securing bracket
08:54 - Remove side panel
09:11 - Remove NIC
09:44 - Remove fan shroud
09:59 - Open compute board
12:23 - Installing the second NVMe stick
14:36 - Secure NVMe in slot
16:26 - Compute board secured
19:04 - Secure side panels
20:59 - Start Windows 11 and login
21:31 - Check in Disk Manager for NVMe devices
22:40 - This Windows 11 machine is the machine used in Part 100/101
22:44 - Start Disk Management to format the NVMe disks
23:43 - Start Samsung Magician to confirm genuine
25:25 - Both NVMe sticks are confirmed as genuine
25:54 - Thanks for watching

About This Build

This DIY NAS series focuses on turning the Intel NUC 11 Extreme into a compact but powerful UNRAID NAS with NVMe performance at its core.
The Samsung 990 PRO NVMe drives installed in this part will provide a significant uplift in storage performance and will feature heavily in later episodes when the NAS is tuned and benchmarked.


Support the Series

If you are enjoying the series so far, please consider supporting the channel and the content:

  • Like the video on YouTube
  • Subscribe to the channel so you do not miss future parts
  • Leave a comment or question with your own experiences or suggestions
  • Follow along for Parts 7, 8, 9 and beyond

Thank you for watching and for following the build.


More From Hancock’s VMware Half Hour

Enjoy the build and stay tuned for upcoming parts where we continue configuring UNRAID and optimising the NAS.
Do not forget to like, comment and subscribe for more technical walkthroughs and builds.



Part 5: DIY UNRAID NAS: Making Use of the Free Internal USB Headers

Sunday, November 30th, 2025

 

 

Welcome back to Andysworld!*™ and to Part 5 of my DIY UNRAID NAS series.

In this instalment, I explore a small but very useful upgrade: using the free internal USB headers inside the Intel NUC Extreme 11th Gen to hide the UnRAID boot USB neatly inside the chassis. This keeps the build clean, reduces the risk of accidental removal, and makes the system feel much more like a dedicated appliance.


Why Move the UnRAID USB Inside the NUC?

UNRAID must boot from a USB flash drive. Most people leave it plugged into an external port on the back of the system, but the NUC Extreme includes internal USB 2.0 header pins.

By using those internal headers, we can:

  • Keep the USB drive inside the case
  • Free up an external USB port
  • Reduce the chance of accidental removal or damage
  • Improve the overall look and tidiness of the build
  • Make the system feel more like a self-contained NAS appliance

Credit and Hardware Used

This idea came from a very useful Reddit thread:

Reddit source: https://tinyurl.com/yd95mu37
Credit: Thanks to “JoshTheMoss” for highlighting the approach and the required cable.

Adapter Cable

The adapter used in this build was purchased from DeLock:

Adapter product page: https://www.delock.com/produkt/84834/merkmale.html

This adapter converts the internal USB header on the motherboard to a standard USB-A female connector, which is ideal for plugging in the UnRAID boot drive.


What Happens in Part 5

In this episode I:

  • Open up the Intel NUC Extreme 11th Gen chassis
  • Locate the unused internal USB header on the motherboard
  • Prepare the UnRAID USB stick, wrapping it in Kapton tape for additional insulation and protection
  • Install the DeLock internal USB adapter
  • Route and position the cable neatly inside the chassis
  • Connect the USB stick to the internal adapter (with the usual struggle of fitting fingers into a very small case)
  • Confirm that the system still boots correctly from the now-internal USB device
  • Give a short preview of what is coming next in Part 6

Video Chapters

00:00 – Intro
00:07 – Welcome to Hancock's VMware Half Hour
00:47 – Using the free internal USB headers
01:05 – Reddit Source – https://tinyurl.com/yd95mu37
01:17 – Kudos to "JoshTheMoss"
02:32 – The Reddit Post
02:44 – Purchased from – https://www.delock.com/produkt/84834/merkmale.html
02:59 – Intel NUC Extreme 11th Gen close-up
03:58 – Internal USB header left disconnected
04:36 – USB flash drive is used for UnRAID
04:49 – Wrapped USB flash drive in Kapton Tape
05:31 – Fit the cable with fat fingers
07:09 – Part 6 – NVMe Time
07:51 – 4 × 4 TB Samsung 990 PRO NVMe Gen 4
08:25 – Thanks for watching


Follow the DIY UNRAID NAS Series on Andysworld!*™

This project is progressing nicely, and each part builds on the last. In Part 6, I move on to storage performance and install 4 × 4 TB Samsung 990 PRO Gen 4 NVMe SSDs for serious throughput.

If you are interested in homelab builds, UNRAID, VMware, or just general tinkering, keep an eye on the rest of the series here on Andysworld!*™.

Thanks for reading and for supporting the site.

HOW TO: Synchronize Changes in a Linux P2V with VMware vCenter Converter Standalone 9.0 (Part 101)

Thursday, November 27th, 2025

If you’ve ever attempted a P2V migration using VMware vCenter Converter Standalone 9.0, you’ll know that the product can be as unpredictable as a British summer. One minute everything looks fine, the next minute you’re stuck at 91%, the Helper VM has thrown a wobbly, and the Estimated Time Remaining has declared itself fictional.

And yet… when it works, it really works.

This post is the follow-up to Part 100: HOW TO: P2V a Linux Ubuntu PC, where I walked through the seed conversion. In Part 101, I push things further and demonstrate how to synchronize changes — a feature newly introduced for Linux sources in Converter 9.0.

I won’t sugar-coat it: recording this episode took over 60 hours, spread across five days, with 22 hours of raw footage just to create a 32-minute usable video. Multiple conversion attempts failed, sequences broke, the change tracker stalled, and several recordings had to be completely redone. But I was determined to prove that the feature does work — and with enough perseverance, patience, and the power of video editing, the final demonstration shows a successful, validated P2V Sync Changes workflow.


Why Sync Changes Matters

Traditionally, a P2V conversion requires a maintenance window or downtime. After the initial seed conversion, any new data written to the source must be copied over manually, or the source must be frozen until cutover.

Converter 9.0 introduces a long-requested feature for Linux environments:

Synchronize Changes

This allows you to:

  • Perform an initial seed P2V conversion

  • Keep the source machine running

  • Replicate only the delta changes

  • Validate the final migration before cutover

It’s not quite Continuous Replication, but it’s closer than we’ve ever had from VMware’s free tooling.


Behind the Scenes: The Reality of Converter 9.0

Converter 9.0 is still fairly new, and “quirky” is an understatement.

Some observations from extensive hands-on testing:

  • The Helper VM can misbehave, especially around networking

  • At 91%, the Linux change tracker often stalls

  • The job status can report errors even though the sync completes

  • Estimated Time Remaining is not to be trusted

  • Each sync job creates a snapshot on the destination VM

  • Converter uses rsync under the hood for Linux sync

Despite all this, syncing does work — it’s just not a single-click process.


Step-by-Step Overview

Here’s the condensed version of the procedure shown in the video:

  1. Start a seed conversion (see Part 100).

  2. Once complete, use SSH on the source to prepare a 10GB test file for replication testing.

  3. Run an MD5 checksum on the source file.

  4. Select Synchronize Changes in Converter.

  5. Let the sync job run — and don’t panic at the 91% pause.

  6. Review any warnings or errors.

  7. Perform a final synchronization before cutover.

  8. Power off the source, power on the destination VM.

  9. Verify the replicated file using MD5 checksum on the destination.

  10. Celebrate when the checksums match — Q.E.D!
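The checksum comparison in steps 3 and 9 is plain md5sum run on both sides. A small self-contained sketch of the same validation (using a 10 MB file and a local copy in place of the 10 GB file and the synced VM from the video):

```shell
# Create a test file standing in for the data written after the seed conversion
dd if=/dev/urandom of=/tmp/sync-test.bin bs=1M count=10 2>/dev/null

# Checksum on the "source"
SRC_SUM=$(md5sum /tmp/sync-test.bin | awk '{print $1}')

# Simulate the synchronised copy on the "destination"
cp /tmp/sync-test.bin /tmp/sync-test-copy.bin
DST_SUM=$(md5sum /tmp/sync-test-copy.bin | awk '{print $1}')

# Q.E.D. when they match
if [ "$SRC_SUM" = "$DST_SUM" ]; then echo "MATCH"; else echo "MISMATCH"; fi
```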


Proof of Success

In the final verification during filming:

  • A 10GB file was replicated

  • Both source and destination MD5 checksums matched

  • The Linux VM booted cleanly

  • Snapshot consolidation completed properly

Despite five days of interruptions, failed jobs, and recording challenges, the outcome was a successful, consistent P2V migration using Sync Changes.


Watch the Full Video (Part 101)

If you want to see the whole process — the setup, the problems, the explanations, the rsync behaviour, and the final success — the full video is now live on my YouTube channel:

Part 101: HOW TO: Synchronize Changes using VMware vCenter Converter Standalone 9.0

If you missed the previous part, you can catch up here:
Part 100: HOW TO: P2V a Linux Ubuntu PC Using VMware vCenter Converter Standalone 9.0


Final Thoughts

This video was one of the most challenging pieces of content I’ve created. But the end result is something I’m genuinely proud of — a real-world demonstration of a feature that many administrators will rely on during migrations, especially in environments where downtime is limited.

Converter 9.0 may still have rough edges, but with patience, persistence, and a bit of luck, it delivers.

Thanks for reading — and as always, thank you for supporting Andysworld!
Don’t forget to like, share, or comment if you found this useful.

Part 100: HOW TO: P2V A Linux Ubuntu PC using VMware vCenter Converter Standalone 9.0

Wednesday, November 19th, 2025

 

 

HOWTO: P2V a Linux Ubuntu PC Using VMware vCenter Converter Standalone 9.0

Migrating physical machines into virtual environments continues to be a key task for many administrators, homelabbers, and anyone modernising older systems. With the release of VMware vCenter Converter Standalone 9.0, VMware has brought back a fully supported, modernised, and feature-rich toolset for performing P2V (Physical-to-Virtual) conversions.

In this post, I walk through how to P2V a powered-on Ubuntu 22.04 Linux PC, using Converter 9.0, as featured in my recent Hancock’s VMware Half Hour episode.

This guide covers each stage of the workflow, from configuring the source Linux machine to selecting the destination datastore and reviewing the final conversion job. Whether you’re prepping for a migration, building a new VM template, or preserving older hardware, this step-by-step breakdown will help you get the job done smoothly.


Video Tutorial

If you prefer to follow along with the full step-by-step, the episode is on my YouTube channel, Hancock's VMware Half Hour.


What’s New in VMware vCenter Converter Standalone 9.0?

  • A refreshed and modern UI
  • Improved compatibility with modern Linux distributions
  • Updated helper VM for Linux conversions
  • Support for newer ESXi and vSphere versions
  • Better overall performance and reliability
  • Linux P2V via passwordless sudo-enabled accounts

This makes it far easier to bring physical Linux workloads into your virtual infrastructure.
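The passwordless-sudo requirement mentioned above is normally met with a drop-in sudoers file on the source machine. A hypothetical example for a user named convuser (the username is illustrative, and the entry should be removed once the migration is complete):

```
# /etc/sudoers.d/converter  (create with: visudo -f /etc/sudoers.d/converter)
convuser ALL=(ALL) NOPASSWD: ALL
```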


Full Tutorial Breakdown (Step-by-Step)

Below is a summary of all the steps demonstrated in the video:

  • Step 1 — Open Converter & Select “Convert Machine”
  • Step 2 — Choose “Powered On”
  • Step 3 — Detect Source Machine
  • Step 4 — Select “Remote Linux Machine”
  • Step 5 — Enter FQDN of the Linux PC
  • Step 6 — Use a passwordless sudo-enabled user account
  • Step 7 — Enter the password
  • Step 8 — Proceed to the next stage
  • Step 9 — Enter ESXi or vCenter Server FQDN
  • Step 10 — Authenticate with username and password
  • Step 11 — Continue
  • Step 12 — Name your destination VM
  • Step 13 — Choose datastore & VM hardware version
  • Step 14 — Go to the next screen
  • Step 15 — TIP: Avoid making unnecessary changes!
  • Step 16 — Next
  • Step 17 — Review settings and click “Finish”
  • Step 18 — Monitor the conversion job
  • Step 19 — Review Helper VM deployment on ESXi
  • Step 20 — Cloning process begins
  • Step 21 — Converter best practices & tips
  • Step 22 — Conversion reaches 98%
  • Step 23 — Conversion reaches 100%
  • Step 24 — Disable network on the destination VM
  • Step 25 — Power on the VM
  • Step 26 — Teaser: Something special about Brother 52 (esxi052)!

Why Disable the Network Before First Boot?

Doing this avoids:

  • IP conflicts
  • Hostname duplication
  • Duplicate MAC address issues
  • Unwanted services broadcasting from the cloned system

After confirming the VM boots correctly, you can safely reconfigure networking inside the guest.
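If the destination host is managed with PowerCLI, the adapter can also be disconnected before first boot from the command line. A sketch, where the VM name 'ubuntu-p2v' stands in for whatever you chose in Step 12:

```powershell
# Disconnect the NIC and stop it reconnecting at power-on
Get-VM 'ubuntu-p2v' | Get-NetworkAdapter |
    Set-NetworkAdapter -Connected:$false -StartConnected:$false -Confirm:$false
```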


Final Thoughts

VMware vCenter Converter Standalone 9.0 brings P2V workflows back into the modern VMware ecosystem. With full Linux support—including Ubuntu 22.04—it’s easier than ever to migrate physical workloads into vSphere.

If you’re maintaining a homelab, doing DR planning, or preserving old systems, Converter remains one of the most valuable free tools VMware continues to offer.

Stay tuned — the next video showcases something special about Brother 52 (esxi052) that you won’t want to miss!


Don’t Forget!

  • Like the video
  • Subscribe to Hancock’s VMware Half Hour
  • Leave a comment — What P2V tutorial should I do next?

Part 1: Building a DIY NVMe NAS with the Intel NUC 11 Extreme (Beast Canyon)

Saturday, November 15th, 2025

 

Part 1: The Hardware Build

Welcome to AndysWorld.org.uk! Today we’re diving into a project that’s perfect for anyone looking to build a powerful yet compact DIY Network-Attached Storage (NAS) solution. In this post, I’ll walk you through the first part of building a ‘MEGA’ NVMe NAS using the Intel NUC 11 Extreme (Beast Canyon). This mini-PC crams serious hardware into a small chassis, making it a great choice for a NAS build, especially when combined with UnRAID to handle storage and virtualization.


Why Choose the Intel NUC 11 Extreme for a NAS?

If you’ve been looking into NAS setups, you know the balance between power, size, and expandability is crucial. The Intel NUC 11 Extreme (Beast Canyon) checks all the right boxes, offering:

  • Compact Form Factor: It’s a small but powerful solution that doesn’t take up much space.

  • High-Performance NVMe Support: NVMe drives provide incredibly fast data transfer speeds—perfect for a NAS that needs to handle heavy workloads.

  • Flexibility for Virtualization: With UnRAID, you can set up multiple virtual machines, containers, and storage arrays, making it a versatile solution for any home or small office.

For this build, we’re focusing on using NVMe storage for high-speed access to files and a 64GB Kingston Fury DDR4 RAM kit to ensure smooth performance under load.


What You’ll Need for This Build:

  • Intel NUC 11 Extreme (Beast Canyon)

  • 64GB Kingston Fury DDR4 RAM

  • 2 x 512GB XPG GAMMIX NVMe SSDs

  • UnRAID Operating System

  • A few basic tools for assembly (screwdriver, anti-static mat, etc.)

If you’ve never worked with the Intel NUC before, don’t worry! I’ll guide you through every step of the assembly process. Let’s get into it!


Step-by-Step Build Process:

1. Unboxing the Intel NUC 11 Extreme

First things first, let’s unbox the Intel NUC 11 Extreme (Beast Canyon). When you open the box, you’ll find the compact, sleek chassis, which packs quite a punch for such a small form factor. This NUC is equipped with an 11th Gen Intel Core i7 processor and can support a variety of high-speed storage options, including NVMe SSDs.

2. Installing the RAM and NVMe Drives

With the NUC unboxed, the next step is to install the Kingston Fury RAM and XPG GAMMIX NVMe SSDs. Be careful during installation—especially with the tiny NVMe screws! The NUC has an easy-to-access compute board where both the RAM and NVMe drives will fit.

  • Installing the RAM: Simply slot the 64GB Kingston Fury DDR4 RAM sticks into the dedicated slots, making sure they’re fully seated.

  • Installing the NVMe SSDs: These go directly onto the motherboard and can be secured using small screws. Be sure to handle them gently as the connectors are quite delicate.

3. Reassembling the NUC

Once the RAM and NVMe drives are installed, it’s time to reassemble the NUC. This involves:

  • Reattaching the fan tray and shroud

  • Reinstalling the side and back panels

At this stage, everything should feel secure and ready for the next steps.


Why NVMe Storage for a NAS?

NVMe drives are game-changers when it comes to NAS storage. Here’s why:

  • Speed: NVMe offers lightning-fast read/write speeds compared to SATA SSDs or traditional HDDs. For anyone who works with large files or needs to serve data quickly, NVMe is a must.

  • Future-Proofing: As file sizes and workloads keep growing, NVMe ensures your NAS storage won’t become the bottleneck for years to come.

  • Reliability: NVMe drives have no moving parts, making them less prone to mechanical failure than traditional spinning hard drives.
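To put the speed difference into perspective, here is a quick back-of-the-envelope calculation. The throughput figures are typical published sequential-read numbers for each class of drive, not benchmarks from this build:

```python
# Rough time to read a 50 GB media library at typical sequential
# throughputs (illustrative figures, not measurements of this NAS).
SIZE_GB = 50

throughput_mb_s = {
    "HDD (7200 rpm)": 180,
    "SATA SSD": 550,
    "NVMe SSD (PCIe 3.0)": 3500,
}

for device, mb_s in throughput_mb_s.items():
    seconds = SIZE_GB * 1000 / mb_s  # GB -> MB, then divide by MB/s
    print(f"{device:>20}: {seconds:6.1f} s")
```

At these rates the same 50 GB takes roughly four and a half minutes from a hard disk but only around fifteen seconds from a PCIe 3.0 NVMe drive, which is why NVMe matters for a NAS serving large files.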


What’s Next?

Now that we’ve completed the hardware installation, in the next post, we’ll dive into setting up UnRAID on the NUC. UnRAID will allow us to easily configure our storage arrays, virtual machines, and containers—all from a user-friendly interface. Stay tuned for Part 2, where we’ll cover configuring the software, optimizing the NAS, and making sure everything runs smoothly.


Helpful Resources:

To help you along the way, I recommend checking out the blog posts from two experts in the field.


Wrapping Up

This build was just the beginning! The Intel NUC 11 Extreme provides an excellent foundation for a fast, reliable NAS. With NVMe storage and the flexibility of UnRAID, you can build a high-performance system that’s both versatile and compact.

What do you think of this build? Have you used the Intel NUC for similar projects? Drop a comment below or connect with me on social media—I’d love to hear about your experiences!


Follow Andy’s World for More DIY Tech Projects
Don’t forget to check out the latest posts and tutorials on AndysWorld.org.uk to keep up with all things tech and DIY. Happy building!