Posts Tagged ‘ESXi’

HOW TO: Synchronize Changes in a Linux P2V with VMware vCenter Converter Standalone 9.0 (Part 101)

Thursday, November 27th, 2025

If you’ve ever attempted a P2V migration using VMware vCenter Converter Standalone 9.0, you’ll know that the product can be as unpredictable as a British summer. One minute everything looks fine, the next minute you’re stuck at 91%, the Helper VM has thrown a wobbly, and the Estimated Time Remaining has declared itself fictional.

And yet… when it works, it really works.

This post is the follow-up to Part 100: HOW TO: P2V a Linux Ubuntu PC, where I walked through the seed conversion. In Part 101, I push things further and demonstrate how to synchronize changes — a feature newly introduced for Linux sources in Converter 9.0.

I won’t sugar-coat it: recording this episode took over 60 hours, spread across five days, with 22 hours of raw footage just to create a 32-minute usable video. Multiple conversion attempts failed, sequences broke, the change tracker stalled, and several recordings had to be completely redone. But I was determined to prove that the feature does work — and with enough perseverance, patience, and the power of video editing, the final demonstration shows a successful, validated P2V Sync Changes workflow.


Why Sync Changes Matters

Traditionally, a P2V conversion requires a maintenance window or downtime. After the initial seed conversion, any new data written to the source must be copied over manually, or the source must be frozen until cutover.

Converter 9.0 introduces a long-requested feature for Linux environments:

Synchronize Changes

This allows you to:

  • Perform an initial seed P2V conversion

  • Keep the source machine running

  • Replicate only the delta changes

  • Validate the final migration before cutover

It’s not quite Continuous Replication, but it’s closer than we’ve ever had from VMware’s free tooling.


Behind the Scenes: The Reality of Converter 9.0

Converter 9.0 is still fairly new, and “quirky” is an understatement.

Some observations from extensive hands-on testing:

  • The Helper VM can misbehave, especially around networking

  • At 91%, the Linux change tracker often stalls

  • The job status can report errors even though the sync completes

  • Estimated Time Remaining is not to be trusted

  • Each sync job creates a snapshot on the destination VM

  • Converter uses rsync under the hood for Linux sync (see the quick check below)

Despite all this, syncing does work — it’s just not a single-click process.
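
If you want to see that rsync behaviour for yourself, a couple of commands on the Ubuntu source make it easy to confirm rsync is present and to spot it running during a sync job. This is just a quick check I find useful rather than anything Converter asks you to do, and the install line assumes an apt-based distro:

    # Confirm rsync is installed on the source (install it on apt-based distros if missing)
    which rsync || sudo apt-get install -y rsync

    # While a Synchronize Changes job is running, watch for rsync activity
    watch -n 5 'pgrep -af rsync'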


Step-by-Step Overview

Here’s the condensed version of the procedure shown in the video:

  1. Start a seed conversion (see Part 100).

  2. Once complete, use SSH on the source to prepare a 10GB test file for replication testing (example commands after this list).

  3. Run an MD5 checksum on the source file.

  4. Select Synchronize Changes in Converter.

  5. Let the sync job run — and don’t panic at the 91% pause.

  6. Review any warnings or errors.

  7. Perform a final synchronization before cutover.

  8. Power off the source, power on the destination VM.

  9. Verify the replicated file using MD5 checksum on the destination.

  10. Celebrate when the checksums match — Q.E.D!
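
For reference, here is roughly what steps 2, 3 and 9 look like at the shell. The file path, name and size are only examples, and the video may create the test file differently; the point is simply that the same checksum must come out of both machines:

    # Step 2 - on the source (over SSH): create a 10GB file of random data
    sudo dd if=/dev/urandom of=/var/tmp/p2v-sync-test.bin bs=1M count=10240 status=progress

    # Step 3 - checksum it on the source
    md5sum /var/tmp/p2v-sync-test.bin

    # Step 9 - after cutover, on the destination VM: checksum the replicated copy
    md5sum /var/tmp/p2v-sync-test.bin
    # Both commands should print the same hash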


Proof of Success

In the final verification during filming:

  • A 10GB file was replicated

  • Both source and destination MD5 checksums matched

  • The Linux VM booted cleanly

  • Snapshot consolidation completed properly

Despite five days of interruptions, failed jobs, and recording challenges, the outcome was a successful, consistent P2V migration using Sync Changes.


Watch the Full Video (Part 101)

If you want to see the whole process — the setup, the problems, the explanations, the rsync behaviour, and the final success — the full video is now live on my YouTube channel:

Part 101: HOW TO: Synchronize Changes using VMware vCenter Converter Standalone 9.0

If you missed the previous part, you can catch up here:
Part 100: HOW TO: P2V a Linux Ubuntu PC Using VMware vCenter Converter Standalone 9.0


Final Thoughts

This video was one of the most challenging pieces of content I’ve created. But the end result is something I’m genuinely proud of — a real-world demonstration of a feature that many administrators will rely on during migrations, especially in environments where downtime is limited.

Converter 9.0 may still have rough edges, but with patience, persistence, and a bit of luck, it delivers.

Thanks for reading — and as always, thank you for supporting Andysworld!
Don’t forget to like, share, or comment if you found this useful.

Part 4: DIY UnRAID NAS – Insert new 10GbE NIC

Saturday, November 22nd, 2025

DIY UnRAID NAS Build – Part 4: Installing a 10GbE Intel X710-DA NIC (Plus an Outtake!)

Welcome back to another instalment of my DIY UnRAID NAS Build series.
If you have been following along, you will know this project is built around an Intel NUC chassis that I have been carefully (and repeatedly!) taking apart to transform into a compact but powerful UnRAID server.

In Part 4, we move on to a major upgrade: installing a 10GbE Intel X710-DA network interface card. And yes, the eagle-eyed among you will notice something unusual at the beginning of the video, because this episode starts with a blooper. I left it in for your entertainment.


A Fun Outtake to Start With

Right from the intro, things get a little chaotic. There is also a mysterious soundtrack playing, and I still do not know where it came from.
If you can identify it, feel free to drop a comment on the video.


Tearing Down the Intel NUC Again

To install the X710-DA NIC, the NUC requires almost complete disassembly:

  • Remove the back plate
  • Remove the backplane retainer
  • Take off the side panels
  • Open the case
  • Remove the blanking plate
  • Prepare the internal slot area

This NUC has become surprisingly modular after taking it apart so many times, but it still puts up a fight occasionally.


Installing the Intel X710-DA 10GbE NIC

Once the case is stripped down, the NIC finally slides into place. It is a tight fit, but the X710-DA is a superb card for a NAS build:

  • Dual SFP+ ports
  • Excellent driver support
  • Great performance in VMware, Linux, and Windows
  • Ideal for high-speed file transfers and VM workloads

If you are building a NAS that needs to move data quickly between systems, this NIC is a great option.


Reassembly

Next, everything goes back together:

  • Side panels reinstalled
  • Back plate fitted
  • Case secured
  • System ready for testing

You would think after doing this several times I would be quicker at it, but the NUC still has a few surprises waiting.


Booting into Windows 11 and Driver Issues

Once everything is reassembled, the NUC boots into Windows 11, and immediately there is a warning:

Intel X710-DA: Not Present

Device Manager confirms it. Windows detects that something is installed, but it does not know what it is.

Time to visit the Intel website, download the correct driver bundle, extract it, and install the drivers manually.

After a reboot, success. The NIC appears correctly and is fully functional.


Why 10GbE

For UnRAID, 10GbE significantly improves:

  • VM migrations
  • iSCSI and NFS performance
  • File transfers
  • Backup times
  • SMB throughput for Windows and macOS clients

It also future-proofs the NAS for later network upgrades.
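
In round numbers, 10GbE offers about ten times the bandwidth of gigabit, so a transfer that was network-bound at ten minutes can drop to roughly one, provided the disks can keep up. If you want to verify what the link actually delivers once the card is in, iperf3 is a common way to check (not something covered in the video, and the address below is a placeholder):

    # On the UnRAID NAS (server side)
    iperf3 -s

    # On a client machine, pointing at the NAS (replace the address with yours)
    iperf3 -c 192.168.1.50 -P 4 -t 30
    # A healthy 10GbE link reports around 9.4 Gbit/s; gigabit tops out near 0.94 Gbit/s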


The Mystery Soundtrack

Towards the end of the video I ask again: what is the music playing in the background?
I genuinely have no idea, so if you recognise it, please leave a comment on the video.


Watch the Episode

You can watch the full episode, including all teardown steps, NIC installation, Windows troubleshooting, and the blooper, here:


Thank You for Watching and Reading

Thank you for following along with this NAS build.
Part 5 will continue the series, so stay tuned.

If you have built your own UnRAID NAS or have a favourite NIC for homelab projects, feel free to comment and share your experience.

Regards,
Andy

60 second migrations! Watch & Learn! Minisforum MS-A2 Hyper-V to Proxmox 9.0 Migration Minisforum MS-A2 Series Part 15 Ultimate #homelab

Wednesday, August 20th, 2025

Minisforum MS-A2 Hyper-V to Proxmox 9.0 Migration Minisforum MS-A2 Series Part 15 Ultimate #homelab

In this episode of Hancock’s VMware Half Hour, I walk you through migrating Hyper-V virtual machines to Proxmox 9.0 on the Minisforum MS-A2. 

We’ll cover connecting to the Proxmox server via SSH, exploring datastores, working with VHDX files, and running migration demos—including moving a full VM in under 60 seconds! This step-by-step guide shows how easy it is to transition workloads from Hyper-V into Proxmox for your #homelab or production environment.
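
The GitHub scripts linked below automate the workflow, but if you just want a feel for what the Proxmox side of a manual import looks like, here is a minimal sketch. The VM ID, file paths, storage name and guest settings are placeholders, and Windows guests usually need the VirtIO drivers (or an IDE/SATA boot disk to start with) before they will boot happily:

    # On the Proxmox host, after copying the VHDX across (e.g. with scp):
    # create an empty VM shell (ID, sizing, bridge and OS type are examples)
    qm create 201 --name migrated-vm --memory 4096 --cores 2 \
      --net0 virtio,bridge=vmbr0 --ostype win11

    # import the VHDX into a Proxmox storage; it is converted on the way in
    qm importdisk 201 /mnt/pve/migration/server01.vhdx local-lvm

    # attach the imported disk, make it bootable, and start the VM
    qm set 201 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-201-disk-0
    qm set 201 --boot order=scsi0
    qm start 201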

Whether you’re testing, learning, or planning a migration, this video gives you the tools and knowledge to make it happen smoothly.

Scripts are here on GitHub – https://github.com/einsteinagogo/Hyper-VtoProxmoxMigration.git

Minisforum MS-A2 Can it Run ESXi 8.0.3g? Minisforum MS-A2 Series Part 10 Ultimate #homelab

Saturday, August 9th, 2025

Can the powerful Minisforum MS-A2 run VMware vSphere 8.0?
In Part 10 of the Ultimate #homelab series, we put this compact beast to the test by installing VMware vSphere Hypervisor ESXi 8.0.3g and seeing how it performs. From BIOS setup to creating a demo virtual machine, this episode covers the full journey.

What’s Inside This Video:

  • Installing ESXi 8.0.3g on the Minisforum MS-A2

  • BIOS configuration & USB boot with Ventoy

  • Full ESXi setup walkthrough

  • Creating & running a test VM

  • Enabling NVMe Memory Tiering with NVMe namespaces (see the sketch below)

  • Checking performance and confirming a successful install

If you’ve been wondering whether the MS-A2 can handle serious VMware workloads in a home lab, this is the episode to watch!
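
For the NVMe Memory Tiering part, the commands below are the tech-preview steps as documented for ESXi 8.0 Update 3, which is my assumption for how it is enabled on 8.0.3g as well; the device path and tier percentage are placeholders, so check the current release notes before relying on them:

    # Enable the Memory Tiering kernel setting, then reboot the host
    esxcli system settings kernel set -s MemoryTiering -v TRUE

    # Claim an unused NVMe device as a tier device
    # (find the device path with: esxcli storage core device list)
    esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____example_device

    # Size the NVMe tier as a percentage of installed DRAM (400% shown as an example)
    esxcli system settings advanced set -o /Mem/TierNvmePct -i 400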

Minisforum MS-A2 Migrate ESXi VMs to Hyper-V, Minisforum MS-A2 Series Part 6 Ultimate #homelab

Thursday, July 31st, 2025

In Part 6 of the Minisforum MS-A2 Series, we show you how to migrate VMware ESXi Virtual Machines (VMs) to Microsoft Hyper-V on Windows Server 2025 — using the powerful and compact Minisforum MS-A2 as the ultimate homelab platform.

This video features Veeam Backup & Replication v12.3 to safely back up your ESXi VMs and restore them directly to Hyper-V. It’s a clean and efficient migration method for anyone exploring life after VMware.

Whether you’re planning a full platform switch or testing a hybrid setup, you’ll find practical, step-by-step guidance from backup to restore — with key gotchas and tips throughout.

In this episode, you’ll learn:

  • Preparing VMware ESXi VMs for migration

  • Creating backups using Veeam v12.3

  • Restoring backups to Microsoft Hyper-V

  • Configuring networking, storage, and integration services

  • Post-migration testing and optimization

  • Real-world advice for homelabbers and IT professionals

Perfect for #homelab enthusiasts, sysadmins, and IT pros evaluating alternatives to VMware.
Got questions or want to share your experience? Drop a comment below!

Like this video if it helped you
Subscribe and hit the bell to follow the full MS-A2 homelab journey

Minisforum MS-A2 – The Ultimate #Homelab Server for VMware vSphere, VVF, and VCF?

Monday, June 30th, 2025

Lately, it feels like every VMware vExpert has been posting photos of their compact lab servers — and I’ll be honest, I was starting to feel left out.

So, I joined the club.

I picked up the new Minisforum MS-A2, and I’ve not looked back. This isn’t just another NUC alternative — it’s a serious powerhouse in a tiny chassis, perfect for VMware enthusiasts building or upgrading their vSphere, VVF, or VCF test environments.

Let’s dig into what makes this little beast a perfect addition to any #homelab setup in 2025.

Hardware Highlights – Not Your Average Mini PC
The MS-A2 isn’t just punching above its weight — it’s redefining what’s possible in a compact lab node.

Key Specs:

  • CPU: AMD Ryzen™ 9 9955HX – 16 cores / 32 threads of Zen 5 power

  • Memory: Dual DDR5-5600MHz SODIMM slots – up to 96GB officially, but…

  • Storage:
      • 3× M.2 PCIe 4.0 slots (22110 supported)
      • Supports U.2 NVMe – great for enterprise-grade flash

  • Networking:
      • Dual 10Gbps SFP+ LAN
      • Dual 2.5GbE RJ45 ports
      • Wi-Fi 6E + Bluetooth 5.3 (going to replace this with more NVMe storage!)

  • Expansion:
      • Built-in PCIe x16 slot (supports split mode – ideal for GPUs, HBAs, or NICs)

This is homelab gold. It gives you the raw compute of a full rack server, the storage flexibility of a SAN box, and the network fabric of a modern datacenter — all under 2L in size.

How I Configured Mine – still sealed in box as I write – video incoming!
I purchased mine barebones from Amazon, and — as of writing — it’s still sealed in the box. Why? I’m waiting for all the parts to arrive.

Most importantly, I’ll be upgrading it with:
128GB of Crucial DDR5-5600 SODIMMs (2×64GB) — pushing beyond the official spec to see just how much performance this little box can handle.

Once everything’s here, I’ll be unboxing and assembling it live on a future episode of Hancock’s VMware Half Hour. Stay tuned if you want a front-row seat to the full setup, testing, and VMware lab deployment.

Perfect for VMware Labs: vSphere 8/9, VVF, and VCF
Whether you’re testing ESXi on bare metal or running full nested labs, this spec ticks every box.

ESXi Bare Metal Capable
The Ryzen 9 9955HX and AMD chipset boot vSphere 8.0U2 and 9.0 Tech Preview cleanly with minimal tweaks. Use community networking drivers or USB NIC injectors if needed.

VVF / VCF in a Box
If you’re exploring VMware vSphere Foundation (VVF) or want a self-contained VCF lab for learning:

  • 16C/32T lets you run nested 3-node ESXi clusters + vCenter + NSX-T comfortably

  • 128GB RAM gives breathing room for resource-heavy components like SDDC Manager

  • PCIe 4.0 + U.2 = blazing fast vSAN storage

  • Dual 10Gb SFP+ = NSX-T overlay performance, lab-ready

Community Validation – I Was Late to the Party
Fellow vExpert Daniel Krieger was ahead of the curve — writing about the MS-A2 months ago in his excellent blog post here:
sdn-warrior.org/posts/ms-a2

Then vExpert William Lam added his voice to the conversation with a guide to running VMware Cloud Foundation (VCF) on the MS-A2:
williamlam.com/2025/06/vmware-cloud-foundation-vcf-on-minisforum-ms-a2.html

Seeing both of them validate the MS-A2 pushed me over the edge — and I’m glad I jumped in.

Setup Tips (Soon!)
Once the unboxing is done, I’ll share:

  • BIOS tweaks: SVM, IOMMU, PCIe bifurcation

  • NIC setup for ESXi USB fling and 10GbE DAC

  • Storage layout for vSAN and U.2/NVMe configs

  • Full nested VCF/VVF deployment guide

Considerations
  • Still not on the official VMware HCL — but community-tested

  • Ryzen platform lacks ECC memory — standard for most mini-PC builds

  • PCI passthrough needs thoughtful planning for IOMMU groupings

Ideal Use Cases
  • Nested ESXi, vSAN, vCenter, NSX labs

  • VVF deployment simulations

  • VCF lifecycle manager testing

  • Tanzu Kubernetes Grid

  • NSX-T Edge simulations on 10GbE

  • GPU or high-speed NIC via PCIe slot for advanced lab scenarios

Final Thoughts
The Minisforum MS-A2 with Ryzen 9 9955HX is a serious contender for the best compact homelab system of 2025. Whether you’re diving into vSphere 9, experimenting with VVF, or simulating a full VCF environment, this mini server brings serious firepower.

It may still be in the box for now — but soon, it’ll be front and center on Hancock’s VMware Half Hour, ready to power the next chapter of my lab.

Join the Conversation
Got an MS-A2 or similar mini-monster? Share your specs, test results, or VMware experience — and tag it:

#VMware #vSphere #VCF #VVF #homelab #MinisforumMSA2 #10GbE #vExpert

HOW TO: Fix Raspberry Pi CM5 framebuffer Issue with ESXi 8.0.3 ARM NOW!

Saturday, June 14th, 2025

Are you tired of dealing with the Raspberry Pi 5 frame buffer issue when running ESXi ARM? In this video, we’ll show you a step-by-step guide on how to fix this frustrating problem and get your Raspberry Pi 5 up and running smoothly with ESXi ARM. Whether you’re a hobbyist or a professional, this tutorial is perfect for anyone looking to troubleshoot and resolve the frame buffer issue on their Raspberry Pi 5. So, what are you waiting for? Let’s dive in and get started!

HOW TO: Configure & Install VMware ESXi ARM 8.0.3b on Raspberry Pi CM4 installed on a Turing Pi v2 Mini ITX Clusterboard | FULL MEGA GUIDE

Tuesday, December 3rd, 2024

Welcome to Hancock’s VMware Half Hour! This is the Full Monty Version, the MEGA Full Movie on configuring and installing VMware vSphere Hypervisor ESXi ARM 8.0.3b on a Raspberry Pi Compute Module 4. The CM4 is installed in a Turing Pi v2 Mini ITX Clusterboard, delivering a compact and powerful platform for ARM virtualization.

In this 1 hour and 19-minute guide, I’ll take you step-by-step through every detail, covering:

  • Demonstrating Raspberry Pi OS 64-bit booting on the CM4.

  • Creating and installing the ESXi ARM UEFI boot image.

  • Configuring iSCSI storage using a Synology NAS (see the sketch after this list).

  • Setting up ESXi ARM with licensing, NTP, and NFS storage.

  • A full walkthrough of PXE booting and TFTP configuration.

  • Netbooting the CM4 and finalizing the ESXi ARM environment.

  • Flashing the BMC firmware.

  • Replacing the self-signed Turing Pi v2 SSL certificate with a certificate from Microsoft Certificate Services.
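
As a taste of the iSCSI step, the esxcli commands below sketch the software iSCSI setup; the adapter name and the Synology's address are placeholders, and the video walks through the full configuration including the Synology side:

    # Enable the software iSCSI adapter
    esxcli iscsi software set --enabled=true

    # Find the software iSCSI adapter name (often vmhba64, but check your host)
    esxcli iscsi adapter list

    # Point dynamic discovery at the Synology target (address is a placeholder)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.20:3260

    # Rescan so the new LUNs appear
    esxcli storage core adapter rescan --adapter=vmhba64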



Exploring ESXi ARM Fling v2.0 with the Turing Pi Mini ITX Board

Tuesday, November 26th, 2024

As an avid enthusiast of VMware’s innovations, I’m diving headfirst into the ESXi ARM Fling v2.0, which is built on the robust VMware vSphere Hypervisor ESXi 8.0.3b codebase. The ARM architecture has always intrigued me, and with this latest version, VMware has pushed the boundaries of what’s possible with ESXi on ARM devices. It’s a playground full of potential for anyone experimenting with lightweight, power-efficient infrastructures.


The Turing Pi Journey

After much anticipation, my Turing Pi Mini ITX boards have arrived! These boards are compatible with the Raspberry Pi Compute Module 4, offering a modular, scalable setup perfect for ARM experimentation. With a few Compute Module 4s ready to go, I’m eager to bring this setup to life. However, finding a suitable case for the Turing Pi board has proven to be a bit of a challenge.

Case Conundrum

While Turing Pi has announced an official ITX case for their boards, it’s currently on preorder and comes with a hefty price tag. For now, I’ve decided to go with a practical and versatile option: the Streamcom Mini ITX OpenBench case. Its open-frame design is functional, and it’ll keep the board accessible during testing and configuration.

I’m also considering crafting my own custom case. Using laser-cut wood or acrylic is an appealing option, offering the opportunity to create something unique and tailored to my specific requirements. But for now, the OpenBench case will do nicely as I explore the ESXi ARM Fling.

Why ESXi ARM Fling v2.0?

The ESXi ARM Fling project is an exciting venture for anyone who loves to experiment with virtualization. Running ESXi on ARM hardware offers several advantages:

  • Energy efficiency: ARM boards consume far less power compared to traditional x86 systems.
  • Cost-effectiveness: Affordable hardware like the Raspberry Pi Compute Module 4 makes it accessible to a wider audience.
  • Flexibility: The compact form factor of ARM devices is ideal for edge computing, IoT, or even small-scale home labs.

The v2.0 update introduces enhanced support, better performance, and bug fixes, making it an excellent choice for exploring the ARM ecosystem.

What’s Next?

With the hardware in hand and the ESXi ARM Fling v2.0 ready to install, I’m planning to dive into:

  1. Setting up and configuring the Turing Pi board with ESXi.
  2. Testing the system’s stability, performance, and scalability using multiple Raspberry Pi Compute Modules.
  3. Exploring practical use cases, such as lightweight Kubernetes clusters or edge computing applications.

I’ll share updates on the build process, challenges, and performance insights in future posts. For now, I’m excited to get started and see what this setup can achieve.

Stay tuned for more! If you’ve experimented with the ESXi ARM Fling or have tips for working with the Turing Pi board, I’d love to hear from you.