Posts Tagged ‘cluster’

Part 7: DIY UNRAID NAS “BY THE POWER OF UNRAID” THE SECRET REVEALED

Saturday, December 6th, 2025

By The Power Of UnRAID – The Secret Reveal Of ESXi And Windows 11 VMs

For the last few episodes of Hancock’s VMware Half Hour, we have been quietly building something a little different.
On the surface it looked like a simple DIY UNRAID NAS project and a couple of Windows 11 P2V demonstrations.
In reality, everything was running inside virtual machines on an UnRAID host.

In Part 7 of the DIY UNRAID NAS series, we finally pull back the curtain and reveal what has really been powering the lab:
UnRAID running nested ESXi and Windows 11 VMs, complete with PCI passthrough.
This post walks through the idea behind the episode, how it ties back to earlier parts, and why I keep saying,
“By the power of UnRAID.”

Recap: Parts 6, 100 and 101

If you have been following along you will have seen:

  • Part 6 – Installing and testing Samsung 990 PRO NVMe drives in the Intel NUC based NAS.
  • Part 100 – Performing P2V migrations of Windows 11 systems.
  • Part 101 – Continuing the Windows 11 P2V work and refining the process.

In those episodes the star of the show appeared to be a physical Windows 11 machine and a separate ESXi host called ESXi052.
In Part 7 we reveal that this was deliberately misleading. Both the Windows 11 system and the ESXi host were in fact virtual machines.

The Secret: Everything Was A Virtual Machine

Part 7 opens by jumping back to those previous episodes and then revealing the twist:

  • The “physical” Windows 11 machine you saw on screen was actually a Windows 11 VM.
  • The ESXi host ESXi052 that we used for P2V work was also a VM.
  • The same VM was used in Part 6 when we installed and tested the NVMe drives.

In other words, the entire recent run of content has been driven by virtual machines on UnRAID.
The NVMe upgrades, the Windows 11 P2Vs, and the ESXi demonstrations were all happening inside VMs, not on bare metal.

Windows 11 With PCI Passthrough

One of the key enabling features in this setup is PCI passthrough on UnRAID.
By passing through hardware devices such as NVMe controllers or GPUs directly into a Windows 11 VM,
we can test and demonstrate “bare metal like” performance while still keeping everything virtual.

In the video we show Windows 11 running with PCI passthrough on UnRAID, giving the VM direct access to the hardware.
This is ideal for lab work, testing, and for scenarios where you want to push a homelab system without dedicating separate physical machines.
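As a rough illustration of the groundwork on the UnRAID (Linux) side, these are the kind of checks run before passing a device through to a VM; the PCI address used here is purely illustrative:

```shell
# On the UnRAID host: confirm the IOMMU is active, then locate the
# device to pass through. The address 0000:01:00.0 is an example only.

# 1. Check that the kernel brought up the IOMMU
dmesg | grep -i -e DMAR -e IOMMU

# 2. Find the NVMe controller and its vendor:device IDs
lspci -nn | grep -i nvme

# 3. See which other devices share its IOMMU group (devices in the
#    same group generally have to be passed through together)
ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/
```

On UnRAID itself the actual binding to the VM is then done through the web UI rather than by hand.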

Nested ESXi 8.0 On UnRAID

The next part of the reveal is nested virtualization.
UnRAID is hosting a VMware vSphere Hypervisor ESXi 8.0 VM, which in turn can run its own VMs.
This gives an incredibly flexible environment:

  • UnRAID manages the storage, cache, parity and core virtual machine scheduling.
  • ESXi runs nested on top for VMware specific testing and lab work.
  • Windows 11 runs as another VM on the same UnRAID host, with PCI passthrough as needed.

With this approach a single Intel NUC based NAS can simulate a much larger lab
while still being compact and power efficient.
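UnRAID's VM engine is KVM/QEMU, so the nesting prerequisite can be sketched as a quick check on the host; the paths below assume an Intel CPU:

```shell
# On the UnRAID host: an ESXi guest can only run its own VMs if the
# KVM module exposes nested virtualization. Intel host assumed.
cat /sys/module/kvm_intel/parameters/nested    # "Y" or "1" means enabled

# If it reports "N", it can be enabled via a modprobe option (takes
# effect once the module is reloaded, e.g. after stopping all VMs):
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
```

The ESXi VM also needs the host CPU's virtualization features passed through in its CPU configuration.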

By The Power Of UnRAID

To celebrate the reveal I created a fun meme inspired by the classic “By the power of Grayskull” scene.
In our version, “By the power of UnRAID” raises ESXi and Windows 11 high above the NUC,
showing that UnRAID is the platform empowering the whole setup.

Whether you are running nested ESXi, Windows 11 with PCI passthrough, or a mixture of containers and VMs,
UnRAID makes it straightforward to combine storage flexibility with powerful virtualization features.

The Power Of UnRAID In The Homelab

The big takeaway from Part 7 is simple:

  • A single UnRAID host can consolidate multiple roles: NAS, hypervisor, and workstation.
  • You can experiment with ESXi 8.0, Windows 11, and PCI passthrough without building a large rack of servers.
  • By keeping everything virtual you gain snapshots, flexibility, and the ability to rebuild or clone systems quickly.

For homelab enthusiasts, students, and anyone who wants to learn VMware or Windows 11 in depth,
this approach offers a lot of power in a very small footprint.

Watch The Episode

If you want to see the full walkthrough, including the moment the secret is revealed,
watch Part 7 of the DIY UNRAID NAS series on Hancock’s VMware Half Hour.
You will see exactly how the Windows 11 VM, the nested ESXi host, and UnRAID all fit together.

Conclusion

Part 7 closes the loop on a long running lab story.
What looked like separate physical systems were really virtual machines,
carefully layered on top of an UnRAID powered NAS.
By the power of UnRAID, we have been able to demonstrate NVMe upgrades, Windows 11 P2Vs, and ESXi 8.0 testing
all on a single compact platform.

If you are planning a new homelab or want to refresh an existing one,
consider what UnRAID plus nested ESXi and Windows 11 VMs could do for you.

60 second migrations! Watch & Learn! Minisforum MS-A2 Hyper-V to Proxmox 9.0 Migration Minisforum MS-A2 Series Part 15 Ultimate #homelab

Wednesday, August 20th, 2025

Minisforum MS-A2 Hyper-V to Proxmox 9.0 Migration Minisforum MS-A2 Series Part 15 Ultimate #homelab
In this episode of Hancock’s VMware Half Hour, I walk you through migrating Hyper-V virtual machines to Proxmox 9.0 on the Minisforum MS-A2. 

We’ll cover connecting to the Proxmox server via SSH, exploring datastores, working with VHDX files, and running migration demos, including moving a full VM in under 60 seconds! This step-by-step guide shows how easy it is to transition workloads from Hyper-V into Proxmox for your #homelab or production environment.
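The full scripts are linked below on GitHub; purely as an illustration of the core import step, moving a copied-over VHDX into a Proxmox VM looks along these lines (the VM ID, path, and storage name are example values, not from the video):

```shell
# Illustrative only: import a VHDX that has already been copied to the
# Proxmox host. VM ID 100, the path, and "local-lvm" are examples.

# Inspect the source image first
qemu-img info /mnt/migration/winvm.vhdx

# Import the VHDX as a new disk on VM 100; Proxmox converts it to the
# target storage's native format during the import.
qm importdisk 100 /mnt/migration/winvm.vhdx local-lvm

# Attach the imported disk and make it the boot device
qm set 100 --sata0 local-lvm:vm-100-disk-0 --boot order=sata0
```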

Whether you’re testing, learning, or planning a migration, this video gives you the tools and knowledge to make it happen smoothly.

Scripts are here on GitHub – https://github.com/einsteinagogo/Hyper-VtoProxmoxMigration.git

HOW TO: Configure & Install VMware ESXi ARM 8.0.3b on Raspberry Pi CM4 installed on a Turing Pi v2 Mini ITX Clusterboard | FULL MEGA GUIDE

Tuesday, December 3rd, 2024

Welcome to Hancock’s VMware Half Hour! This is the Full Monty Version, the MEGA Full Movie on configuring and installing VMware vSphere Hypervisor ESXi ARM 8.0.3b on a Raspberry Pi Compute Module 4. The CM4 is installed in a Turing Pi v2 Mini ITX Clusterboard, delivering a compact and powerful platform for ARM virtualization.

In this 1 hour and 19-minute guide, I’ll take you step-by-step through every detail, covering:

  • Demonstrating Raspberry Pi OS 64-bit booting on the CM4.
  • Creating and installing the ESXi ARM UEFI boot image.
  • Configuring iSCSI storage using a Synology NAS.
  • Setting up ESXi ARM with licensing, NTP, and NFS storage.
  • A full walkthrough of PXE booting and TFTP configuration.
  • Netbooting the CM4 and finalizing the ESXi ARM environment.
  • Flashing the BMC firmware.
  • Replacing the self-signed Turing Pi v2 SSL certificate with a certificate from Microsoft Certificate Services.
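To give a flavour of the CLI side of the NTP and NFS steps, commands along these lines can be run against the ESXi ARM host over SSH (the NTP server, NAS hostname, share, and datastore name here are examples, not the lab's real values):

```shell
# Example values throughout: adjust the server, host, share and
# volume name for your own environment.

# Point the host at an NTP source and enable the service
esxcli system ntp set --server=pool.ntp.org --enabled=true

# Mount an NFS export from the Synology NAS as a datastore
esxcli storage nfs add --host=synology.lan \
    --share=/volume1/esxi --volume-name=Synology-NFS
```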


Part 56: HOW TO: Manually remove a failed vSAN disk group from a VMware vSphere vSAN cluster using ESXCLI

Monday, November 25th, 2024

In this video presentation, which is part of Hancock’s VMware Half Hour, I will show you HOW TO: Manually remove a failed vSAN disk group from a VMware vSphere vSAN cluster using ESXCLI.
The VMware vSphere vCenter Server web client has difficulty performing some vSAN actions, so we need to connect via SSH to the Bash shell of the ESXi host and perform this action with the following command:

esxcli vsan storage remove -u <VSAN Disk Group UUID>
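To find the UUID to pass to that command, list the vSAN storage on the same host first; a short sketch, with an illustrative UUID:

```shell
# On the affected ESXi host over SSH. List the claimed vSAN devices
# and their disk group UUIDs.
esxcli vsan storage list | grep "VSAN Disk Group UUID"

# The UUID below is illustrative; double-check it before running, as
# removal destroys the disk group's local copy of the data.
esxcli vsan storage remove -u 52e4c7a1-0000-1111-2222-333344445555
```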

see here

How to manually remove and recreate a vSAN disk group using esxcli

Part 48. HOW TO: Add a VMware vSphere vSAN license to a VMware vSphere vSAN Cluster

Saturday, October 19th, 2024

In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO: Add a VMware vSphere vSAN license to a VMware vSphere vSAN Cluster.

The storage devices we are using in this video are the Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™, but this procedure can be used to add any compatible storage devices in ESXi to a vSAN datastore.

This video follows on from the following videos in this series:

Part 36: HOW TO: Select an inexpensive HCL Certified 10GBe network interfaces for vSphere ESXi 7.0 and vSphere ESXi 8.0 for VMware vSphere vSAN

Part 37: HOW TO: Change the LBA sector size of storage media to make it compatible with VMware vSphere Hypervisor ESXi 7.0 and ESXi 8.0.

Part 39: HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.

If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the VMware Cloud Foundation Design Guide 01 JUN 2023 – this should be regarded as The Bible!

References

HOW TO: FIX the Warning System logs on host are stored on non-persistent storage, Move system logs to NFS shared storage.

WHAT’S HAPPENING WITH INTEL OPTANE? – Mr vSAN – Simon Todd

Matt Mancini blog

VMware vSAN 8.0 U1 Express Storage Architecture Deep Dive

VMware vSAN 7.0 U3 Deep Dive Paperback – 5 May 2022

VMware vSphere vSAN Licensing Guide

VMUG Advantage

Part 39: HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.

Saturday, October 12th, 2024

In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.

VMware vSphere Distributed Switch (VDS) provides a centralized interface from which you can configure, monitor and administer virtual machine access switching for the entire data center. The VDS provides:

  • Simplified virtual machine network configuration
  • Enhanced network monitoring and troubleshooting capabilities
  • Support for advanced VMware vSphere networking features

As my 10GbE switch in this VMware vSphere lab has LACP functionality, I have decided to demonstrate how we configure the vDS for an LACP LAG. Link Aggregation Control Protocol (LACP) is one element of an IEEE specification (802.3ad) that provides guidance on the practice of link aggregation for data connections. It is used on trunks or port channels to bond two Ethernet ports together. It is only supported on a VMware vSphere Distributed Switch (VDS); it is not supported on a VMware vSphere Standard Switch (VSS).

This video covers the following

  • Creation of the VMware vSphere Distributed Switch (VDS).
  • Creation of Portgroups with vLANs for Management, vMotion and vSAN.
  • Creation of the LACP LAG.
  • Adding vDS to hosts in the vSphere Cluster.
  • Migration of existing VMKernel portgroups from VSS to VDS.
  • Testing the VMware vSphere Distributed Switch (VDS).
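Once the migration to the VDS is done, the result can also be sanity-checked from an ESXi host's shell; two commands that report the host-side view (output naturally depends on the lab):

```shell
# On an ESXi host over SSH, after the host has been added to the VDS.

# Show the distributed switch as the host sees it, including uplinks
esxcli network vswitch dvs vmware list

# Check the LACP negotiation state for the configured LAG
esxcli network vswitch dvs vmware lacp status get
```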

If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the VMware Cloud Foundation Design Guide 01 JUN 2023 – this should be regarded as The Bible!

3..2..1.. and Success (don’t try this at home kids)

Saturday, September 18th, 2010

After another few failed attempts and some diagnosis, it turned out the sealed lead acid battery wasn’t providing enough current (amps) to fire the rocket cluster. I tried charging the battery with a car battery charger, and that went horribly wrong and smelly; the plastic case of the battery melted, because my special charger for the sealed lead acid battery had also failed.

So I purchased a new sealed lead acid battery and here are the results

Quick disclaimer (don’t try this at home kids), you could get seriously hurt!

This test was to check that the rocket mounts were secure enough to hold the two rocket motors before completing the rest of the rocket assembly. You can clearly hear the warning tone before firing the rocket motors; both rocket motors ignite simultaneously and burn at a constant rate, until the ejection charge, which would normally deploy the recovery mechanism (e.g. a parachute), is heard.

Looking back at the video, I think there is a slight delay between the rocket motors!

3..2..1.. and nothing… Umm

Monday, August 30th, 2010

Now that the UK has relaxed its rocketry and explosives requirements, it is a little easier to acquire and use rocket motors in the UK, without all the red tape, certificates, and explosives licences.

So, back to model rocketry….

Before I complete the rest of this rocket, I’m testing the motor mounts before adding fins and painting. It’s very important to ensure that the motor mounts are fixed and glued correctly within the body of the rocket; otherwise the motors, when ignited, will just separate from the model rocket body.

Test Number 1. 3..2..1.. and nothing… Umm

Time passes and sometime later that evening….. (it’s getting dark now).

Test Number 2. 3..2..1.. and nothing… Umm

Damn Damn and Fecking Damn!!!

Back to the drawing board, electrical issues maybe……not firing….

Well it’s Rocket Science you know!