Posts Tagged ‘homelab’

Minisforum MS-A2: Can it Run ESXi 8.0.3g? Minisforum MS-A2 Series Part 10 Ultimate #homelab

Saturday, August 9th, 2025

 

Can the powerful Minisforum MS-A2 run VMware vSphere 8.0?
In Part 10 of the Ultimate #homelab series, we put this compact beast to the test by installing VMware vSphere Hypervisor ESXi 8.0.3g and seeing how it performs. From BIOS setup to creating a demo virtual machine, this episode covers the full journey.

What’s Inside This Video:

Installing ESXi 8.0.3g on the Minisforum MS-A2

BIOS configuration & USB boot with Ventoy

Full ESXi setup walkthrough

Creating & running a test VM

Enabling NVMe Memory Tiering with NVMe namespaces

Checking performance and confirming a successful install

If you’ve been wondering whether the MS-A2 can handle serious VMware workloads in a home lab, this is the episode to watch!
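For reference, the NVMe Memory Tiering step shown in the video can be sketched with a few ESXi Shell commands. This is a hedged sketch based on the vSphere 8.0U3 NVMe tiering workflow; the device path is a placeholder, and a reboot is needed before the tier takes effect:

```shell
# Enable the Memory Tiering kernel setting (vSphere 8.0U3+)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Dedicate an NVMe device as a tier device (device path is an example)
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_ID

# Optionally size the NVMe tier as a percentage of installed DRAM (e.g. 400%)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
```

Run these over SSH in the ESXi Shell; after the reboot the NVMe tier shows up in the host's memory details.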

Minisforum MS-A2 Windows Server 2025 Install, Minisforum MS-A2 Series Part 5 Ultimate #homelab

Thursday, July 31st, 2025

 

How to Install Windows Server 2025 on the Minisforum MS-A2 
In this video, I walk you through the complete step-by-step process of installing Windows Server 2025 on the compact yet powerful MINISFORUM MS-A2 mini PC.

What You’ll Learn:

How to prepare your bootable USB with Windows Server 2025

BIOS/UEFI configuration on the MS-A2

Installing Windows Server 2025 from scratch

Initial setup and configuration

Performance and compatibility overview

Windows Server 2025 may not be designed to run on the MS-A2 because of a lack of drivers; I'm still in discussions with Minisforum! Check out the forced driver install!

Minisforum MS-A2 Migrate ESXi VMs to Hyper-V, Minisforum MS-A2 Series Part 6 Ultimate #homelab

Thursday, July 31st, 2025

 

In Part 6 of the Minisforum MS-A2 Series, we show you how to migrate VMware ESXi Virtual Machines (VMs) to Microsoft Hyper-V on Windows Server 2025 — using the powerful and compact Minisforum MS-A2 as the ultimate homelab platform.

This video features Veeam Backup & Replication v12.3 to safely back up your ESXi VMs and restore them directly to Hyper-V. It’s a clean and efficient migration method for anyone exploring life after VMware.

Whether you’re planning a full platform switch or testing a hybrid setup, you’ll find practical, step-by-step guidance from backup to restore — with key gotchas and tips throughout.

In this episode, you’ll learn:

Preparing VMware ESXi VMs for migration

Creating backups using Veeam v12.3

Restoring backups to Microsoft Hyper-V

Configuring networking, storage, and integration services

Post-migration testing and optimization

Real-world advice for homelabbers and IT professionals

Perfect for #homelab enthusiasts, sysadmins, and IT pros evaluating alternatives to VMware.
Got questions or want to share your experience? Drop a comment below!

Like this video if it helped you
Subscribe and hit the bell to follow the full MS-A2 homelab journey

Minisforum MS-A2 – The Ultimate #Homelab Server for VMware vSphere, VVF, and VCF?

Monday, June 30th, 2025

Lately, it feels like every VMware vExpert has been posting photos of their compact lab servers — and I’ll be honest, I was starting to feel left out.

So, I joined the club.

I picked up the new Minisforum MS-A2, and I’ve not looked back. This isn’t just another NUC alternative — it’s a serious powerhouse in a tiny chassis, perfect for VMware enthusiasts building or upgrading their vSphere, VVF, or VCF test environments.

Let’s dig into what makes this little beast a perfect addition to any #homelab setup in 2025.

Hardware Highlights – Not Your Average Mini PC
The MS-A2 isn’t just punching above its weight — it’s redefining what’s possible in a compact lab node.

Key Specs:
CPU: AMD Ryzen™ 9 9955HX – 16 cores / 32 threads of Zen 5 power

Memory: Dual DDR5-5600MHz SODIMM slots – up to 96GB officially, but…

Storage:

3× M.2 PCIe 4.0 slots (22110 supported)

Supports U.2 NVMe – great for enterprise-grade flash

Networking:

Dual 10Gbps SFP+ LAN

Dual 2.5GbE RJ45 ports

Wi-Fi 6E + Bluetooth 5.3 (going to replace this with more NVMe storage!)

Expansion:

Built-in PCIe x16 slot (supports split mode – ideal for GPUs, HBAs, or NICs)

This is homelab gold. It gives you the raw compute of a full rack server, the storage flexibility of a SAN box, and the network fabric of a modern datacenter — all under 2L in size.

How I Configured Mine – still sealed in box as I write – video incoming!
I purchased mine barebones from Amazon, and — as of writing — it’s still sealed in the box. Why? I’m waiting for all the parts to arrive.

Most importantly, I’ll be upgrading it with:
128GB of Crucial DDR5-5600 SODIMMs (2×64GB) — pushing beyond the official spec to see just how much performance this little box can handle.

Once everything’s here, I’ll be unboxing and assembling it live on a future episode of Hancock’s VMware Half Hour. Stay tuned if you want a front-row seat to the full setup, testing, and VMware lab deployment.

Perfect for VMware Labs: vSphere 8/9, VVF, and VCF
Whether you’re testing ESXi on bare metal or running full nested labs, this spec ticks every box.

ESXi Bare Metal Capable
The Ryzen 9 9955HX and AMD chipset boot vSphere 8.0U2 and 9.0 Tech Preview cleanly with minimal tweaks. Use community networking drivers or USB NIC injectors if needed.
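If the onboard NICs aren't picked up, a common community approach is the USB Network Native Driver fling. A rough sketch (the component zip name and path are placeholders; match the fling build to your ESXi version):

```shell
# Install the USB NIC fling component on a live ESXi host, then reboot
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi-VMKUSB-NIC-FLING.zip
reboot
```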

VVF / VCF in a Box
If you’re exploring VMware Validated Foundation (VVF) or want a self-contained VCF lab for learning:

16C/32T lets you run nested 3-node ESXi clusters + vCenter + NSX-T comfortably

128GB RAM gives breathing room for resource-heavy components like SDDC Manager

PCIe 4.0 + U.2 = blazing fast vSAN storage

Dual 10Gb SFP+ = NSX-T overlay performance lab-ready

Community Validation – I Was Late to the Party
Fellow vExpert Daniel Krieger was ahead of the curve — writing about the MS-A2 months ago in his excellent blog post here:
sdn-warrior.org/posts/ms-a2

Then vExpert William Lam added his voice to the conversation with a guide to running VMware Cloud Foundation (VCF) on the MS-A2:
williamlam.com/2025/06/vmware-cloud-foundation-vcf-on-minisforum-ms-a2.html

Seeing both of them validate the MS-A2 pushed me over the edge — and I’m glad I jumped in.

Setup Tips (Soon!)
Once the unboxing is done, I’ll share:

BIOS tweaks: SVM, IOMMU, PCIe bifurcation

NIC setup for ESXi USB fling and 10GbE DAC

Storage layout for vSAN and U.2/NVMe configs

Full nested VCF/VVF deployment guide

Considerations
Still not officially on the VMware HCL — but community-tested

Ryzen platform lacks ECC memory — standard for most mini-PC builds

PCI passthrough needs thoughtful planning for IOMMU groupings

Ideal Use Cases
Nested ESXi, vSAN, vCenter, NSX labs

VVF deployment simulations

VCF lifecycle manager testing

Tanzu Kubernetes Grid

NSX-T Edge simulations on 10GbE

GPU or high-speed NIC via PCIe slot for advanced lab scenarios

Final Thoughts
The Minisforum MS-A2 with Ryzen 9 9955HX is a serious contender for the best compact homelab system of 2025. Whether you’re diving into vSphere 9, experimenting with VVF, or simulating a full VCF environment, this mini server brings serious firepower.

It may still be in the box for now, but soon it'll be front and center on Hancock's VMware Half Hour, ready to power the next chapter of my lab.

Join the Conversation
Got an MS-A2 or similar mini-monster? Share your specs, test results, or VMware experience — and tag it:

#VMware #vSphere #VCF #VVF #homelab #MinisforumMSA2 #10GbE #vExpert

These Arrived Today: The ComputeBlade – A New Era in Compact Computing

Thursday, December 5th, 2024

After much anticipation, The ComputeBlade has finally arrived! This innovative piece of hardware has been making waves in the compact computing and homelab community since its inception as a Kickstarter project, which closed in February 2023. While the Kickstarter campaign was highly successful, the journey to delivery has been anything but smooth.

The ComputeBlade Journey

For those unfamiliar, the ComputeBlade is an ambitious project by Uptime Lab designed to bring powerful, modular computing to a compact blade-style chassis. It offers support for Raspberry Pi Compute Modules (CM4) and similar SBCs, providing a platform for homelab enthusiasts, developers, and small-scale edge computing setups.

However, the project has faced several setbacks that delayed delivery for many backers:

  1. Russian Screws: Supply chain disruptions included sourcing specific screws, which became problematic due to geopolitical tensions.
  2. PoE (Power over Ethernet) Issues: The team encountered complications ensuring consistent and safe PoE functionality.
  3. Certification Challenges: Meeting various regulatory standards across regions added another layer of complexity.

Despite these hurdles, I opted to purchase my ComputeBlades retail, as Kickstarter backers have yet to fully receive their units.

For those interested in the Kickstarter campaign details, you can check it out here.

First Impressions

The retail packaging was sleek, compact, and felt premium. The ComputeBlade itself is a marvel of design, seamlessly blending form and function. Its modularity and expandability immediately stand out, with features such as:

  • Support for Raspberry Pi CM4: Making it a natural fit for virtualization, containerization, and other development projects.
  • Hot-Swappable Design: Simplifies maintenance and upgrades.
  • Integrated Networking: Includes options for advanced network setups, perfect for a homelab.

What’s Next?

Now that the ComputeBlade has arrived, I’m eager to put it through its paces. Over the next few weeks, I’ll be:

  1. Testing Homelab Applications: From running lightweight virtual machines to hosting containers using Docker or Kubernetes.
  2. Evaluating Networking Features: Especially the PoE capabilities and how it handles edge computing scenarios.
  3. Sharing Configurations: I’ll document how I integrate it into my existing homelab setup.

Closing Thoughts

While the journey of the ComputeBlade from Kickstarter to retail has been rocky, the product itself seems poised to live up to its promise. If you’ve been waiting for a scalable and compact compute platform, the ComputeBlade might just be the solution you’ve been looking for.

Stay tuned for my follow-up posts where I dive deeper into its performance and practical applications. If you’re also experimenting with the ComputeBlade, feel free to share your experiences in the comments or reach out via social media.

HOW TO: Configure & Install VMware ESXi ARM 8.0.3b on Raspberry Pi CM4 installed on a Turing Pi v2 Mini ITX Clusterboard | FULL MEGA GUIDE

Tuesday, December 3rd, 2024

Welcome to Hancock’s VMware Half Hour! This is the Full Monty Version, the MEGA Full Movie on configuring and installing VMware vSphere Hypervisor ESXi ARM 8.0.3b on a Raspberry Pi Compute Module 4. The CM4 is installed in a Turing Pi v2 Mini ITX Clusterboard, delivering a compact and powerful platform for ARM virtualization.

In this 1 hour and 19-minute guide, I’ll take you step-by-step through every detail, covering:

  • Demonstrating Raspberry Pi OS 64-bit booting on the CM4.

  • Creating and installing the ESXi ARM UEFI boot image.

  • Configuring iSCSI storage using a Synology NAS.

  • Setting up ESXi ARM with licensing, NTP, and NFS storage.

  • A full walkthrough of PXE booting and TFTP configuration.

  • Netbooting the CM4 and finalizing the ESXi ARM environment.

  • Flashing the BMC firmware.

  • Replacing the self-signed Turing Pi v2 SSL certificate with a certificate from Microsoft Certificate Services.
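For the PXE/TFTP portion, a minimal dnsmasq proxy configuration is enough to netboot a CM4; the Pi bootloader looks for the "Raspberry Pi Boot" PXE service string. The subnet and paths below are examples, and an existing DHCP server still hands out addresses:

```
# /etc/dnsmasq.conf (sketch): proxy-DHCP + TFTP for CM4 netboot
port=0                            # disable DNS; use dnsmasq for netboot only
dhcp-range=192.168.1.0,proxy      # proxy mode: existing DHCP server keeps addressing
log-dhcp
enable-tftp
tftp-root=/srv/tftp               # CM4 boot files (and ESXi ARM installer) live here
pxe-service=0,"Raspberry Pi Boot" # service string the Pi bootloader expects
```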


 

 

Part 36: HOW TO: Select inexpensive HCL Certified 10GbE network interfaces for vSphere ESXi 7.0 for VMware vSphere vSAN

Saturday, October 12th, 2024


In this video presentation, which is part of the Hancock's VMware Half Hour HOW TO Video Series, I explore two inexpensive 10GbE network interfaces suitable for the #homelab for use with VMware vSphere vSAN.

 

  • Dell 0Y40PH Broadcom 57810S Dual Port 10GbE SFP+ Network Card Low Profile Dell P/N: 0Y40PH
  • Dell 0XYT17 Intel X520-DA2 Dual Port 10GB SFP+ NIC with SFP

Full details of the part numbers can be found here on my blog – Inexpensive HCL Certified 10GbE network interfaces for vSphere ESXi 7.0 and vSphere ESXi 8.0 #homelab

A list of all my @ExpertsExchange articles and videos can be found at The CodHeadClub

Monday, August 21st, 2023

A list of all my Experts Exchange articles and videos can be found here at the CodHeadClub. To copy and paste:

http://tinyurl.com/AwesomeResourcesURL

This is an Awesome List of Computer Science, Technology, Programming and Educational resources for the benefit of all who care to use it.

The list was originally created by Closebracket.

I've now written over 140 articles and created 40 hours of VMware vSphere tutorial videos on vSphere 7.0 and 8.0, and today published Part 50 of my VMware vSphere videos on vSphere 7.0.

HOW TO: Perform storage performance tests on VMware vSphere vSAN, using the VMware Hyper-converged Infrastructure Benchmark fling (HCIBench)

Monday, August 14th, 2023

In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO:  Perform storage performance tests on VMware vSphere vSAN, using the VMware Hyper-converged Infrastructure Benchmark fling (HCIBench).

HCIBench is a storage performance testing automation tool that simplifies and accelerates customer Proof of Concept (POC) performance testing in a consistent and controlled way. VMware vSAN Community Forum provides support for HCIBench.
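As a quick sketch, the HCIBench appliance can be deployed straight from its OVA with ovftool; the appliance name, datastore, network, and target host below are placeholders:

```shell
# Deploy the HCIBench OVA to an ESXi host (all names are examples)
ovftool --acceptAllEulas --name=HCIBench \
  --datastore=datastore1 --network="VM Network" \
  HCIBench.ova 'vi://root@esxi-host.lab.local/'
```

Once deployed, the benchmark itself is configured and launched through the appliance's web UI.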

HCIBench

The storage devices we are using in this video are the Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™, but this procedure can be used to add any compatible storage devices in ESXi to a vSAN datastore.

This video follows on from the following videos in this series:

Part 36: HOW TO: Select inexpensive HCL Certified 10GbE network interfaces for vSphere ESXi 7.0 and vSphere ESXi 8.0 for VMware vSphere vSAN

Part 37: HOW TO: Change the LBA sector size of storage media to make it compatible with VMware vSphere Hypervisor ESXi 7.0 and ESXi 8.0.

Part 39: HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.

If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the VMware Cloud Foundation Design Guide (01 JUN 2023) – this should be regarded as The Bible!

References

HOW TO: FIX the Warning System logs on host are stored on non-persistent storage, Move system logs to NFS shared storage.

WHAT’S HAPPENING WITH INTEL OPTANE? – Mr vSAN – Simon Todd

Matt Mancini blog

VMware vSAN 8.0 U1 Express Storage Architecture Deep Dive

VMware vSAN 7.0 U3 Deep Dive Paperback – 5 May 2022

The results generated from this video are available here in these PDFs for download

FIO Benchmarks

4K/70%Read/100%Random

4K/100%Read/100%Random

8K/50%Read/100%Random

256K/100%Write/100%Sequential

VDBENCH Benchmarks

4K/70%Read/100%Random
4K/100%Read/100%Random
8K/50%Read/100%Random
256K/100%Write/100%Sequential

#intel #optane SSD demo units received as part of the vExpert Program not being detected as a datastore in ESXi 7.0 or ESXi 8.0?

Monday, July 3rd, 2023

This blog is specific to the #intel #optane demo units received as part of the vExpert Program, but later you will observe that this applies to all storage devices connected to ESXi 7.0 or ESXi 8.0.

And again, my special thanks to fellow #vExperts Mr vSAN, Matt Mancini, and vCommunity Guy for arranging this fantastic opportunity to work with #intel #optane demo units for free in our #homelabs.

These demo units received may have been previously used as part of the #intel loan program.

I received 10 (ten) #intel #optane units – Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™.

The form factor I selected for my #homelab was U.2 15mm, rather than a PCIe slot-in card, because I want to use them in the storage/disk slot of a server. I could connect them to a U.2 to PCIe card, but I would rather use them as "intended". More on the complications of that later with my #homelab.

For ease, I did quickly connect them all to a recommended (thanks Mr vSAN) StarTech.com U.2 to PCIe Adapter – x4 PCIe – For 2.5″ U.2 NVMe SSD – SFF-8639 PCIe Adapter (PEX4SFF8639) for testing and formatting in my test bench. In fact, I've now got a bucket full of these cards that I've tried and tested. How difficult can it be to connect a U.2 NVMe interface to a PCIe slot, when some cards are £50 GBP and others are available from that well-known China website for £1.99 GBP? And some are described as Low Profile – NOT! But more on that later!

You may notice if you look through the above photos, there is one U.2 #intel #optane unit with a RED DOT! Read on.

This was because it was faulty, or so I thought! I must admit it was very odd, because it worked in Windows 10, and on checking in ESXi and Ubuntu, the device was present.

esxi007-no-storage1

PCIe passthrough devices

esxi007-storage-adapters

Storage Adapters

Device visible in Ubuntu

But when trying to create a datastore, no device was available to create a datastore.

No storage for datastore visible

BUT BUT BUT! After discussions, Mr Todd (Mr vSAN) suggested checking that the #intel #optane SSD had not been formatted with 4K LBA sectors, because ESXi 7.0 and 8.0 do not support 4K LBA. I was surprised that ESXi would not even list the SSD device!

Interestingly on Twitter at the same time, another vExpert was also having similar issues!

Checking with an Ubuntu Live “CDROM” USB flash drive

Dell PowerEdge R730 UEFI BOOT

 

Dell PowerEdge R730 UEFI BOOT

 

Dell PowerEdge R730 Ubuntu

Using nvme-cli, which you can install with sudo apt-get install nvme-cli, use the command sudo nvme list to list NVMe devices.

To check the LBA format – sudo nvme id-ns -H /dev/nvmeXnY | grep "LBA format" – you can see in the screenshot below that I have two NVMe devices, /dev/nvme0n1 and /dev/nvme1n1, and both show:

[3:0] : 0x3 Current LBA Format Selected

If you look at LBA Format 3, it states Data Size: 4096 bytes!

check LBA  – sudo nvme id-ns -H /dev/nvmeXnY | grep “LBA format”
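The grep check above can be taken a step further by pulling out the active LBA data size programmatically. A minimal sketch, using sample text that mimics nvme id-ns -H output (on a real system you would pipe the live command output in instead):

```shell
#!/bin/sh
# Find the currently selected LBA format in `nvme id-ns -H` style output.
# The sample below imitates the tool's human-readable table; on a real
# system you would use: sudo nvme id-ns -H /dev/nvmeXnY
sample='LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
LBA Format  3 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best'

# The "(in use)" marker flags the active format; pull out its data size
current=$(printf '%s\n' "$sample" | grep '(in use)')
size=$(printf '%s\n' "$current" | sed -n 's/.*Data Size: *\([0-9]*\) bytes.*/\1/p')

echo "Current LBA data size: ${size} bytes"
if [ "$size" -eq 4096 ]; then
  echo "4K LBA: not supported by ESXi 7.0/8.0, reformat needed"
else
  echo "512-byte LBA: ESXi compatible"
fi
```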

 

Argh! 4K! Let's just reformat with sudo nvme format -l 0 /dev/nvme0n1. It does not display any progress, as there is no -v verbose option, but eventually it will respond with Success formatting namespace:1

sudo nvme format -l 0 /dev/nvme0n1


success 512

For #shitsandgiggles I've left /dev/nvme1n1 formatted as 4K, but above you can now see /dev/nvme0n1 has 512-byte sectors, so now back to ESXi for a restart. I'll do a quick video on /dev/nvme1n1 for Hancock's VMware Half Hour.
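The check-and-reformat steps above can be rolled into one loop across all namespaces. A destructive sketch, assuming LBA format 0 is the 512-byte format on these drives, so lab use only:

```shell
# Reformat every 4K-formatted NVMe namespace to 512-byte sectors.
# DESTRUCTIVE: nvme format wipes the namespace. Lab drives only!
for ns in /dev/nvme?n?; do
  size=$(sudo nvme id-ns -H "$ns" | grep '(in use)' \
         | sed -n 's/.*Data Size: *\([0-9]*\) bytes.*/\1/p')
  if [ "$size" = "4096" ]; then
    echo "Reformatting $ns from 4K to 512-byte sectors..."
    sudo nvme format -l 0 "$ns"   # format index 0 = 512-byte data size here
  fi
done
```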

If I now check the storage devices in ESXi, there is a Local NVMe SSD available

storage devices

and if you now try to create a datastore – Voila!

device for datastore available

You will notice from the above storage device list that the 4K-formatted NVMe device is still missing. Q.E.D.

Anyway, kudos and my sincere thanks to Simon Todd, aka Mr vSAN!

So onward with my #intel #optane #homelab journey – more later!