Archive for the ‘intel’ Category

Part 4: DIY UnRAID NAS – Insert new 10GBe NIC

Saturday, November 22nd, 2025

 

 

DIY UnRAID NAS Build – Part 4: Installing a 10GBe Intel X710-DA NIC (Plus an Outtake!)

Welcome back to another instalment of my DIY UnRAID NAS Build series.
If you have been following along, you will know this project is built around an Intel NUC chassis that I have been carefully (and repeatedly!) taking apart to transform into a compact but powerful UnRAID server.

In Part 4, we move on to a major upgrade: installing a 10GBe Intel X710-DA network interface card. And yes, the eagle-eyed among you will notice something unusual at the beginning of the video, because this episode starts with a blooper. I left it in for your entertainment.


A Fun Outtake to Start With

Right from the intro, things get a little chaotic. There is also a mysterious soundtrack playing, and I still do not know where it came from.
If you can identify it, feel free to drop a comment on the video.


Tearing Down the Intel NUC Again

To install the X710-DA NIC, the NUC requires almost complete disassembly:

  • Remove the back plate
  • Remove the backplane retainer
  • Take off the side panels
  • Open the case
  • Remove the blanking plate
  • Prepare the internal slot area

This NUC has become surprisingly modular after taking it apart so many times, but it still puts up a fight occasionally.


Installing the Intel X710-DA 10GBe NIC

Once the case is stripped down, the NIC finally slides into place. It is a tight fit, but the X710-DA is a superb card for a NAS build:

  • Dual SFP+ ports
  • Excellent driver support
  • Great performance in VMware, Linux, and Windows
  • Ideal for high-speed file transfers and VM workloads

If you are building a NAS that needs to move data quickly between systems, this NIC is a great option.
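Although this episode tests the card under Windows first, the end goal is UnRAID, so it is worth knowing how to confirm the card is visible from Linux too. A minimal sketch from UnRAID's shell or any Linux live USB (eth4 below is just an example interface name; yours will differ):

  lspci | grep -i ethernet                          # the X710 should show up as an Intel X710 device
  ethtool eth4 | grep -E "Speed|Link detected"      # expect Speed: 10000Mb/s and Link detected: yes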


Reassembly

Next, everything goes back together:

  • Side panels reinstalled
  • Back plate fitted
  • Case secured
  • System ready for testing

You would think after doing this several times I would be quicker at it, but the NUC still has a few surprises waiting.


Booting into Windows 11 and Driver Issues

Once everything is reassembled, the NUC boots into Windows 11, and immediately there is a warning:

Intel X710-DA: Not Present

Device Manager confirms it. Windows detects that something is installed, but it does not know what it is.

Time to visit the Intel website, download the correct driver bundle, extract it, and install the drivers manually.

After a reboot, success. The NIC appears correctly and is fully functional.


Why 10GBe

For UnRAID, 10GBe significantly improves:

  • VM migrations
  • iSCSI and NFS performance
  • File transfers
  • Backup times
  • SMB throughput for Windows and macOS clients

It also future-proofs the NAS for later network upgrades.
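If you want to confirm you are actually getting 10GbE throughput between two machines, iperf3 is a simple way to measure it. This isn't shown in the video; it's just a quick sketch, with 192.168.1.50 standing in for the NAS address:

  # on the NAS (server side)
  iperf3 -s
  # on a client machine
  iperf3 -c 192.168.1.50 -P 4 -t 30    # 4 parallel streams for 30 seconds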


The Mystery Soundtrack

Towards the end of the video I ask again: what is the music playing in the background?
I genuinely have no idea, so if you recognise it, please leave a comment on the video.


Watch the Episode

You can watch the full episode, including all teardown steps, NIC installation, Windows troubleshooting, and the blooper, here:


Thank You for Watching and Reading

Thank you for following along with this NAS build.
Part 5 will continue the series, so stay tuned.

If you have built your own UnRAID NAS or have a favourite NIC for homelab projects, feel free to comment and share your experience.

Regards,
Andy

PART 3: DIY Unraid NAS: Power Testing & Stability Checking with OCCT

Sunday, November 16th, 2025

 

PART 3 – DIY Unraid NAS: Power Testing & Stability Checking with OCCT

Welcome back to Part 3 of the DIY Unraid NAS series!
In Part 1, we unboxed and assembled the hardware.
In Part 2, we ran a quick Windows 11 installation test (and of course, everything that could go wrong… went Pete Tong).

Now that the system boots and behaves under a “normal” workload, it’s time to get serious. Before committing this Intel NUC–powered machine to Unraid full-time, we need to ensure it’s electrically stable, thermally stable, and capable of running 24/7 without surprises.

This stage is all about power draw, thermals, and stress testing using OCCT — a powerful tool for validating hardware stability.


Why Power & Stability Testing Is Essential for a NAS

A NAS must be:

  • Reliable
  • Predictable
  • Stable under load
  • Able to handle long uptimes
  • Capable of sustained read/write operations
  • Tolerant of temperature variation

Unlike a desktop, a NAS doesn’t get breaks. It runs constantly, serving files, running Docker containers, hosting VMs, and performing parity checks. Any weakness now — PSU spikes, hot VRMs, faulty RAM — will eventually show up as file corruption or unexpected reboots.

That’s why stress testing at this stage is non-negotiable.


Using OCCT for a Full-System Torture Test

OCCT is typically used by overclockers, but it’s perfect for checking new NAS hardware.
It includes tests for:

1. CPU Stability

Pushes the CPU to 100% sustained load.
Checks:

  • Thermal throttling
  • Cooling capacity
  • Voltage stability
  • Clock behaviour under load

A NAS must not throttle or overheat under parity checks or rebuilds.
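OCCT is a Windows GUI tool, so there is nothing to script here, but if you ever want to repeat a similar CPU soak from an Ubuntu live USB instead, stress-ng (not used in the video) can approximate it. A rough sketch:

  sudo apt-get install stress-ng
  stress-ng --cpu 0 --cpu-method all --timeout 30m --metrics-brief    # --cpu 0 = use all cores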

2. Memory Integrity Test

RAM is the most overlooked component in DIY NAS builds.
Errors = silent data corruption.

OCCT’s memory test:

  • Fills RAM with patterns
  • Reads, writes, and verifies
  • Detects bit-flip issues
  • Ensures stability under pressure

Memory integrity is vital for Unraid, especially with Docker and VMs.
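As with the CPU test, OCCT drives this from its GUI. For completeness, a comparable command-line memory soak from an Ubuntu live environment (again, not what the video uses) could be done with memtester, sized below whatever RAM the OS itself needs:

  sudo apt-get install memtester
  sudo memtester 8G 3        # lock and pattern-test 8 GB of RAM for 3 passes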

3. Power Supply Stress Test

OCCT is one of the few tools capable of stressing:

  • CPU
  • GPU (if present)
  • Memory
  • All power rails

simultaneously.

This simulates worst-case load and reveals:

  • Weak PSUs
  • Voltage drops
  • Instability
  • Flaky power bricks
  • VRM overheating

Not what you want in a NAS.

4. Thermal Behaviour Monitoring

OCCT provides excellent graphs showing:

  • Heat buildup
  • Fan curve response
  • Temperature equilibrium
  • VRM load
  • Stability over time

This shows whether the NUC case and cooling can handle long running services.
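OCCT graphs all of this for you under Windows. If you later want to keep an eye on the same temperatures from a Linux environment (a sketch assuming a Debian/Ubuntu-style live USB, not part of the video), lm-sensors gives a quick view:

  sudo apt-get install lm-sensors
  sudo sensors-detect --auto     # probe for temperature sensors, accepting the defaults
  watch -n 5 sensors             # refresh CPU and board temperatures every 5 seconds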


Test Results: Can the Intel NUC Handle It?

After running OCCT, the system performed exceptionally well.

CPU

  • No throttling
  • Temperatures within acceptable limits
  • Clock speeds held steady

RAM

  • Passed memory integrity tests
  • No bit errors
  • Stable under extended load

Power Delivery

  • No shutdowns or brown-outs
  • The power brick handled peaks
  • VRMs stayed within thermal limits

Thermals

  • Fans behaved predictably
  • Temperature plateau was stable
  • No unsafe spikes

In other words:
This machine is ready to become an Unraid NAS.


Why Validate Hardware Before Installing Unraid?

Because fixing hardware problems AFTER configuring:

  • Shares
  • Parity
  • Docker containers
  • VMs
  • Backups
  • User data

…is painful.

Hardware validation now ensures:

  • No silent RAM corruption
  • No thermal issues
  • No unexpected shutdowns
  • No nasty surprises during parity builds
  • The system is reliable for 24/7 operation

This step protects your data, your time, and your sanity.


What’s Coming in Part 4

With the hardware:

  • Burned in
  • Power-tested
  • Thermally stable
  • Verified by OCCT

We move to the exciting part:
Actually installing Unraid!

In Part 4, we will:

  • Prepare the Unraid USB boot device
  • Configure BIOS for NAS use
  • Boot Unraid for the first time
  • Create the array
  • Assign drives
  • Add parity
  • Begin configuring shares and services

We’re finally at the point where the NAS becomes… a NAS!

Stay tuned — the best parts are still ahead.


 

Part 2: Building a DIY NVMe NAS with the Intel NUC 11 Extreme (Beast Canyon) – Testing hardware with Windows 11 (and When Things Go Pete Tong!)

Sunday, November 16th, 2025

Welcome back to Part 2 of our DIY Unraid NAS adventure!
In Part 1, we unboxed the hardware, checked the spec, and got ready to build a tiny but mighty home-brew NAS around the Intel NUC “Skull” chassis.

Before committing this machine to Unraid full-time, I wanted to run a quick hardware test — and what better way than to throw a Windows 11 installation at it? Simple, right?

Well… maybe not.
As usual, things went a bit Pete Tong along the way!


Booting the NUC – and Immediate Problems

The video starts with the NUC firing up nicely… until I discover the mouse isn’t working.
Not ideal when you’re trying to install an OS.

After poking around, I realise the issue is down to the NanoKVM I use for remote access.
The trick?
Switch the KVM to HID mode only — suddenly the mouse returns from the dead.

Lesson learned:
Tiny KVMs can cause BIG installation headaches.


Ventoy + Windows 11 ISO = Let’s Try This Again

Once the input devices were behaving, I booted Ventoy from USB and selected the Windows 11 ISO.
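(For anyone recreating this: the Ventoy stick was prepared beforehand. On Linux it is typically just the Ventoy2Disk.sh installer pointed at the USB device, then you drop ISOs onto the stick. /dev/sdX below is a placeholder for your USB drive, and the install is destructive to it.)

  sudo sh Ventoy2Disk.sh -i /dev/sdX    # install Ventoy onto the USB stick (wipes it)
  # then copy the Windows 11 ISO onto the stick's exFAT data partition and boot from it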

This part should be smooth.
Except it wasn’t.

Windows 11 booted fine…
The setup loaded…
Language and keyboard selected…
Version chosen…
Installation begins…

Then:
“Windows 11 installation has failed.”

No reason.
No explanation.
Just a failure screen and a shrug.

Excellent.


If At First You Don’t Succeed – Install Again

Time for round two.

Ventoy → Windows 11 ISO → Setup → Install
Copying files…

YES!
It finally completes.

That warm feeling of success lasted a whole ten seconds before Windows restarted to continue configuration — and hit me with another set of “what now?!” delays.

Still, persistence wins.
Eventually we get to:

  • Keyboard setup
  • Feature selection
  • Updates
  • Account creation
  • Security questions
  • More updates
  • Even more updates

Whoever said installing Windows 11 only takes 10 minutes was telling porkies.


Finally… Windows 11 Desktop

After the second attempt, repeated reboots, KVM issues, updates, and the bizarre initial failure, we finally land on a clean, working Windows 11 desktop.

Why bother with all this before Unraid?

Because hardware burn-in testing NOW can save hours (or days) of pain LATER.

And, despite the chaos, the system:

  • Booted reliably

  • Handled disk I/O without any red flags

  • Passed the Windows installation stress test

  • Proved the RAM and NVMe are behaving

  • Survived the “Hancock Troubleshooting Gauntlet”™

So we can move into Part 3 with confidence!

A list of all my @ExpertsExchange articles and videos can be found at The CodHeadClub

Monday, August 21st, 2023

A list of all my Experts Exchange articles and videos can be found here, at the CodHeadClub. To copy and paste:

http://tinyurl.com/AwesomeResourcesURL

This is an Awesome List of Computer Science, Technology, Programming and Educational resources for the benefit of all who care to use it.

The list was originally created by Closebracket.

I’ve now written over 140 articles and created 40 hours of VMware vSphere tutorial videos covering vSphere 7.0 and 8.0, and today published Part 50 of my VMware vSphere videos on vSphere 7.0.

HOW TO: Perform storage performance tests on VMware vSphere vSAN, using the VMware Hyper-converged Infrastructure Benchmark fling (HCIBench)

Monday, August 14th, 2023

In this video presentation, which is part of the Hancock’s VMware Half Hour HOW TO Video Series, I will show you HOW TO: perform storage performance tests on VMware vSphere vSAN using the VMware Hyper-converged Infrastructure Benchmark fling (HCIBench).

HCIBench is a storage performance testing automation tool that simplifies and accelerates customer Proof of Concept (POC) performance testing in a consistent and controlled way. VMware vSAN Community Forum provides support for HCIBench.

HCIBench

The storage devices we are using in this video are the Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™, but this procedure can be used to add any compatible storage device in ESXi to a vSAN datastore.

This video follows on from the following videos in this series:

Part 36: HOW TO: Select an inexpensive HCL Certified 10GBe network interfaces for vSphere ESXi 7.0 and vSphere ESXi 8.0 for VMware vSphere vSAN

Part 37: HOW TO: Change the LBA sector size of storage media to make it compatible with VMware vSphere Hypervisor ESXi 7.0 and ESXi 8.0.

Part 39: HOW TO: Create a VMware vSphere Distributed Switch (VDS) for use with VMware vSphere vSAN for the VMware vSphere vSAN Cluster.

If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the  VMware Cloud Foundation Design Guide 01 JUN 2023 – this should be regarded as The Bible!

References

HOW TO: FIX the Warning System logs on host are stored on non-persistent storage, Move system logs to NFS shared storage.

WHAT’S HAPPENING WITH INTEL OPTANE? – Mr vSAN – Simon Todd

Matt Mancini blog

VMware vSAN 8.0 U1 Express Storage Architecture Deep Dive

VMware vSAN 7.0 U3 Deep Dive Paperback – 5 May 2022

The results generated from this video are available here in these PDFs for download

FIO Benchmarks

4K/70%Read/100%Random
4K/100%Read/100%Random
8K/50%Read/100%Random
256K/100%Write/100%Sequential

VDBENCH Benchmarks

4K/70%Read/100%Random
4K/100%Read/100%Random
8K/50%Read/100%Random
256K/100%Write/100%Sequential
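If you ever want to reproduce one of the profiles above directly with fio outside of HCIBench, the 4K / 70% read / 100% random case might look roughly like this. The device path, queue depth, and job count are examples only, and the write portion is destructive to whatever is on that device:

  fio --name=4k-70read-rand --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=300 --time_based --group_reporting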

#intel #optane SSD demo units received as part of the vExpert Program not being detected as a datastore in ESXi 7.0 or ESXi 8.0 ?

Monday, July 3rd, 2023

This blog is specific to the #intel #optane demo units received as part of the vExpert Program, but later you will observe that this applies to all storage devices connected to ESXi 7.0 or ESXi 8.0.

And again, my special thanks to fellow #vExperts – Mr vSAN, Matt Mancini, and vCommunity Guy – for arranging this fantastic opportunity to work with #intel #optane demo units for free in our #homelabs.

The demo units received may have been previously used as part of the #intel loan program.

I received 10 (ten) #intel #optane units – Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™.

The form factor I selected for my #homelab was U.2 15mm, rather than a PCIe add-in card, because I want to use them in the storage/disk slots of a server. I could connect them to a U.2-to-PCIe card, but I would rather use them as “intended”. More on the complications of that later with my #homelab.

For ease, I did quickly connect them all to a recommended (thanks Mr vSAN) StarTech.com U.2 to PCIe Adapter – x4 PCIe – For 2.5″ U.2 NVMe SSD – SFF-8639 PCIe Adapter – U.2 SSD – PCIe SSD – U.2 drive (PEX4SFF8639) for testing and formatting on my test bench. In fact, I’ve now got a bucket full of these cards that I’ve tried and tested. How difficult can it be to connect a U.2 NVMe interface to a PCIe slot, when some cards are £50 GBP and others are available from that well-known China website for £1.99 GBP? And some are described as Low Profile – NOT! But more on that later!

You may notice, if you look through the above photos, that there is one U.2 #intel #optane unit with a RED DOT! Read on.

This is because it was faulty – ah, or so I thought! I must admit, it was very odd, because it worked in Windows 10, and on checking in ESXi and Ubuntu, the devices were present.


PCIe passthrough devices


Storage Adapters

Device visible in Ubuntu

But when trying to create a datastore, no device was available.

No storage for datastore visible

BUT BUT BUT – after discussions with Mr vSAN, Mr Todd (MrVSAN) suggested checking whether the #intel #optane SSD had been formatted with LBA 4K sectors, because ESXi 7.0 and 8.0 do not support LBA 4K. I was surprised that it would not list the SSD device!

Interestingly on Twitter at the same time, another vExpert was also having similar issues!

Checking with an Ubuntu Live “CDROM” USB flash drive

Dell PowerEdge R730 UEFI BOOT

Dell PowerEdge R730 Ubuntu

Using nvme-cli, which you can install with sudo apt-get install nvme-cli, run sudo nvme list to list the NVMe devices.

To check the LBA format, run sudo nvme id-ns -H /dev/nvmeXnY | grep "LBA format". You can see in the screenshot below that I have two NVMe devices, /dev/nvme0n1 and /dev/nvme1n1, and both show:

[3:0] : 0x3 Current LBA Format Selected

If you look at LBA Format 3, it states Data Size: 4096 bytes!

check LBA – sudo nvme id-ns -H /dev/nvmeXnY | grep "LBA format"

 

Argh! 4K! Let’s just reformat with sudo nvme format -l 0 /dev/nvme0n1. It does not display any progress, as there is no -v verbose option, but eventually it will respond with "Success Formatting Namespace:1".

sudo nvme format -l 0 /dev/nvme0n1

success – 512-byte sectors

For #shitsandgiggles I’ve left /dev/nvme1n1 formatted as 4K, but above you can now see that /dev/nvme0n1 is using 512-byte sectors, so now it’s back to restart ESXi. I’ll do a quick video on /dev/nvme1n1 for Hancock’s VMware Half Hour.
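To recap, the whole check-and-reformat sequence from the Ubuntu live environment boils down to a handful of commands (device names are from this particular box, so substitute your own, and note the format command wipes the drive):

  sudo apt-get install nvme-cli
  sudo nvme list                                           # list NVMe devices
  sudo nvme id-ns -H /dev/nvme0n1 | grep "LBA format"      # check which LBA format is currently selected
  sudo nvme format -l 0 /dev/nvme0n1                       # reformat to LBA format 0 (512-byte sectors)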

If I now check the storage devices in ESXi, there is a Local NVMe SSD available

storage devices

And if you now try to create a datastore – voilà!

device for datastore available

You will notice from the above storage device list that the 4K-formatted NVMe device is still missing. Q.E.D.

Anyway, kudos and my sincere thanks to Simon Todd, aka Mr vSAN!

So onward with my #intel #optane #homelab journey – more later!

It’s A Proper Job! – Low Profile PCIe Bracket for Dell 0XYT17 Intel X520-DA2 Dual Port 10GB SFP+ NIC

Sunday, July 2nd, 2023
 

So I recently blogged here about the network interfaces I’ve chosen for my #intel #optane #vSAN #homelab, but I came across a small snag: the Dell PowerEdge R730 I’m using as one of the #homelab servers takes low-profile PCIe cards, so I need to purchase a low-profile bracket from eBay so the card will fit nicely. I could have swapped the bracket from another network interface card, but that would leave that card unusable in the future!

Low profile bracket

flapping in the breeze

So one was purchased from the UK. I could have purchased one from China, but I’m in a hurry; I could also have left the PCIe card flapping around in the breeze, but I do like to do things proper!

R730 riser with Intel-DA2 fitted with low profile bracket

It’s a Proper Job!

Oh, for those that don’t know, it’s a lovely tipple from Cornwall – a Cornish IPA from St Austell Brewery!

 

 

 

 

Inexpensive HCL Certified 10GBe network interfaces for vSphere ESXi 7.0 and vSphere ESXi 8.0 #homelab

Sunday, July 2nd, 2023

I’ve been desperately searching for inexpensive 10GBe SFP+ network interface cards for use with my new #intel #optane #homelab #vSAN which support both VMware vSphere 7.0 (ESXi 7.0) and 8.0 (ESXi 8.0). The major reason is so I can use the same lab for vSphere 7.0 and then upgrade to vSphere 8.0.

You can use the VMware Hardware Compatibility List to find them, but it does take some searching and testing, so I’ve settled on the following network interface cards:

Dell 0Y40PH Broadcom 57810S Dual Port 10GbE SFP+ Network Card Low Profile Dell P/N: 0Y40PH – this uses the qfle3i driver in ESXi 7.x and 8.x.


Dell 0XYT17 Intel X520-DA2 Dual Port 10GB SFP+ NIC with SFP – this uses the ixgben driver for ESXi 7.x and 8.x.


I’ve tested both cards with ESXi 7.0 and ESXi 8.0.
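If you want to double-check which driver ESXi has actually bound to a card once it is installed, esxcli will tell you (the vmnic numbering below is just an example and will vary per host):

  esxcli network nic list               # shows each vmnic with its driver, MAC, and link state
  esxcli network nic get -n vmnic4      # detailed info for one NIC, including driver and firmware version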

What does surprise me is that it’s taken many months to find the first card above, which is present on the HCL and also functions with ESXi 7.0 and 8.0 without adding any additional drivers, i.e. it works out of the box!

We were scrapping many Dell R910s from the datacentre, and I was surprised after testing that the Dell 0XYT17 Intel X520-DA2 Dual Port 10GB SFP+ NIC with SFP – manufactured 10 years ago – still functions with ESXi 7.0 and ESXi 8.0 and is still present on the HCL today! Well done #intel for stopping these cards going to landfill!

10 years old and on the HCL!

This is a bargain card, and firmware is still available and can be updated from Dell! From memory, 21.x!

So ideal for an inexpensive #homelab

Argh, I’ve got one issue – more on that later!

#intel #optane demo units as part of the @vExpert program

Saturday, July 1st, 2023

I was able to take part in the fantastic offer of free #intel #optane demo units as part of the vExpert Program to create a #vSAN #homelab project, which I will document here on this blog. It has taken me a while to obtain all the parts for the #homelab BOM, so here goes… The #homelab will be based on VMware vSphere 7.0 and 8.0 vSAN.

This is one of the many benefits of the vExpert Program, so if you have an interest in VMware products, reach out to me as a vExpert Pro for help with applying to the program!

So many thanks to fellow #vExperts – Mr vSAN, Matt Mancini, and vCommunity Guy – for organising this.