After much anticipation, the ComputeBlade has finally arrived! This innovative piece of hardware has been making waves in the compact computing and homelab community since its inception as a Kickstarter project, which closed in February 2023. While the Kickstarter campaign was highly successful, the journey to delivery has been anything but smooth.
The ComputeBlade Journey
For those unfamiliar, the ComputeBlade is an ambitious project by Uptime Lab designed to bring powerful, modular computing to a compact blade-style chassis. It offers support for Raspberry Pi Compute Modules (CM4) and similar SBCs, providing a platform for homelab enthusiasts, developers, and small-scale edge computing setups.
However, the project has faced several setbacks that delayed delivery for many backers:
Russian Screws: Supply chain disruptions included sourcing specific screws, which became problematic because of geopolitical tensions.
PoE (Power over Ethernet) Issues: The team encountered complications ensuring consistent and safe PoE functionality.
Certification Challenges: Meeting various regulatory standards across regions added another layer of complexity.
Despite these hurdles, I opted to purchase my ComputeBlades at retail, as Kickstarter backers have yet to fully receive their units.
For those interested in the Kickstarter campaign details, you can check it out here.
First Impressions
The retail packaging was sleek, compact, and felt premium. The ComputeBlade itself is a marvel of design, seamlessly blending form and function. Its modularity and expandability immediately stand out, with features such as:
Support for Raspberry Pi CM4: Making it a natural fit for virtualization, containerization, and other development projects.
Hot-Swappable Design: Simplifies maintenance and upgrades.
Integrated Networking: Includes options for advanced network setups, perfect for a homelab.
What’s Next?
Now that the ComputeBlade has arrived, I’m eager to put it through its paces. Over the next few weeks, I’ll be:
Testing Homelab Applications: From running lightweight virtual machines to hosting containers using Docker or Kubernetes.
Evaluating Networking Features: Especially the PoE capabilities and how it handles edge computing scenarios.
Sharing Configurations: I’ll document how I integrate it into my existing homelab setup.
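As a first step for the container testing above, a quick smoke test is usually enough to prove a blade is ready for real workloads. This is only a sketch: the image, container name, port, and the `RUN_ON_BLADE` guard flag are my own choices for illustration, not anything ComputeBlade-specific.

```shell
#!/bin/sh
# Smoke-test a freshly provisioned blade by starting one throwaway container.
# RUN_ON_BLADE=1 is my own convention to avoid running this accidentally.
blade_smoke_test() {
  if [ "$RUN_ON_BLADE" = "1" ]; then
    docker run -d --name blade-smoke -p 8080:80 nginx:alpine   # tiny ARM-friendly image
    docker ps --filter name=blade-smoke                        # confirm it is running
  else
    echo "dry run: docker run -d --name blade-smoke -p 8080:80 nginx:alpine"
  fi
}
blade_smoke_test
```

If that container comes up and answers on port 8080, the blade's CM4 is ready for heavier Kubernetes experiments.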
Closing Thoughts
While the journey of the ComputeBlade from Kickstarter to retail has been rocky, the product itself seems poised to live up to its promise. If you’ve been waiting for a scalable and compact compute platform, the ComputeBlade might just be the solution you’ve been looking for.
Stay tuned for my follow-up posts where I dive deeper into its performance and practical applications. If you’re also experimenting with the ComputeBlade, feel free to share your experiences in the comments or reach out via social media.
As an avid enthusiast of VMware’s innovations, I’m diving headfirst into the ESXi ARM Fling v2.0, which is built on the robust VMware vSphere Hypervisor ESXi 8.0.3b codebase. The ARM architecture has always intrigued me, and with this latest version, VMware has pushed the boundaries of what’s possible with ESXi on ARM devices. It’s a playground full of potential for anyone experimenting with lightweight, power-efficient infrastructures.
The Turing Pi Journey
After much anticipation, my Turing Pi Mini ITX boards have arrived! These boards are compatible with the Raspberry Pi Compute Module 4, offering a modular, scalable setup perfect for ARM experimentation. With a few Compute Module 4s ready to go, I’m eager to bring this setup to life. However, finding a suitable case for the Turing Pi board has proven to be a bit of a challenge.
Case Conundrum
While Turing Pi has announced an official ITX case for their boards, it’s currently on preorder and comes with a hefty price tag. For now, I’ve decided to go with a practical and versatile option: the Streamcom Mini ITX OpenBench case. Its open-frame design is functional, and it’ll keep the board accessible during testing and configuration.
I’m also considering crafting my own custom case. Using laser-cut wood or acrylic is an appealing option, offering the opportunity to create something unique and tailored to my specific requirements. But for now, the OpenBench case will do nicely as I explore the ESXi ARM Fling.
Why ESXi ARM Fling v2.0?
The ESXi ARM Fling project is an exciting venture for anyone who loves to experiment with virtualization. Running ESXi on ARM hardware offers several advantages:
Energy efficiency: ARM boards consume far less power compared to traditional x86 systems.
Cost-effectiveness: Affordable hardware like the Raspberry Pi Compute Module 4 makes it accessible to a wider audience.
Flexibility: The compact form factor of ARM devices is ideal for edge computing, IoT, or even small-scale home labs.
The v2.0 update introduces enhanced support, better performance, and bug fixes, making it an excellent choice for exploring the ARM ecosystem.
What’s Next?
With the hardware in hand and the ESXi ARM Fling v2.0 ready to install, I’m planning to dive into:
Setting up and configuring the Turing Pi board with ESXi.
Testing the system’s stability, performance, and scalability using multiple Raspberry Pi Compute Modules.
Exploring practical use cases, such as lightweight Kubernetes clusters or edge computing applications.
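For the lightweight Kubernetes idea above, k3s is a common choice on ARM boards. The sketch below bootstraps a single-node server on one of the Compute Module VMs; `get.k3s.io` is the official k3s installer, while the `RUN_INSTALL` guard flag is my own convention so the commands only fire when you mean them to.

```shell
#!/bin/sh
# Bootstrap a single-node k3s server on an ARM VM (sketch, run as root).
k3s_bootstrap() {
  if [ "$RUN_INSTALL" = "1" ]; then
    curl -sfL https://get.k3s.io | sh -   # installs and starts the k3s service
    k3s kubectl get nodes                 # confirm the node registers as Ready
  else
    echo "dry run: curl -sfL https://get.k3s.io | sh -"
  fi
}
k3s_bootstrap
```

Additional Compute Modules can then join as agents pointed at this server, which is exactly the scalability scenario the Turing Pi board is built for.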
I’ll share updates on the build process, challenges, and performance insights in future posts. For now, I’m excited to get started and see what this setup can achieve.
Stay tuned for more! If you’ve experimented with the ESXi ARM Fling or have tips for working with the Turing Pi board, I’d love to hear from you.
In this video presentation, which is part of the Hancock’s VMware Half Hour series, I will show you HOW TO: Fix Synchronous Exception at 0x00000000XXXXXXX on VMware vSphere Hypervisor 7.0 (ESXi 7.0 ARM) on a Raspberry Pi 4.
It has been well documented that the Raspberry Pi 4 UEFI Firmware Image can cause this fault which renders the UEFI boot image corrupt. See here https://github.com/pftf/RPi4/issues/97
The UEFI firmware image used in the lab in this video is v1.37; it is debated as to whether this has been fixed in later releases, and some suggest rolling back to v1.33!
For the sake of continuity, I’ve included the previous EE videos and articles I’ve created here.
In this video I’m going to show you HOW TO: Update the VMware vSphere Hypervisor 7.0 ARM Edition (ESXi 7.0 ARM edition) from v1.12 Build 7.0.0-1.12.21447677 to v1.15 Build 22949429 on a Raspberry Pi 4. The method used is based on this article and video.
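For reference, the offline-bundle update path on ESXi uses `esxcli software profile update`. The sketch below shows the general shape; the depot path and profile name are placeholders (list the profiles in your bundle first), and the `RUN_ON_ESXI` guard flag is my own convention, since these lines must be run in an SSH session on the ESXi host itself.

```shell
#!/bin/sh
# Sketch of a depot-based ESXi update (paths and profile name are placeholders).
DEPOT="/vmfs/volumes/datastore1/ESXi-arm-offline-bundle.zip"
esxi_update() {
  if [ "$RUN_ON_ESXI" = "1" ]; then
    esxcli software sources profile list -d "$DEPOT"          # discover the profile name
    esxcli software profile update -d "$DEPOT" -p "ESXi-7.0-arm-standard"
    reboot                                                    # reboot to finish the update
  else
    echo "dry run: esxcli software profile update -d $DEPOT -p <profile-name>"
  fi
}
esxi_update
```

Put the host into maintenance mode and power off its VMs before applying the profile update.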
The Synchronous Exception at 0x0000000037101434 in the UEFI boot firmware v1.34 is still an issue today and has not been fixed. Below are messages received on Twitter from the engineers who have worked on ESXi ARM. v1.35 is the latest UEFI firmware available from here.
Andrei Warkentin (@WhatAintInside)
“yeah this is a long-standing SD card corruption bug… never quite identified, maybe some command needs to be done on the way out to flush internal card buffers before the loss of power?”
Cyprien Laplace (@cypou)
I think you only need to replace the “RPI_EFI.fd” file from the boot partition. I forgot this bug existed, as all my Pis download the UEFI files using tftp.
(thus no corruption possible, but no change can be saved either)
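Cyprien’s suggestion of replacing just the RPI_EFI.fd file can be done from any machine that can mount the SD card. This is a sketch only: the mount point and extracted-firmware directory are assumptions for this example, not fixed paths.

```shell
#!/bin/sh
# Restore a corrupt RPI_EFI.fd on the SD card's FAT boot partition (sketch).
BOOT_MNT="/mnt/rpi-boot"        # where the SD card's first (FAT) partition is mounted
FW_DIR="./RPi4_UEFI_Firmware"   # directory with the extracted UEFI firmware release
restore_uefi() {
  if [ -f "$FW_DIR/RPI_EFI.fd" ] && [ -d "$BOOT_MNT" ]; then
    cp "$FW_DIR/RPI_EFI.fd" "$BOOT_MNT/RPI_EFI.fd"
    sync                        # flush writes before removing the card
    echo "RPI_EFI.fd restored"
  else
    echo "mount the boot partition at $BOOT_MNT and extract the firmware first"
  fi
}
restore_uefi
```

The rest of the boot partition (config.txt, overlays, the ESXi boot files) is left untouched, which is why this is quicker than reflashing the whole card.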
In this video presentation which is part of the Hancock’s VMware Half Hour HOW TO Video Series I will show you HOW TO: Perform storage performance tests on VMware vSphere vSAN, using the VMware Hyper-converged Infrastructure Benchmark fling (HCIBench).
HCIBench is a storage performance testing automation tool that simplifies and accelerates customer Proof of Concept (POC) performance testing in a consistent and controlled way. VMware vSAN Community Forum provides support for HCIBench.
The storage devices we are using in this video are the Intel® Optane™ SSD DC P4800X Series 375GB, 2.5in PCIe x4, 3D XPoint™, but this procedure can be used to add any compatible storage devices in ESXi to a vSAN datastore.
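HCIBench ships as an OVA appliance, so one way to get it into the lab is ovftool. The sketch below shows the general shape of the deployment; the appliance filename, vCenter path, datastore, and network names are all placeholders for this example, and the `RUN_DEPLOY` guard flag is my own convention.

```shell
#!/bin/sh
# Deploy the HCIBench appliance with ovftool (sketch; all names are placeholders).
deploy_hcibench() {
  if [ "$RUN_DEPLOY" = "1" ]; then
    ovftool --acceptAllEulas --name=HCIBench \
      --datastore=vsanDatastore \
      --net:"Management Network"="VM Network" \
      HCIBench.ova 'vi://administrator@vsphere.local@vcenter.lab/DC/host/Cluster'
  else
    echo "dry run: ovftool --name=HCIBench HCIBench.ova vi://<vcenter>/<cluster>"
  fi
}
deploy_hcibench
```

Once deployed, the benchmark itself is configured and launched from the appliance’s web UI rather than the command line.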
This video follows on from the previous videos in this series.
If you are creating a design for VMware vSphere vSAN for a Production environment, please ensure you read the VMware Cloud Foundation Design Guide 01 JUN 2023 – this should be regarded as The Bible!
So I recently blogged here – about the network interfaces I’ve chosen for my #intel #optane #vSAN #homelab, but I came across a small snag: the Dell PowerEdge R730 I’m using as one of the #homelab servers takes low-profile PCIe cards, so I need to purchase a low-profile bracket from eBay so the card will fit nicely. I could have swapped the bracket from another network interface card, but that would leave that card unusable in the future!
Low profile bracket
flapping in the breeze
So I purchased one from the UK. I could have ordered one from China, but I’m in a hurry, and while I could leave the PCIe card flapping in the breeze, I do like to do things properly!
R730 riser with Intel-DA2 fitted with low profile bracket
I was able to take part in the fantastic offer of free #intel #optane demo units as part of the vExpert Program, to create a #vSAN #homelab project which I will document here on this blog. It has taken me a while to obtain all the parts for the #homelab BOM, so here goes… the #homelab will be based on VMware vSphere 7.0 and 8.0 vSAN.
This is just one of the many benefits of the vExpert Program, so if you have an interest in VMware products, reach out to me as a vExpert Pro for help with applying to the program!