Posts Tagged ‘SSD’

Synology NAS and SSD Cache Part III – Is cache better for VMware vSphere (ESXi)? Confusing results!

Monday, April 11th, 2016

So in today’s crude and experimental research, I connected all our VMware vSphere Hypervisors (ESXi 5.5 build 1892794) to an NFS datastore presented to the ESXi hosts from a Synology NAS, and tried the following tests.
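For anyone recreating the setup, an NFS export can be mounted as a datastore straight from the ESXi shell. A minimal sketch, assuming a hypothetical NAS address and export path (substitute your own):

esxcli storage nfs add --host 192.168.1.10 --share /volume1/vmware --volume-name synology-nfs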

I deployed a small Windows 7 template onto the NFS datastore, with the following results:

  • No cache enabled – 3 minutes 27 seconds to deploy.
  • Read and write cache enabled – 2 minutes 40 seconds to deploy.

Time for some more testing. The template deployed to the datastore was converted to a virtual machine, and the following tests were performed inside the virtual machine using CrystalDiskMark 5.1.2:

  • NFS exported volume, no SSD cache on the Synology NAS.
  • NFS exported volume, read and write SSD cache on the Synology NAS.
  • NFS exported volume, read-only SSD cache on the Synology NAS.

A bunch of very confusing results! And every time I test, the results are similar.


Synology NAS and SSD Cache

Saturday, April 9th, 2016

I’ve recently been experimenting with SSDs (solid-state drives) to accelerate the spinning rust in my Synology NAS.

Recently DSM added an SSD cache option, which allows you to create a read-only cache with one SSD, or a read-write cache with two.

Here are some results, which I’ve graphed:

[Graph: NAS read and write performance, with and without SSD cache]

In my very quick and crude tests, write performance to the NAS roughly doubled. Read speed was very similar, even though the cache was “warmed up” before testing.

And here’s a video of the new Synology SSD Cache Read Hit Rate graphic, which looks a little like a graphic equalizer from the ’70s or ’80s, so I’ve dropped a music track in the background! I thought it only right to over-flange (distort!) the track, so you may want to turn down your volume!


The Roundabout, for Andy “The Return to the Mandelbrot sets!”

Thursday, February 20th, 2014

When I first started experimenting with computers in the early ’80s, I was fascinated by the Mandelbrot set, and spent many hours generating it on a BBC Micro. Later I added a second 6502 processor, then upgraded to a Master 128K with a Turbo co-processor, all in aid of more compute power! (I searched high and low for an Acornsoft GXR ROM to give me more colours, and finally got one from the Plymouth Polytechnic Computer Science Department!)

BBC Micro Mandelbrot

This week, I find myself with the following configuration:-

  • Two NVIDIA Tesla K40s (currently the most powerful, and most expensive, GPUs; used for compute only, with no video output!).
  • A 1 terabyte PCIe flash card (an SSD, but one that plugs straight into the motherboard!).

This is a real-time video of calculating the Mandelbrot set using CUDA 5.5 on the two NVIDIA Tesla K40s, a little faster than using a BBC Microcomputer!


HOW TO: Tag and Configure a storage device as a Solid State Disk (SSD) in VMware vSphere 5.0 or 5.1 (ESXi 5.0 or ESXi 5.1)

Thursday, November 29th, 2012

In VMware vSphere 5.x (ESXi 5.x) there is a new feature called Host Cache Configuration. This feature allows the VMware vSphere administrator to configure the ESXi 5.x host to use a cache on a solid-state disk (SSD) for the virtual machines’ swapfiles, for better performance, because an SSD has much lower latency than a traditional mechanical disk. This is also known in VMware administrator circles as Swap to Host Cache or Swap to SSD. Once Host Cache Configuration has been enabled, the virtual machines will swap to SSD, but this is not a true swapfile, and the entire virtual machine swapfile (.vswp) is not stored on the SSD.

However, not all SSD devices are correctly tagged as SSD. This tutorial shows how to tag a non-SSD storage device as SSD, if you want to experiment with Host Cache Configuration but do not have an SSD to hand. Be aware that tagging a non-SSD as an SSD is not supported by VMware.

The same procedure can be followed to correctly tag a real SSD that is not recognized by the VMware ESXi server.

With the current fall in prices for consumer SSDs, this can give a real performance boost to a VMware ESXi 5.x server which is short on memory. Consumer SSDs, e.g. the Kingston SSDNow V+200 (model SVP200S37A/60G), are generally cheaper than server memory; we recently purchased this model for £29.99.

The commands we will be using in this tutorial are esxcli commands. These can be executed in the ESXi shell, through the vMA, or with the remote esxcli version in PowerCLI. In this tutorial I’ll be logging into the ESXi server and executing the commands in the ESXi shell.
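If you would rather run the commands remotely with the vCLI version of esxcli, the same commands take a --server argument. A minimal sketch, assuming a hypothetical hostname (you will be prompted for the root password):

esxcli --server esxi01.example.com --username root storage nmp satp rule list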

1. Connect to the VMware vSphere Hypervisor (ESXi) or VMware vSphere vCenter Server

Using the VMware vSphere Client, login and connect to the ESXi server, using the IP address or hostname of the ESXi server, with root username and password credentials. If you have a VMware vSphere vCenter Server, you could also specify the IP address or hostname of the vCenter server.

Using the VMware vSphere Client, Login and Connect to the ESXi server

2. Check and record the storage device name to be tagged as a SSD

Check there is a VMFS volume already formatted on the storage device you want to present to the ESXi host as an SSD, and record the device name for use in Step 4.

Select Host > Configuration > Storage

storage device to configure as SSD

In the example above, the local storage device mpx.vmhba1:C0:T0:L0 is a local disk, formatted as VMFS5 with the datastore name datastore1. Record the storage device name mpx.vmhba1:C0:T0:L0.
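The same device names can also be read from the ESXi shell, which may be handier once you are logged in for the later steps; this simply lists what the vSphere Client already shows:

esxcli storage core device list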

3. Logon to ESXi console (shell) via PuTTY

Using PuTTY, a free telnet and SSH client, or another SSH client, login and connect to the VMware Hypervisor ESXi server, using the IP address or hostname of the server, with root username and password credentials.

putty SSH terminal session

logged in as root to ssh terminal session

4. Create a new SATP rule

At the console or SSH session, type the following command to create a new SATP rule

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T0:L0 --option=enable_ssd

using the device name recorded in Step 2 above. The console will simply return to the prompt, with no output. To check the rule has been created correctly, type the following command

esxcli storage nmp satp rule list | grep enable_ssd

and output similar to the following screenshot should be displayed

Confirmation of rule creation

confirming the creation of the rule.

5. Claim storage device

At the console or SSH session, type the following command

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

using the device name recorded in Step 2 above.

I have seen the following error message when trying to claim devices; if this occurs, either restart the server or use the “unclaim” command on the device.

  • Unable to unclaim path vmhba1:C0:T0:L0 on device mpx.vmhba1:C0:T0:L0. Some paths may be left in an unclaimed state. You will need to claim them manually using the appropriate commands or wait for peri

You can unclaim the device by specifying the device name:

esxcli storage core claiming unclaim --type device --device device_name
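For example, with the device recorded in Step 2, the unclaim followed by a fresh reclaim would look like this (the device name is the one used throughout this tutorial; substitute your own):

esxcli storage core claiming unclaim --type device --device mpx.vmhba1:C0:T0:L0
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0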

6. Reload the claim rules

I usually reload the claim rules and run the rules using the following commands:

esxcli storage core claimrule load
esxcli storage core claimrule run
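If you want to double-check what is loaded before running the rules, the current claim rules can be listed; this is an optional sanity check rather than part of the procedure:

esxcli storage core claimrule list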

7. Confirm device is Tagged as SSD

Use the following command at the console to check whether the device has successfully been tagged as an SSD:

esxcli storage core device list --device=mpx.vmhba1:C0:T0:L0

The following output will be displayed for the device.

local device tagged as SSD

Check the output states “Is SSD: true”.
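If you only want that one line, the output can be filtered with grep; a small convenience on top of the command above:

esxcli storage core device list --device=mpx.vmhba1:C0:T0:L0 | grep "Is SSD"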

You have successfully configured and tagged a local device as an SSD. If you now repeat Step 2 above, you will see the device is now shown as SSD.

storage device to configure or tagged as SSD
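If you later want to revert the change, the same rule can be removed and the device reclaimed again. A sketch using the device name from this tutorial; as with the tagging itself, this is an unsupported tweak:

esxcli storage nmp satp rule remove --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T0:L0 --option=enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0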

In my next article, I’ll show you how to configure Host Cache Configuration.

Further reading can be found in the VMware vSphere 5 Documentation Center: Tag Devices as SSD.


Tweaking HP ProLiant MicroServer BIOS to support 2 additional AHCI SATA Ports for VMware ESXi 4.1/5.0, SSD

Wednesday, August 24th, 2011

I’ve been experimenting with the HP ProLiant MicroServer N36L, extending it to support an additional two AHCI SATA ports in place of the standard IDE mode that the on-board SATA and eSATA ports offer. This will better support the use of SSDs in the future.

Storage controllers available to VMware vSphere 4.1 U1: note there are four AHCI SATA controllers, vmhba0, vmhba34, vmhba35, and vmhba36. These correspond to the “not supported hot plug” bays.

HP ProLiant MicroServer BIOS POST before tweak!

VMware ESXi 4.1 installed on HP ProLiant MicroServer before tweak!

and also two IDE controllers, vmhba1 and vmhba33.

VMware ESXi 4.1 installed on HP ProLiant MicroServer before tweak!

After tweaking… a total of six AHCI SATA ports: vmhba0, vmhba33, vmhba34, vmhba35, vmhba36, and vmhba37.

HP ProLiant MicroServer BIOS POST After tweak!

VMware ESXi 4.1 installed on HP ProLiant MicroServer After tweak!
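If you want to confirm the adapter list from the ESXi shell rather than from the vSphere Client, esxcfg-scsidevs can list the HBAs; the exact output format varies between ESXi builds:

esxcfg-scsidevs -a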

If you want more details, ping me an email or a tweet, and I’ll send you the BIOS.
