I had an old but reliable HP Microserver N40L that had been running 24/7 since 2012 on ESXi 5.1. It hosted a couple of Windows Server 2003 R2 VMs (a DNS server and a web server), set up as a sort of encapsulation of dangerous attack vectors: it was just a standalone informational web server, and if it got hacked, it was easy to restore from a backup and restart. Normally it was not connected to the internal network except to pull content updates. But time passed, and my server got “tired”.
I wanted to run both DNS and web servers on at least Windows Server 2008 R2, and ideally on Windows Server 2012 R2, but the N40L started hanging and simply didn’t have the hardware resources. You can’t install anything more powerful in it than a 1.5 GHz AMD Turion™ II (2 cores / 2 threads) and 8 GB of RAM.
I understand that the HP Microserver Gen8 is not a magic pill either, but it takes 16 GB of memory and, for example, a 2.3/3.5 GHz Intel® Xeon® E3-1220L (2 cores / 4 threads) with ultra-low power consumption (about 17–20 W). You could even install, say, an Intel® Xeon® E3-1270 at 3.4/4.1 GHz (4 cores / 8 threads), but then don’t be surprised that the processor alone costs 1.5 times more than the Microserver :-).
HP Microserver Gen8 is the last HP microserver, which:
- can be upgraded;
- has an iLO card.
Unfortunately, a Gen9 never appeared, and as for the new Gen10… just read about it and you’ll see 🙂
The HP Microserver Gen8 is no longer sold officially, but you can still find one. I bought a used Gen8 (G2020T) and installed the HPE ESXi 5.5 U3 custom image on it.
However, it turned out there was more to it: the most interesting things happened afterwards.
Installed: Vmware-ESXi-5.5.0-Update3-3568722-HPE-550.9.6.5.9-Dec2016.iso
Disk driver version: scsi-hpvsa-5.5.0.100-1OEM.550.0.0.1331820
As it turned out, HPE had broken something in the disk subsystem driver for ESXi 5.5, and disk performance fell far short of what it should be. Moreover, as I found out later, the same problem occurs in the HPE ESXi 6.0, 6.5 and 6.7 images.
After talking to colleagues and searching the web, I realized the culprit was the driver that HPE bundles into its custom images starting with the ESXi 5.5 installer.
However, this problem can be solved. The internet community (https://homeservershow.com) managed to find a driver version that really does increase disk performance on the HP Microserver Gen8.
Driver version: scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820
You can download the driver for free from the official HPE website:
https://support.hpe.com/hpsc/swd/…b1dfc5314e02bc01b1436b
Type: Driver — Storage Controller
Version: 5.5.0-88.0(9 Sep 2014)
Operating System(s): VMware vSphere 5.5
File name: scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib (707 KB)
Now we need to install it; the procedure is described below. First, check the version of the installed driver, and if it differs, replace it with the right one.
Connect to the ESXi host console using PuTTY, log in as root and run this command:
esxcli software vib list | grep scsi
This is what I had prior to changing the driver:
esxcli software vib list | grep scsi
scsi-hpsa 5.5.0.124-1OEM.550.0.0.1331820 HPE VMwareCertified 2018-04-10
scsi-hpdsa 5.5.0.52-1OEM.550.0.0.1331820 Hewlett-Packard PartnerSupported 2018-04-10
scsi-hpvsa 5.5.0.100-1OEM.550.0.0.1331820 Hewlett-Packard PartnerSupported 2018-04-10
scsi-mpt2sas 15.10.06.00.1vmw-1OEM.550.0.0.1198610 LSI VMwareCertified 2018-04-10
scsi-bfa 3.2.6.0-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-bnx2fc 1.713.20.v55.4-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-bnx2i 2.713.10.v55.3-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-qla4xxx 644.55.37.0-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
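As a quick aid, a few lines of shell can pull the scsi-hpvsa version out of that listing and flag the known-slow build. This is my own sketch, not from the original procedure; the `check_hpvsa_version` helper and the version patterns it matches are my assumptions based on the versions discussed in this post:

```shell
#!/bin/sh
# Hypothetical helper: given one line of `esxcli software vib list` output,
# extract the scsi-hpvsa version and report whether it is the known-slow
# -100 build or the fast -88 build.
check_hpvsa_version() {
    version=$(printf '%s\n' "$1" | awk '/^scsi-hpvsa/ {print $2}')
    case "$version" in
        5.5.0-88OEM.*) echo "OK: fast -88 driver ($version)" ;;
        5.5.0.100-*)   echo "SLOW: replace this driver ($version)" ;;
        *)             echo "UNKNOWN: $version" ;;
    esac
}

# On a real host you would feed it live output:
#   esxcli software vib list | grep '^scsi-hpvsa' | \
#       while read -r line; do check_hpvsa_version "$line"; done
check_hpvsa_version "scsi-hpvsa 5.5.0.100-1OEM.550.0.0.1331820 Hewlett-Packard PartnerSupported 2018-04-10"
```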
Not the right one. Why does that matter? Here is what a disk performance test showed. It is not a rigorous benchmark, but the commands make clear what is being measured.
Run the following commands in the ESXi console:
cd /vmfs/volumes/[datastore]
time dd if=/dev/zero of=tempfile bs=8k count=1000000
Here is the result:
1000000+0 records in
1000000+0 records out
real 14m 12.62s
user 0m 12.23s
sys 0m 0.00s
Not so bad, is it?
Compare it with the results obtained for the same configuration with ESXi 5.1U3 installed:
1000000+0 records in
1000000+0 records out
real 17m 25.62s
user 0m 7.23s
sys 0m 0.00s
As you can see, there is some improvement over the previous ESXi version. But bear with me; a very different result is coming. Read this post to the end.
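For context, dd’s numbers convert to throughput easily: bs=8k with count=1000000 writes 8,192,000,000 bytes, so dividing by the “real” time gives MB/s. This back-of-the-envelope conversion is my own addition, not part of the original post:

```shell
#!/bin/sh
# dd wrote bs=8k * count=1000000 = 8,192,000,000 bytes in each run.
bytes=$((8192 * 1000000))

# Convert bytes and elapsed seconds to MB/s (decimal megabytes).
mbps() { awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1000000 }'; }

mbps "$bytes" 852.62    # ESXi 5.5, stock -100 driver: 14m12.62s -> ~9.6 MB/s
mbps "$bytes" 1045.62   # ESXi 5.1U3:                  17m25.62s -> ~7.8 MB/s
```

Single-digit MB/s sequential writes are dreadful for local disks, which is what the sarcasm above is getting at.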
So, let’s change the driver.
The procedure is quite simple. It is assumed that you have downloaded the driver from the HPE website via the link above.
- Stop all running VMs;
- Enable SSH if it is disabled;
- Copy scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib to /tmp (e.g. using WinSCP);
- Connect to the ESXi host console using PuTTY;
- Change to the folder you uploaded the file to, i.e. /tmp:
cd /tmp
- Copy the VIB file to the folder, from which it will be installed:
cp scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib /var/log/vmware/
- Enable Maintenance Mode of the host:
esxcli system maintenanceMode set --enable true
- Remove the current driver of the disk subsystem:
esxcli software vib remove -n scsi-hpvsa -f
- Install the right scsi-hpvsa-5.5.0-88OEM driver from the file:
esxcli software vib install -v file:scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --force --no-sig-check --maintenance-mode
- Restart ESXi, then disable Maintenance Mode, disable SSH (if needed) and start your virtual machines:
esxcli system maintenanceMode set --enable false
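Taken together, the steps above can be sketched as a single dry-run script. This consolidation is mine, not the author’s: it prints the commands instead of executing them (review the list, then run it on the host or remove the `echo`s), and the /tmp upload location is the one assumed above:

```shell
#!/bin/sh
# Dry-run sketch of the driver replacement procedure described above.
# Prints the commands in order rather than running them.
hpvsa_replace_commands() {
    vib=scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib
    echo "cp /tmp/$vib /var/log/vmware/"
    echo "cd /var/log/vmware"
    echo "esxcli system maintenanceMode set --enable true"
    echo "esxcli software vib remove -n scsi-hpvsa -f"
    echo "esxcli software vib install -v file:$vib --force --no-sig-check --maintenance-mode"
    echo "reboot"
    echo "# after the reboot:"
    echo "esxcli system maintenanceMode set --enable false"
}

hpvsa_replace_commands
```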
Easy? Yes, it is.
But you should always verify that the author wasn’t lying. Let’s make sure the driver version has changed:
esxcli software vib list | grep scsi
scsi-hpsa 5.5.0.124-1OEM.550.0.0.1331820 HPE VMwareCertified 2018-04-10
scsi-hpdsa 5.5.0.52-1OEM.550.0.0.1331820 Hewlett-Packard PartnerSupported 2018-04-10
scsi-hpvsa 5.5.0-88OEM.550.0.0.1331820 Hewlett-Packard PartnerSupported 2018-04-10
scsi-mpt2sas 15.10.06.00.1vmw-1OEM.550.0.0.1198610 LSI VMwareCertified 2018-04-10
scsi-bfa 3.2.6.0-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-bnx2fc 1.713.20.v55.4-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-bnx2i 2.713.10.v55.3-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
scsi-qla4xxx 644.55.37.0-1OEM.550.0.0.1331820 QLogic VMwareCertified 2018-04-10
Yes, it has changed to the right one. Then I ran the performance test again, and the result took me aback:
cd /vmfs/volumes/[datastore]
time dd if=/dev/zero of=tempfile bs=8k count=1000000
1000000+0 records in
1000000+0 records out
real 2m 6.73s
user 0m 5.21s
sys 0m 0.00s
That is almost SEVEN times faster than with the previous driver, and more than eight times faster than ESXi 5.1U3.
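The speedup ratios follow directly from the three “real” times above; this check is my own arithmetic, not the author’s:

```shell
#!/bin/sh
# Ratio of two elapsed times, one decimal place.
ratio() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f\n", a / b }'; }

ratio 852.62  126.73   # stock -100 driver vs -88 driver: ~6.7x
ratio 1045.62 126.73   # ESXi 5.1U3 vs -88 driver:        ~8.3x
```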
Forum users confirmed that the wrong driver also ships with ESXi 6.0 and 6.5; after replacing it with scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820, the disk subsystem worked as fast as in my last test.
In my opinion, this is a very convincing argument in favor of replacing the ESXi storage driver.
11 comments
Great finding!
I wonder how much slower or faster disks would be under Windows Hyper-V Server 2012 R2 or 2016 (both free, as ESXi is) on the same hardware?
Hi,
I’ve got a DL380p Gen8 affected by the same problem, and searching the web I found your page.
I installed the free ESXi 6.7 U2 (for Gen9 and later models) and tried to follow your guide, but I got stuck at the first step, because after typing “esxcli software vib list | grep scsi” I received:
elxiscsi 12.0.1188.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2019-08-19
scsi-hpdsa 5.5.0.66-1OEM.550.0.0.1331820 HPE PartnerSupported 2019-08-19
pvscsi 0.1-2vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-aacraid 1.1.5.1-9vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-adp94xx 1.0.8.12-6vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-aic79xx 3.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-fnic 1.5.0.45-3vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-ips 7.12.05-4vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-iscsi-linux-92 1.0.0.2-3vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-libfc-92 1.0.40.9.3-5vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-megaraid-mbox 2.20.5.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-megaraid-sas 6.603.55.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-megaraid2 2.00.4-9vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-mpt2sas 19.00.00.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-mptsas 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-mptspi 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
scsi-qla4xxx 5.01.03.2-7vmw.670.0.0.8169922 VMW VMwareCertified 2019-08-19
shim-iscsi-linux-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2019-08-19
shim-iscsi-linux-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2019-08-19
As you can see, there isn’t any “scsi-hpvsa” driver, only the “scsi-hpdsa” driver…
Can you help me?
Thanks a lot.
Marco
Just found this page after getting bad performance with ESXi 6.5 on DL380 with P400 smart array. Changing the driver sorted the problem.
On my DL360 Gen8 the latest “HPE VMware Update 3” already has a newer driver, 5.5.0.102-1OEM.550.0.0.1331820. In this case performance is the same with the -88 and -102 versions. It seems the issue only existed up to the -100 version.
Pete
Just found this page after getting bad performance with ESXi 6.7 U3 on an HP Microserver Gen8 with 16 GB RAM and a Xeon E3-1265L v3.
The problem still remains.
Thanks, I cut my transfer from 23 minutes down to 4 🙂
Thanks so much for this detailed article. I just updated our ML350p server to 6.0.0, since that is the latest VMware version the hardware officially supports. Speeds inside the VMs were abysmal, 11 MB/s max (802.11b, anyone?). After the fix I never really benchmarked, but the SSDs loafed along at something like 6 percent utilization. A huge change from 100% all the time.
You saved me a ton of time trying to search in a forum as you did, and for that, I figured with the effort you put in, you should at least get a thanks. }:0)
Not mine, but it fixed my issue after the update! Taken from Johan
If you run into a problem where the RAID arrays are no longer visible, don’t worry; the data is still there. It’s just a matter of convincing the hpvsa driver to load properly. One issue I found is that under Storage > Adapters my hard drives were listed, but as standard SATA/AHCI drives rather than RAID drives. The ‘drivers’ column showed they were not using the hpvsa driver I had downgraded to, but vmw-ahci instead. To resolve that, I disabled the vmw-ahci driver with this command:
esxcli system module set --enabled=false --module=vmw_ahci
Then I uninstalled the hpvsa -88 driver and rebooted.
Then I reinstalled the hpvsa -88 driver and rebooted again.
After that hpvsa loaded properly and my datastores became visible again.
Also, I found that I had to update to 20170404001-standard (build 5310538) for this to work at all; on an older build it wouldn’t load the hpvsa driver no matter what I did.
https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/
Didn’t work for me.
ESXi 6.7.
Still the same slow performance. Any ideas?
OK, it worked for me now!
The server needed 5 seconds with the power fully cut :)
Then I ran all the commands again and it worked.
Strange thing.
The HDD now takes 2 minutes instead of 11.
The SSD on port 5 still takes 3 minutes instead of 12.
Why is the SSD still slower?