It’s time to discuss a disturbing trend that has been going on for some time: more and more of the large “retail” hosts are providing “Dedicated Servers” inside some kind of virtualization layer.
I’ve seen several clients who migrated inbound to our datacenter from very popular US-based datacenters, who had “dedicated servers” at their provider that, quite frankly, weren’t dedicated servers. While I understand that some companies may wish to provide a “virtualization layer” for easy backups, cheaper cPanel licensing, or easier “management” (is it really that hard to use an iLO these days?), the customer needs to know what he’s getting.
The performance of a VM within a system is _NOT_ the same as the performance of the system itself, especially considering that 99% of these budget dedicated servers are provisioned on SATA disks, usually without a quality RAID controller (with battery-backed write cache).
It’s common knowledge that 7200RPM SATA disks are slow. It’s also common knowledge that if you give a customer a 1TB disk, he’ll likely use all of it: cPanel backups, Plesk backups, and the like will easily fill the entire disk if the customer’s dataset is 200-300GB. You then end up with a filesystem holding hundreds of thousands of files and GBs of archives, not to mention the original dataset of /home, mail, websites, and databases, all inside a 1TB “Virtual Disk”. (Don’t forget, this is also hosted on a 7200RPM disk.)
The consequences of all of the above for your customers include, but are certainly not limited to:
* Horrible disk performance – resulting in high loads and very slow websites
* Overall performance significantly slower than the original “bare metal” server could have delivered
* No way to detect disk failures, hardware failures, or PSU failures (assuming the PSUs are even redundant…)
So, what does all this mean for the customer who’s on the “Virtualized Dedicated Server”?
* We migrated a customer who was with one of the “famous” hosts here, inside a VM on an E3 system with what appeared to be one or two 1TB SATA 7200RPM disks. The problem was that it was nearly impossible to migrate him: the maximum I/O we could get out of that VM (“dedicated server”) was 20-30 tps. As a result, backing up his 80GB domain directly was impossible, and we had to fall back to rsync with bwlimit flags to get the data out of there. It took almost a week to copy the filesystem for a single domain.
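The throttled pull we fell back to looked roughly like the sketch below; the hostname and path are hypothetical, and note that rsync’s --bwlimit takes KB/s, so 5000 keeps the transfer near 5 MB/s, gently below the VM’s I/O ceiling:

```shell
# Sketch of the throttled rsync pull (host/path hypothetical).
# -a preserves attributes, -H hard links; --partial lets us resume after stalls.
build_rsync_cmd() {
  local limit_kbps="$1" src="$2" dst="$3"
  echo "rsync -aH --numeric-ids --partial --bwlimit=${limit_kbps} ${src} ${dst}"
}
# Print the command we'd run, so the flags are easy to inspect first:
build_rsync_cmd 5000 "root@old-host:/home/customer/" "/home/customer/"
```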
After the migration was completed, the customer couldn’t believe what a real dedicated server felt like; his site was instantly responsive where it had been totally sluggish before. He asked for proof that the old system was in fact a VM, so he could confront his prior host.
With all services shut down and cPanel/Apache/MySQL completely offline, just imagine: hdparm speeds of 40-45MB/sec, an iometer “All in one” profile with 4 workers producing under 200 IOPS, and the best part: simply compressing a single large archive he had with pigz (multithreaded gzip) took 20X longer than it took on a single L5640 proc.
Yes, 20X, ran it over and over again.
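If you want to run the same sanity checks yourself, a minimal sketch follows; the commands at the top are the ones we used, and the 80 MB/s threshold is my own rough rule of thumb, not an industry figure:

```shell
# Checks to run with services stopped:
#   hdparm -t /dev/sda         -> buffered sequential read, MB/s
#   time pigz -k big-file.tar  -> multithreaded compression wall time
# Rule of thumb (mine): even a lone healthy 7200rpm SATA disk should manage
# well over 80 MB/s sequential, so anything far below that smells like
# shared, oversubscribed disks behind a hypervisor.
suspect_shared_disk() {
  # $1 = MB/s reported by hdparm -t
  awk -v mbps="$1" 'BEGIN { exit (mbps < 80 ? 0 : 1) }'
}
suspect_shared_disk 45 && echo "45 MB/s sequential: not what local SATA looks like"
```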
This means that his “E3 dedicated server” was obviously not a dedicated server; it was a VM, with several neighboring “dedicated servers” sharing the same 7200RPM disks and the same simple E3 proc. No system with local disks and truly dedicated resources would deliver hdparm speeds of 40-50MB/s and capped CPU limits.
This whole situation is totally disappointing and needs to stop now.
Customers, log on to your dedicated servers and find out whether you’re being sold legitimate hardware or not:
* dmidecode – will provide you with a listing of your hardware within the system
* lspci – will also provide you with a quick and simple listing of pci devices in your machine
Obviously, if you see something like /dev/vda or /dev/xvda, you know you’re not on a dedicated server.
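A couple of quick checks can be scripted, too. The sketch below greps an lspci listing for well-known hypervisor device names; the pattern is just my own list of common signatures, not an exhaustive one:

```shell
# Heuristic: does an lspci listing mention well-known hypervisor devices?
looks_virtual() {
  printf '%s' "$1" | grep -qiE 'virtio|qemu|vmware|virtualbox|xen|hyper-v'
}
looks_virtual "$(lspci 2>/dev/null)" && echo "Hypervisor devices found: this is a VM"
# On systemd machines, systemd-detect-virt prints the hypervisor name, or "none":
systemd-detect-virt 2>/dev/null || true
```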
Furthermore, while it’s true that the overhead of any modern virtualization layer is low, that is CERTAINLY NOT a blanket statement, and it’s not valid in cases where high disk I/O queue depths are present. There is no reason for your dedicated server to be slower than it has to be.
It makes sense to run XenServer or KVM on a real server, say an HP DL380 or similar with 6-8 disks in RAID 10 or RAID 50, but it just doesn’t make sense to virtualize lowly E3 hardware with plain SATA disks.
For example, this is a “VPS” sold as a ‘Dedicated Server’ (really?):
root@some-dedicated-client [~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:06.0 RAM memory: Red Hat, Inc Virtio memory balloon
root@some-dedicated-client [~]#
And the matching dmidecode output:
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: Red Hat
Product Name: KVM
Version: RHEL 6.3.0 PC
Serial Number: Not Specified
UUID: 05D2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx29
Wake-up Type: Power Switch
SKU Number: Not Specified
Family: Red Hat Enterprise Linux
This is a real dedicated server:
00:00.0 Host bridge: Intel Corporation 5500 I/O Hub to ESI Port (rev 13)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
00:08.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 8 (rev 13)
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 13)
00:0a.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 10 (rev 13)
00:0d.0 Host bridge: Intel Corporation Device 343a (rev 13)
00:0d.1 Host bridge: Intel Corporation Device 343b (rev 13)
00:0d.2 Host bridge: Intel Corporation Device 343c (rev 13)
00:0d.3 Host bridge: Intel Corporation Device 343d (rev 13)
00:0d.4 Host bridge: Intel Corporation 7500/5520/5500/X58 Physical Layer Port 0 (rev 13)
00:0d.5 Host bridge: Intel Corporation 7500/5520/5500 Physical Layer Port 1 (rev 13)
00:0d.6 Host bridge: Intel Corporation Device 341a (rev 13)
00:0e.0 Host bridge: Intel Corporation Device 341c (rev 13)
00:0e.1 Host bridge: Intel Corporation Device 341d (rev 13)
00:0e.2 Host bridge: Intel Corporation Device 341e (rev 13)
00:0e.3 Host bridge: Intel Corporation Device 341f (rev 13)
00:0e.4 Host bridge: Intel Corporation Device 3439 (rev 13)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 13)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.3 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
01:03.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)
01:04.0 System peripheral: Compaq Computer Corporation Integrated Lights Out Controller (rev 03)
01:04.2 System peripheral: Compaq Computer Corporation Integrated Lights Out Processor (rev 03)
01:04.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller
01:04.6 IPMI SMIC interface: Hewlett-Packard Company Integrated Lights-Out Standard KCS Interface
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711E 10-Gigabit PCIe
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711E 10-Gigabit PCIe
0c:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
Furthermore, when vendors tell you that you have RAID, ask them to show you the monitoring utility and how it works, so that you can monitor your own RAID. What good is a RAID 1 if you don’t know that one of the disks has failed? A second disk failure will result in data loss.
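For Linux software RAID you can at least watch /proc/mdstat yourself; hardware RAID needs the vendor tool instead (on HP Smart Array controllers like the one above, that’s hpacucli/ssacli). A minimal sketch of spotting a degraded mirror from an mdstat line:

```shell
# In /proc/mdstat, the status brackets show one letter per member disk:
# [UU] = both up, [U_] = one member failed. An "_" means degraded.
md_degraded() {
  printf '%s' "$1" | grep -qE '\[[U_]*_[U_]*\]'
}
md_degraded "md0 : active raid1 sdb1[1] sda1[0]  [2/1] [U_]" && echo "RAID degraded!"
```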
Not to mention another point that is too often neglected: do you have valid, working backups outside your dedicated server?
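If the answer is no, even a dumb nightly pull from a separate machine is better than nothing. A minimal sketch, with a hypothetical host and paths:

```shell
# crontab entry on a *separate* backup box: pull /home nightly at 03:00,
# so a dead or compromised server can't take its own backups down with it.
0 3 * * * rsync -aH --delete root@myserver.example.com:/home/ /backups/myserver/home/
```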