KVM and I/O problems
Posted on Thu, 05 Jul 2012 by Alex
KVM and PostgreSQL problem:
Slow I/O with Software RAID 1 and LVM2.
So we had some trouble with a Software RAID 1 and our KVM virtual machines: the write and read speeds were just bad. But this only happened with parallel write/read requests, so a single-threaded test like dd or even bonnie didn't show the problem.
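To see why a single dd run hides the problem, you can generate parallel write load instead. This is only a rough sketch, not the exact test we ran; the target path, file count, and sizes are illustrative assumptions you should adapt to the disk under test.

```shell
# Sketch: several dd writers running concurrently, unlike a single dd run.
# TARGET, the number of writers, and the file sizes are illustrative assumptions.
TARGET=${TARGET:-/tmp}
for i in 1 2 3 4; do
    # conv=fdatasync forces the data out before each dd exits
    dd if=/dev/zero of="$TARGET/ddtest.$i" bs=1M count=8 conv=fdatasync 2>/dev/null &
done
wait                      # all four writers run at the same time
rm -f "$TARGET"/ddtest.*
echo "parallel write test finished"
```

With the elevator problem described below, the aggregate throughput of the parallel writers collapses even though each dd on its own looks fine.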
The Linux kernel uses an elevator (the I/O scheduler) to reorder and merge disk requests before they hit the hardware. This approach is great, but the fatal flaw is that it assumes a single physical disk, attached to a single physical SCSI controller in a single physical host. How does the elevator algorithm know what to do when the disk is actually a RAID array? Does it? Or, what if that one Linux kernel isn't the only kernel running on a physical host? Does the elevator mechanism still help in virtual environments?
No, no it doesn’t. Hypervisors have elevators, too. So do disk arrays. Remember that in virtual environments the hypervisor can’t tell what is happening inside the VM. It’s a black box, and all it sees is the stream of I/O requests that eventually get passed to the hypervisor. It doesn’t know if they got reordered, how they got reordered, or why. It doesn’t know how long the request has been outstanding. As a result it probably won’t make the best decision about handling those requests, adding latency and extra work for the array. Worst case, all that I/O ends up looking very random to the disk array.
The fix is to change the I/O scheduler of the SCSI disk backing the RAID 1. On a live system you can disable the elevator by switching to noop:
echo noop > /sys/block/sda/queue/scheduler
(replace sda with the device backing your RAID)
To make this permanent across reboots, add the elevator=noop option to the kernel line in /etc/grub.conf.
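A grub.conf entry with the parameter added might look like the following; the title, kernel version, and root device are illustrative, only `elevator=noop` matters here:

```
# /etc/grub.conf (GRUB legacy) -- illustrative entry
title Linux
    root (hd0,0)
    kernel /vmlinuz ro root=/dev/mapper/vg-root elevator=noop
    initrd /initrd.img
```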
Sources:
- http://www.gnutoolbox.com/linux-io-elevator/
- http://blog.bodhizazen.net/linux/improve-kvm-performance/
- http://lonesysadmin.net/2008/02/21/elevatornoop/
- https://www.redhat.com/magazine/008jun05/features/schedulers/
(This is an imported feed item. You can read the original item at http://blog.akendo.eu/kvm-io-problems/)