Linux LVM performance measurement
Modern Linux LVM offers good facilities for maintaining snapshots of existing logical volumes. Unlike NetApp's "Write Anywhere File Layout" (WAFL), Linux LVM uses "Copy-on-Write" (COW) to implement snapshots. The process, in general, is described in this pdf document.
I have run several small tests, just to get a real-life estimate of the actual performance impact this COW method can cause.
Server details:
1. CPU: 2x Xeon 2.8GHz
2. Disks: /dev/sda – system disk, left untouched; /dev/sdb – used for the LVM; /dev/sdc – used for the LVM
3. Mount: LV is mounted (and remains mounted) on /vmware
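For reference, the layout described above can be reproduced with something like the following. The volume group name, LV name, sizes and filesystem are my assumptions – the post does not record them:

```shell
# Assumed LVM setup (vg0, the "vmware" LV name, sizes and ext3 are illustrative).
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb           # tests 1-2 use /dev/sdb only
vgextend vg0 /dev/sdc           # /dev/sdc is available for the later tests
lvcreate -L 20G -n vmware vg0
mkfs.ext3 /dev/vg0/vmware
mount /dev/vg0/vmware /vmware   # LV stays mounted here for all tests
```

These commands need root and real block devices, so treat them as a sketch of the setup rather than a copy-paste script.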
Results:
1. No snapshot, Using VG on /dev/sdb only:
# time dd if=/dev/zero of=/vmware/test.2GB bs=1M count=2048
2048+0 records in
2048+0 records out
real 0m16.088s
user 0m0.009s
sys 0m8.756s
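The snapshot for the next test would have been created roughly like this (snapshot name and size are my assumptions, not from the original post):

```shell
# Create a COW snapshot of the mounted LV. With no physical volume
# specified, LVM allocates the snapshot's COW area from free extents
# in the VG - here that means the same disk, /dev/sdb.
lvcreate -s -L 5G -n vmware-snap /dev/vg0/vmware
```

Every first write to an origin chunk now forces a read of the old data plus a write to the COW area, which is where the slowdown below comes from.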
2. With snapshot on the same disk (/dev/sdb):
# time dd if=/dev/zero of=/vmware/test.2GB bs=1M count=2048
2048+0 records in
2048+0 records out
real 6m5.185s
user 0m0.008s
sys 0m11.754s
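`lvcreate` accepts a trailing list of physical volumes to restrict allocation, so the snapshot's COW area can be pinned to the second disk; a sketch (names again assumed):

```shell
# Place the snapshot's COW extents on /dev/sdc explicitly, so origin
# writes and COW writes hit different spindles.
lvcreate -s -L 5G -n vmware-snap /dev/vg0/vmware /dev/sdc
```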
3. With snapshot on 2nd disk (/dev/sdc):
# time dd if=/dev/zero of=/vmware/test.2GB bs=1M count=2048
2048+0 records in
2048+0 records out
real 5m17.604s
user 0m0.004s
sys 0m11.265s
4. Same as before, creating a new empty file on the disk:
# time dd if=/dev/zero of=/vmware/test2.2GB bs=1M count=2048
2048+0 records in
2048+0 records out
real 3m24.804s
user 0m0.006s
sys 0m11.907s
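Converting the four measured runs into throughput (2048 MB divided by the elapsed `real` time) makes the COW penalty easier to see:

```shell
# Approximate sequential write throughput for each run,
# using the elapsed times from the dd output above.
awk 'BEGIN {
    printf "no snapshot:           %.1f MB/s\n", 2048/16.088
    printf "snapshot, same disk:   %.1f MB/s\n", 2048/(6*60+5.185)
    printf "snapshot, second disk: %.1f MB/s\n", 2048/(5*60+17.604)
    printf "snapshot, new file:    %.1f MB/s\n", 2048/(3*60+24.804)
}'
```

Roughly 127 MB/s without a snapshot against 5–10 MB/s with one – more than an order of magnitude, even when the COW area sits on a separate disk.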
5. Removed the snapshot and created a 3rd file – the results were never recorded (see below):
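Dropping the snapshot is a one-liner (volume and snapshot names here are illustrative):

```shell
# Remove the snapshot; writes to the origin LV stop paying the COW penalty.
lvremove -f /dev/vg0/vmware-snap
```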
Actually – this is a long-forgotten post. I'll leave it online for the sake of history. Following these tests I had several disk failures. They were not caused by the LVM performance tests, but by insufficient and faulty cooling of the disk array.
When that was fixed and the system was restored to work, I never found the time to continue the tests. I might do it some time, still.
Ez