LVM dangers and caveats

Summary

Risks of using LVM:

  • Vulnerable to write caching issues with SSD or VM hypervisor
  • Harder to recover data due to more complex on-disk structures
  • Harder to resize filesystems correctly
  • Snapshots are hard to use, slow and buggy
  • Requires some skill to configure correctly given these issues

The first two LVM issues combine: if write caching isn’t working correctly and you have a power loss (e.g. PSU or UPS fails), you may well have to recover from backup, meaning significant downtime. A key reason for using LVM is higher uptime (when adding disks, resizing filesystems, etc), but it’s important to get the write caching setup correct to avoid LVM actually reducing uptime.

— Updated Dec 2019: minor update on btrfs and ZFS as alternatives to LVM snapshots

Mitigating the risks

LVM can still work well if you:

  • Get your write caching setup right in hypervisor, kernel, and SSDs
  • Avoid LVM snapshots
  • Use recent LVM versions to resize filesystems
  • Have good backups

Details

I’ve researched this quite a bit in the past having experienced some data loss associated with LVM. The main LVM risks and issues I’m aware of are:

Vulnerable to hard disk write caching failures caused by VM hypervisors, drive caching or old Linux kernels, and its more complex on-disk structures make data recovery harder – see below for details. I have seen complete LVM setups spanning several disks get corrupted without any chance of recovery; LVM plus hard disk write caching is a dangerous combination.

  • Write caching and write re-ordering by the hard drive are important for good performance, but blocks can fail to be flushed to disk correctly due to VM hypervisors, hard drive write caching, old Linux kernels, etc.
  • Write barriers mean the kernel guarantees that it will complete certain disk writes before the “barrier” disk write, to ensure that filesystems and RAID can recover in the event of a sudden power loss or crash. Such barriers can use a FUA (Force Unit Access) operation to immediately write certain blocks to the disk, which is more efficient than a full cache flush. Barriers can be combined with efficient tagged/native command queuing (issuing multiple disk I/O requests at once) to enable the hard drive to perform intelligent write re-ordering without increasing risk of data loss.
  • VM hypervisors can have similar issues: running LVM in a Linux guest on top of a VM hypervisor such as VMware, Xen, KVM, Hyper-V or VirtualBox can create similar problems to a kernel without write barriers, due to write caching and write re-ordering. Check your hypervisor documentation carefully for a “flush to disk” or write-through cache option (present in KVM, VMware, Xen, VirtualBox and others) – and test it with your setup. Some hypervisors such as VirtualBox have a default setting that ignores any disk flushes from the guest.
  • Enterprise servers with LVM should always use a battery backed RAID controller and disable the hard disk write caching (the controller has battery backed write cache which is fast and safe) – see this comment by the author of this XFS FAQ entry. It may also be safe to turn off write barriers in the kernel, but testing is recommended.
  • If you don’t have a battery-backed RAID controller, disabling hard drive write caching will slow writes significantly but make LVM safe. You should also use the equivalent of ext3’s data=ordered option (or data=journal for extra safety), plus barrier=1 to ensure that kernel caching doesn’t affect integrity. (Or use ext4, which enables barriers by default.) This is the simplest option and provides good data integrity at the cost of performance. (Linux changed the default ext3 option to the more dangerous data=writeback a while back, so don’t rely on the default settings for the FS.)
  • To disable hard drive write caching: add hdparm -q -W0 /dev/sdX for all drives in /etc/rc.local (for SATA) or use sdparm for SCSI/SAS. However, according to this entry in the XFS FAQ (which is very good on this topic), a SATA drive may forget this setting after a drive error recovery – so you should use SCSI/SAS, or if you must use SATA then put the hdparm command in a cron job running every minute or so (a sketch of both is given after this list).
  • To keep SSD / hard drive write caching enabled for better performance: this is a complex area – see section below.
  • If you are using Advanced Format drives i.e. 4 KB physical sectors, see below – disabling write caching may have other issues.
  • A UPS is critical for both enterprise and SOHO use but not enough to make LVM safe: anything that causes a hard crash or a power loss (e.g. UPS failure, PSU failure, or laptop battery exhaustion) may lose data held in hard drive caches.
  • Very old Linux kernels (2.6.x from 2009): There is incomplete write barrier support in very old kernel versions, 2.6.32 and earlier (2.6.31 has some support, while 2.6.33 works for all types of device target) – RHEL 6 uses 2.6.32 with many patches. If these old 2.6 kernels are unpatched for these issues, a large amount of FS metadata (including journals) could be lost by a hard crash that leaves data in the hard drives’ write buffers (say 32 MB per drive for common SATA drives). Losing 32 MB of the most recently written FS metadata and journal data, which the kernel thinks is already on disk, usually means a lot of FS corruption and hence data loss.
  • Summary: you must take care in the filesystem, RAID, VM hypervisor, and hard drive/SSD setup used with LVM. You must have very good backups if you are using LVM, and be sure to specifically back up the LVM metadata, physical partition setup, MBR and volume boot sectors. It’s also advisable to use SCSI/SAS drives as these are less likely to lie about how they do write caching – more care is required to use SATA drives.
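
A rough sketch of the hdparm approach mentioned above (device names are hypothetical – adjust them to your drives, and use sdparm instead for SCSI/SAS):

    # /etc/rc.local – disable the drive write cache at boot (SATA)
    hdparm -q -W0 /dev/sda
    hdparm -q -W0 /dev/sdb

    # /etc/cron.d/disable-write-cache – re-apply every minute, since a SATA drive
    # may silently re-enable its cache after error recovery
    * * * * *  root  for d in /dev/sd[ab]; do hdparm -q -W0 "$d"; done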

Keeping write caching enabled for performance (and coping with lying drives)

A more complex but performant option is to keep SSD / hard drive write caching enabled and rely on kernel write barriers working with LVM on kernel 2.6.33+ (double-check by looking for “barrier” messages in the logs).
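
A quick way to double-check is to grep the kernel log for barrier-related messages (exact wording and log location vary by kernel version and distro):

    # Look for messages about barriers being enabled, disabled or unsupported
    dmesg | grep -i barrier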

You should also ensure that the RAID setup, VM hypervisor setup and filesystem use write barriers (i.e. require the drive to flush pending writes before and after key metadata/journal writes). XFS does use barriers by default, but ext3 does not, so with ext3 you should use barrier=1 in the mount options, and still use data=ordered or data=journal as above.
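
For example, a minimal /etc/fstab line for an ext3 filesystem on an LV (device name and mount point are made up for illustration):

    /dev/myVGname/myLVname  /data  ext3  defaults,barrier=1,data=ordered  0  2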

SSDs are problematic because the use of write cache is critical to the lifetime of the SSD. It’s best to use an SSD that has a supercapacitor (to enable cache flushing on power failure, and hence enable cache to be write-back not write-through).

Advanced Format drive setup – write caching, alignment, RAID, GPT

  • With newer Advanced Format drives that use 4 KiB physical sectors, it may be important to keep drive write caching enabled, since most such drives currently emulate 512 byte logical sectors (“512 emulation”), and some even claim to have 512-byte physical sectors while really using 4 KiB.
  • Turning off the write cache of an Advanced Format drive may cause a very large performance impact if the application/kernel is doing 512 byte writes, as such drives rely on the cache to accumulate 8 x 512-byte writes before doing a single 4 KiB physical write. Testing is recommended to confirm any impact if you disable the cache.
  • Aligning the LVs on a 4 KiB boundary is important for performance but should happen automatically as long as the underlying partitions for the PVs are aligned, since LVM Physical Extents (PEs) are 4 MiB by default. RAID must be considered here – this LVM and software RAID setup page suggests putting the RAID superblock at the end of the volume and (if necessary) using an option on pvcreate to align the PVs. This LVM email list thread points to the work done in kernels during 2011 and the issue of partial block writes when mixing disks with 512 byte and 4 KiB sectors in a single LV.
  • GPT partitioning with Advanced Format needs care, especially for boot+root disks, to ensure the first LVM partition (PV) starts on a 4 KiB boundary (a quick alignment check is sketched below).
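
A rough way to check alignment (device names are hypothetical; parted and the LVM tools report this directly):

    # Check that partition 1 is aligned for the drive's physical sector size
    parted /dev/sda align-check optimal 1

    # Show where the PV data area starts, in 512-byte sectors (should be a multiple of 8)
    pvs -o +pe_start --units s /dev/sda1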

Harder to recover data due to more complex on-disk structures:

  • Any recovery of LVM data required after a hard crash or power loss (due to incorrect write caching) is a manual process at best, because there are apparently no suitable tools. LVM is good at backing up its metadata under /etc/lvm, which can help restore the basic structure of LVs, VGs and PVs, but will not help with lost filesystem metadata (a metadata backup/restore sketch follows this list).
  • Hence a full restore from backup is likely to be required. This involves a lot more downtime than a quick journal-based fsck when not using LVM, and data written since the last backup will be lost.
  • TestDisk, ext3grep, ext3undel and other tools can recover partitions and files from non-LVM disks but they don’t directly support LVM data recovery. TestDisk can discover that a lost physical partition contains an LVM PV, but none of these tools understand LVM logical volumes. File carving tools such as PhotoRec and many others would work as they bypass the filesystem to re-assemble files from data blocks, but this is a last-resort, low-level approach for valuable data, and works less well with fragmented files.
  • Manual LVM recovery is possible in some cases, but is complex and time consuming – see this example and this, this, and this for how to recover.
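
A minimal sketch of backing up and restoring the LVM metadata itself (VG name, devices and paths are hypothetical, and the UUID placeholder must be the old PV's UUID taken from the backup file – this restores the LVM structures only, not filesystem contents):

    # Save the VG metadata explicitly (LVM also keeps copies under /etc/lvm/backup)
    vgcfgbackup -f /root/lvm-meta-myVGname myVGname

    # After replacing a failed disk: recreate the PV with its old UUID, then restore the VG
    pvcreate --uuid <old-PV-UUID> --restorefile /etc/lvm/backup/myVGname /dev/sdb1
    vgcfgrestore -f /etc/lvm/backup/myVGname myVGname
    vgchange -ay myVGname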

Harder to resize filesystems correctly – easy filesystem resizing is often given as a benefit of LVM, but you need to run half a dozen shell commands to resize an LVM-based FS. This can be done with the whole server still up, and in some cases with the FS mounted, but I would never risk the latter without up-to-date backups and commands pre-tested on an equivalent server (e.g. a disaster recovery clone of the production server).

  • Update: More recent versions of lvextend support the -r (--resizefs) option – if this is available, it’s a safer and quicker way to resize the LV and the filesystem, particularly if you are shrinking the FS, and you can mostly skip this section (a short example follows this list).

  • Most guides to resizing LVM-based FSs don’t take account of the fact that the FS must be somewhat smaller than the size of the LV: detailed explanation here. When shrinking a filesystem, you will need to specify the new size to the FS resize tool, e.g. resize2fs for ext3, and to lvextend or lvreduce. Without great care, the sizes may be slightly different due to the difference between 1 GB (10^9) and 1 GiB (2^30), or the way the various tools round sizes up or down.

  • If you don’t do the calculations exactly right (or use some extra steps beyond the most obvious ones), you may end up with an FS that is too large for the LV. Everything will seem fine for months or years, until you completely fill the FS, at which point you will get serious corruption – and unless you are aware of this issue it’s hard to find out why, as you may also have real disk errors by then that cloud the situation. (It’s possible this issue only affects reducing the size of filesystems – however, it’s clear that resizing filesystems in either direction does increase the risk of data loss, possibly due to user error.)

  • It seems that the LV size should be larger than the FS size by 2 x the LVM physical extent (PE) size – but check the link above for details as the source for this is not authoritative. Often allowing 8 MiB is enough, but it may be better to allow more, e.g. 100 MiB or 1 GiB, just to be safe. To check the PE size, and your logical volume+FS sizes, using 4 KiB = 4096 byte blocks:

    # Shows PE size in KiB
    vgdisplay --units k myVGname | grep "PE Size"

    # Shows the size of all LVs, in 4 KiB (4096-byte) units
    lvs --units 4096b

    # Shows the size of the (ext3) FS in FS blocks, assuming a 4 KiB FS block size
    tune2fs -l /dev/myVGname/myLVname | grep 'Block count'
  • By contrast, a non-LVM setup makes resizing the FS very reliable and easy – run Gparted and resize the FSs as required, and it will do everything for you. On servers, you can use parted from the shell.

  • It’s often best to use the Gparted Live CD or Parted Magic, as these have a recent and often more bug-free Gparted & kernel than the distro version – I once lost a whole FS due to the distro’s Gparted not updating partitions properly in the running kernel. If using the distro’s Gparted, be sure to reboot right after changing partitions so the kernel’s view is correct.
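
Where lvextend -r is available, growing an LV and the filesystem inside it is a single step, for example (VG/LV names and sizes are hypothetical – and have backups regardless):

    # Grow the LV by 10 GiB and resize the FS inside it in one step
    lvextend -r -L +10G /dev/myVGname/myLVname

    # Or grow into all remaining free space in the VG
    lvextend -r -l +100%FREE /dev/myVGname/myLVname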

Snapshots are hard to use, slow and buggy – if a snapshot runs out of pre-allocated space it is automatically dropped. Each snapshot of a given LV is a delta against that LV (not against previous snapshots), which can require a lot of space when snapshotting filesystems with significant write activity (every snapshot is larger than the previous one). It is safe to create a snapshot LV that’s the same size as the original LV, as the snapshot will then never run out of free space.

Snapshots can also be very slow (meaning 3 to 6 times slower than without LVM for these MySQL tests) – see this answer covering various snapshot problems. The slowness is partly because snapshots require many synchronous writes.

Snapshots have had some significant bugs, e.g. in some cases they can make boot very slow, or cause boot to fail completely (because the kernel can time out waiting for the root FS when it’s an LVM snapshot [fixed in Debian initramfs-tools update, Mar 2015]).

  • However, many snapshot race condition bugs were apparently fixed by 2015.
  • LVM without snapshots generally seems quite well debugged, perhaps because snapshots aren’t used as much as the core features.

Snapshot alternatives – filesystems and VM hypervisors

VM/cloud snapshots:

  • If you are using a VM hypervisor or an IaaS cloud provider (e.g. VMware, VirtualBox or Amazon EC2/EBS), their snapshots are often a much better alternative to LVM snapshots. You can quite easily take a snapshot for backup purposes (but consider freezing the FS before you do).

Filesystem snapshots:

  • Filesystem-level snapshots with ZFS or btrfs are easy to use and generally better than LVM snapshots, if you are on bare metal (ZFS seems a lot more mature, but is more hassle to install) – one-liner examples follow this list:

  • ZFS: there is now a kernel ZFS implementation, which has been in use for some years, and ZFS seems to be gaining adoption. Ubuntu now has ZFS as an ‘out of the box’ option, including experimental ZFS on root in 19.10.

  • btrfs: still not ready for production use (even on openSUSE, which ships it by default and has a team dedicated to btrfs), and RHEL has stopped supporting it. btrfs now has an fsck tool (FAQ), but the FAQ recommends that you consult a developer if you need to fsck a broken filesystem.
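
For comparison, snapshots on both filesystems are one-liners (pool, dataset and subvolume names are made up):

    # ZFS: create and later destroy a named snapshot
    zfs snapshot tank/data@before-upgrade
    zfs destroy tank/data@before-upgrade

    # btrfs: create a read-only snapshot of a subvolume, then delete it
    btrfs subvolume snapshot -r /data /data/.snap-before-upgrade
    btrfs subvolume delete /data/.snap-before-upgrade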

Snapshots for online backups and fsck

Snapshots can be used to provide a consistent source for backups, as long as you are careful with space allocated (ideally the snapshot is the same size as the LV being backed up). The excellent rsnapshot (since 1.3.1) even manages the LVM snapshot creation/deletion for you – see this HOWTO on rsnapshot using LVM. However, note the general issues with snapshots and that a snapshot should not be considered a backup in itself.
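
If you do this by hand rather than via rsnapshot, the workflow is roughly as follows (VG/LV names, sizes and paths are hypothetical; the snapshot is sized to match a 50 GiB origin so it cannot fill up):

    # 1. Snapshot the LV
    lvcreate --snapshot --size 50G --name backup-snap /dev/myVGname/myLVname

    # 2. Mount it read-only and copy the data off
    mkdir -p /mnt/backup-snap
    mount -o ro /dev/myVGname/backup-snap /mnt/backup-snap
    rsync -a /mnt/backup-snap/ /backup/myLVname/

    # 3. Remove the snapshot as soon as the backup finishes
    umount /mnt/backup-snap
    lvremove -f /dev/myVGname/backup-snap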

You can also use LVM snapshots to do an online fsck: snapshot the LV and fsck the snapshot, while still using the main non-snapshot FS – described here – however, it’s not entirely straightforward so it’s best to use e2croncheck as described by Ted Ts’o, maintainer of ext3.

You should “freeze” the filesystem temporarily while taking the snapshot – some filesystems such as ext3 and XFS will do this automatically when LVM creates the snapshot.
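
For filesystems that don’t freeze automatically (or when using hypervisor/cloud snapshots), fsfreeze from util-linux can be used by hand – a sketch with a hypothetical mount point:

    # Freeze writes, take the LVM or hypervisor snapshot, then unfreeze promptly
    fsfreeze -f /mnt/data
    # ... take the snapshot here ...
    fsfreeze -u /mnt/data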

Conclusions

Despite all this, I do still use LVM on some systems, but for a desktop setup I prefer raw partitions. The main benefit I can see from LVM is the flexibility of moving and resizing FSs when you must have high uptime on a server – if you don’t need that, gparted is easier and has less risk of data loss.

LVM requires great care over the write caching setup due to VM hypervisors, hard drive / SSD write caching, and so on – but the same applies to using Linux as a DB server. The lack of LVM support in most tools (Gparted, including the critical size calculations, TestDisk, etc.) makes LVM harder to use than it should be.

If using LVM, take great care with snapshots: use VM/cloud snapshots if possible, or investigate ZFS/btrfs to avoid LVM completely – you may find ZFS or btrfs is sufficiently mature compared to LVM with snapshots.

Bottom line: If you don’t know about the issues listed above and how to address them, it’s best not to use LVM.
