Hi all! I recently built a cold storage server with three 1TB drives configured in RAID5 with LVM2. This is my first time working with LVM, so I’m a little bit overwhelmed by all its different commands. I have some questions:
- How do I verify that none of the drives are failing? This is easy in the case of a catastrophic drive failure (running lvchange -ay <volume group> will yell at you that it can’t find a drive), but what about subtler cases?
- Do I ever need to manually resync logical volumes? Will LVM ever “ask” me to resync logical volumes in cases other than drive failure?
- Is there any periodic maintenance that I should do on the array, like running some sort of health check? (My best guess at this is sketched right after this list.)
- Does my setup protect me from data rot? What happens if a random bit flips on one of the hard drives? Will LVM be able to detect and correct it? Do I need to scan for data rot manually?
- LVM keeps yelling at me that it can’t find dmeventd. From what I understand, dmeventd doesn’t do anything by itself; it’s just a framework for different plugins. This is a cold storage server, meaning I will only boot it up every once in a while, so I would rather perform all maintenance manually instead of delegating it to a daemon. Is it okay not to install dmeventd? (See my lvm.conf guess after the list.)
- Do I need to monitor SMART status manually, or does LVM do that automatically? If I have to do it manually, is there a command/script that will just tell me “yep, all good” or “nope, a drive is failing”, as opposed to the somewhat overwhelming output of smartctl -a? (Rough sketch after the list.)
- Do I need to run SMART self-tests periodically? How often? Long test or short test? Offline or online?
- The boot drive is an SSD separate from the RAID array. Does LVM keep any configuration on the boot drive that I should back up? (My vgcfgbackup guess is after the list.)
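To make the health-check and data-rot questions concrete, here is my best guess at what periodic maintenance would look like, assuming lvchange --syncaction and these lvs report fields are the right tools (please correct me if they aren’t):

# report per-LV health, scrub state, and mismatch count
lvs -a -o name,lv_health_status,raid_sync_action,raid_mismatch_count,sync_percent myvg

# start a scrub of each volume ("check" only counts mismatches; "repair" also rewrites them)
lvchange --syncaction check myvg/vol1
lvchange --syncaction check myvg/vol2
lvchange --syncaction check myvg/vol3

Is something like that what I should be running every time I boot the box?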
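On the dmeventd warnings: if skipping the daemon is fine for a machine that is powered off most of the time, my guess is that I can silence them by turning monitoring off in /etc/lvm/lvm.conf (assuming activation/monitoring is the relevant knob):

# /etc/lvm/lvm.conf
activation {
    # don't try to hand activated LVs over to dmeventd
    monitoring = 0
}

or by passing --ignoremonitoring when I activate the volume group. Is that the intended way to run without the daemon, or am I losing something important?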
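For the SMART question, the simplest pass/fail I could think of is smartctl -H plus an occasional manual self-test; the device names below are just my assumption:

# quick PASSED/FAILED verdict per drive
for d in /dev/sda /dev/sdb /dev/sdc; do
    echo "== $d =="
    smartctl -H "$d" | grep -i 'test result'
done

# run a self-test by hand while the box happens to be up
smartctl -t short /dev/sda    # or -t long for a full surface scan

Would that be enough, or is there a better one-glance health summary?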
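And for the boot-drive question: as far as I can tell, the real metadata lives on the PVs themselves and the SSD only holds the text backups under /etc/lvm/backup and /etc/lvm/archive, which I could regenerate with something like:

# dump the VG metadata to a file I can copy somewhere safe (the path is just an example)
vgcfgbackup -f /root/myvg-metadata.txt myvg

Is that correct, or is there other state on the boot drive I should be preserving?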
Just to be extra clear: I’m not using mdadm; /proc/mdstat lists no active devices. I’m using the built-in raid5 feature in lvm2. I’m running the latest version of Alpine Linux, if that makes a difference.
Anyway, any help is greatly appreciated!
How I created the array:
# initialize the three drives as LVM physical volumes
pvcreate /dev/sda /dev/sdb /dev/sdc
# group them into one volume group
vgcreate myvg /dev/sda /dev/sdb /dev/sdc
# make sure each PV spans the whole drive
pvresize /dev/sda
pvresize /dev/sdb
pvresize /dev/sdc
# carve out three raid5 logical volumes
lvcreate --type raid5 -L 50G -n vol1 myvg
lvcreate --type raid5 -L 300G -n vol2 myvg
lvcreate --type raid5 -l +100%FREE -n vol3 myvg
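Is this the right way to confirm that each volume really came out as raid5 across all three drives (assuming these lvs fields are the ones to look at)?

# show segment type, stripe count, and the hidden rimage/rmeta sub-LVs with their backing devices
lvs -a -o name,segtype,stripes,devices myvg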
For educational purposes, I also simulated a catastrophic drive failure by zeroing out one of the drives. The procedure I used to repair the array, which seemed to work correctly, was as follows:
# re-initialize the zeroed drive as a fresh PV
pvcreate /dev/sda
# add it back into the volume group
vgextend myvg /dev/sda
# drop the record of the missing (zeroed) PV
vgreduce --removemissing --force myvg
# rebuild each raid5 LV onto the replacement PV
lvconvert --repair myvg/vol1
lvconvert --repair myvg/vol2
lvconvert --repair myvg/vol3
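One follow-up on the repair: is the right way to confirm the rebuild actually finished just to poll the sync percentage until it reaches 100, e.g.:

# watch the rebuild catch up after lvconvert --repair
lvs -a -o name,sync_percent,raid_sync_action,lv_health_status myvg

or is there a better signal that the array is fully redundant again?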