I am so confused as to why I can’t find anything on how to do this…

  1. Offline the failing (but still working disk) from the LVM RAID1 side
  2. Physically remove it from the system
  3. Put in new disk
  4. Tell LVM RAID1 to use the newly installed disk

I do not have space in my dual caddy to keep the “to be replaced” disk installed alongside the new one, so the --replace parameter won’t work for me. So how do I flipping tell LVM that I want to offline the disk I want to remove, and then online the new disk?
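
For reference, the closest sequence I have been able to piece together from the man pages looks something like this (all names are made up, and I have no idea whether this is actually right, hence the question):

    lvconvert -m0 vg0/lv_data /dev/sdb1   # drop the RAID1 image that lives on the failing disk
    vgreduce vg0 /dev/sdb1                # take that disk out of the volume group
    # power down, physically swap the disk, boot back up
    pvcreate /dev/sdb1                    # initialise the new disk
    vgextend vg0 /dev/sdb1                # add it back to the volume group
    lvconvert -m1 vg0/lv_data             # go back to two copies; LVM resyncs onto the new disk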

  • Helix@lemmy.ml · 3 years ago

    You can use the lvconvert --repair command to repair a mirror after a disk failure. This brings the mirror back into a consistent state. The lvconvert --repair command is an interactive command that prompts you to indicate whether you want the system to attempt to replace any failed devices.

    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_repair

    (works with all newer versions of LVM too, IIRC)
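
    Roughly, the flow with --repair looks like this once the dead disk has been swapped for a new one (vg0, lv_data and /dev/sdX1 are placeholders for your actual names, and the exact steps can vary a bit by LVM version, so treat this as a sketch and the linked documentation as authoritative):

        pvcreate /dev/sdX1               # prepare the replacement disk/partition
        vgextend vg0 /dev/sdX1           # add it to the volume group
        lvconvert --repair vg0/lv_data   # rebuild the degraded RAID1 onto the new PV
        vgreduce --removemissing vg0     # drop the stale reference to the old, missing disk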

    Physically remove it from the system

    Please only do that while the computer it’s in is powered off. Even hot-plugging, when your device is capable of it, can yield interesting results if you’re not doing it 100% correctly.

    • DBGamer@lemmy.mlOP · 3 years ago

      Interesting, so that could be run after the failing device has been removed? Also interesting, because I have always yanked the drives out once Linux Mint says they have been put on standby (sleeping), or I put them on standby manually and then pull them out.

      • Helix@lemmy.ml · 3 years ago

        Also interesting, because I have always yanked the drives out once Linux Mint says they have been put on standby (sleeping)

        How were they connected? USB? SATA? Were they even hot-pluggable? The thing is, you can’t always know whether a running system is accessing the disks, so you should only disconnect disks from a running system when they are explicitly hot-pluggable. Otherwise you risk damaging the disks, for example by disconnecting them while they’re spinning up.
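
        If the hardware really is hot-pluggable, it’s still worth telling the kernel to let go of the disk before you pull it, something like this (sdX is a placeholder, and make sure nothing from the disk is mounted first):

            sync                                             # flush any pending writes
            echo 1 | sudo tee /sys/block/sdX/device/delete   # detach the disk from the running kernel
            # alternatively, udisksctl power-off -b /dev/sdX spins it down and detaches it in one go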

        • DBGamer@lemmy.mlOP · 3 years ago

          I am sorry, but I am lost on what you mean. I got a $30 5.25" dual hot-swappable 2.5" bay in which each of the drives is connected through the SATA interface (you plug the respective SATA cables into each bay’s end and the drives literally click into place).

          So I thought that if the drives are on standby (and therefore, I assumed, not spinning up/down), it would be safe to pull them out, since they would not be doing anything while simply on standby.

          • Helix@lemmy.ml · 3 years ago

            dual hot-swappable 2.5" bay in which each of the drives is connected through the SATA interface (you plug the respective SATA cables into each bay’s end and the drives literally click into place)

            Yeah, but if your mainboard doesn’t support hot swap on the ports they’re connected to, you can’t hot swap them. Similarly, if your disks themselves don’t support hot swap, you can’t hot swap them.

            In my experience, swapping disks while the PC is booted should only be done if you’re absolutely certain both the mainboard and the disk support hot swapping properly, else you may damage the disk. Consumer-grade disks especially are susceptible to failures in this way.

            So I thought that if the drives are on standby (and therefore, I assumed, not spinning up/down), it would be safe to pull them out.

            Not really. Only if they support hot swapping and your motherboard does as well. Many do, some don’t.
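
            One quick way to see what the kernel thinks about that (a hint, not a guarantee that the whole path handles it properly):

                lsblk -o NAME,TRAN,HOTPLUG,MODEL   # HOTPLUG=1 means the kernel considers the device hot-pluggable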

            I thought they would not be doing anything while simply on standby.

            Being “on standby” as reported by the desktop may simply mean the drive has parked its read/write heads while the platters are still spinning (what ATA calls idle), rather than being fully spun down.
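
            You can ask the drive itself what state it is in (as opposed to what the desktop reports), for example:

                sudo hdparm -C /dev/sdX   # reports active/idle, standby (spun down) or sleeping; sdX is a placeholder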

            In one of your other threads, you posted SMART information which shows a “Power-off Retract Count” of 72, which means there were 72 instances where the drive lost power or was disconnected without a proper shutdown and had to do an emergency head retract.

            For comparison, my 4 year old WD Red HDD has >33000h of usage and only a power-off retract count of 14.
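
            If you want to check that yourself, smartmontools will show it (device name is a placeholder):

                sudo smartctl -A /dev/sdX | grep -i retract   # SMART attribute 192, Power-Off_Retract_Count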

            • DBGamer@lemmy.mlOP · 3 years ago

              That all makes sense to me now. Thank you very much for explaining all of this; I understand everything now. :)