I recently decided to replace the SD card in my Raspberry Pi and reinstall the system. With no proper backups in place, I turned to rsync to duplicate /var/lib/docker, which held all my containers, including Nextcloud.

Step #1: I mounted an external hard drive to /mnt/temp.

Step #2: I used rsync to copy the data to /mnt/tmp. See the difference?

Step #3: I reformatted the SD card.

Step #4: I realized my mistake.

Moral: no one is immune to their own stupidity 😂
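To make the failure mode concrete, here's a throwaway re-enactment (the directory names mirror the /mnt/temp vs /mnt/tmp typo; cp -a stands in for the actual rsync call):

```shell
# Throwaway re-enactment of the typo (paths are illustrative;
# the real ones were /mnt/temp and /mnt/tmp on a real mount point).
mkdir -p temp tmp                 # the mounted drive vs. the one-letter typo
echo "precious data" > source.txt
cp -a source.txt tmp/             # meant temp/, typed tmp/
ls temp/                          # empty: the "backup" landed somewhere else
```

On a real system the typo'd path usually sits on the root filesystem, so the copy silently fills the SD card instead of the external drive; a quick `mountpoint /mnt/temp` or `df -h /mnt/temp` before copying would have flagged it.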

  • Nibodhika@lemmy.world

If you have one backup, you have no backup. That’s a hard lesson to learn, but if you care about those photos, it may still be possible to recover them as long as you haven’t written anything to that SD card yet.

    • TWeaK@lemm.ee

      At least 3 backups, 2 different media, 1 offsite location.

      • krash@lemmy.ml

        I like 3-2-1-1-0 better. Like yours, but:

        • the additional 1 is for “offline” (so you have one offsite and offline backup copy).
        • 0 for zero errors. Backups must be tested and verified.
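The “0 errors” step can be as simple as a checksum manifest recorded at backup time and re-verified against the stored copy; a minimal sketch (file names invented for illustration):

```shell
# Record checksums when the backup is made, then verify the copy against them.
mkdir -p data backup
echo "photo bytes" > data/img001.jpg
cp -a data/. backup/                              # stand-in for the real backup job
( cd data   && find . -type f -exec sha256sum {} + > ../manifest.sha256 )
( cd backup && sha256sum -c ../manifest.sha256 )  # non-zero exit on any mismatch
```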
  • space@lemmy.dbzer0.com

    Fuck up #1: no backups

Fuck up #2: using SD cards for data storage. SD cards and USB drives are ephemeral storage devices and shouldn’t be relied on. Most of the time they’re formatted with file systems like FAT32, which are far less safe than NTFS or ext4. Use reliable storage media, like hard drives.

    Fuck up #3: no backups.

    • AtariDump@lemmy.world

      Would an SSD be any better than a pen drive or should it be stored on spinning rust?

      • ShepherdPie@midwest.social

        In my experience, flash drives are way more reliable than SD cards and I’d put SSD and HDD above both of those.

I wish they’d just ditch the SD card on the Pi already, as it’s always the most likely reason your stuff stops working. For my Pi running Home Assistant, I’ve swapped to an SSD as the boot drive. For the others, I still use SD cards, but they’re just doing basic stuff like running Klipper on my 3D printer or a (WIP) live photo frame that can easily be swapped to a replacement SD later.

        • icanwatermyplants@reddthat.com

It really depends on how you define reliability. SD cards are physically nigh indestructible, but they can fail when overwritten often. That makes them a decent option for one-off backups; they only start showing problems when used as a medium where the same data is written and rewritten frequently.

          I would recommend backups on SD cards in an A/B fashion when you want to give a backup to someone else to store safely.

          • ShepherdPie@midwest.social

Reliability in the sense that I’ve used flash drives and SD cards for years, but I’ve only ever had issues with corrupt SD cards (probably at least half a dozen times), never with flash drives.

Constant writes are an issue with them, which is why I think it’s stupid that the Raspberry Pi Foundation continues to use them as the default storage/OS drive. Then again, they continue to make insane choices with power supplies as well, so it shouldn’t be a big surprise.

      • bbuez@lemmy.world

        The best way to ensure your data lasts a long time is to use a laser to beam it to the darkest part of the sky. Read speed is abysmal though

      • space@lemmy.dbzer0.com

Much better. SSDs and HDDs do monitor their own health (and you can see many parameters through SMART), while pen drives and SD cards don’t.

Of course, they have their limits, which is why RAID exists. File systems like ZFS are built on the premise that drives are unreliable. It’s up to you whether you want that redundancy. The most important thing for not losing data is to have backups: ideally at least 3 copies, with 1 off-site (e.g. in the cloud, or on a disk kept somewhere other than your home).

        • Starayo@lemmy.world

Though not every failure state is going to show up in SMART. If you start seeing weird intermittent behaviour from a drive, for goodness’ sake find a way to back it up immediately.

My mum’s new NUC started having some issues, yet SMART showed perfect drive health. After trying a few things to diagnose it, I rebooted to run memtest and check for bad RAM, and that was the last time it ever booted into Windows. The controller or something on the NVMe SSD died, and it was far too expensive to repair for data recovery. Thankfully we had a… somewhat recent backup. Not as recent as we would have liked.

      • The Overlord@tsck.org

SD cards and pen drives are (usually) made from lower-quality, cheaper NAND (the little memory chips that store the data) and also lack health monitoring. That being said, SSDs can and do die, so it’s important to have backups either way.

  • ducking_donuts@lemm.ee

    Unless you’ve used something secure for formatting or wrote data to the SD after, consider attempting data recovery.

    • summerof69@lemm.eeOP

No luck with extundelete (segfault) or testdisk (it sees some deleted files, but not /var/lib/docker). At least I can always throw it away and not worry about the safety of my data! :)

      • Nilz

        You can always try professional data recovery services. It just depends on how much the data is worth to you.

        • Atemu@lemmy.ml

          And how much time you want to put into not getting scammed.

  • MangoPenguin@lemmy.blahaj.zone

    I’m just impressed an SD card in a Pi lasted since 2017 without losing all your data on its own.

    For the future the general guideline is 3 copies of your data at minimum, so definitely set up some backups.

  • xlash123@sh.itjust.works

If you haven’t done much writing to the SD card, you may be able to recover the data. Data isn’t really “deleted”; it is just labeled as deleted. There is software that can comb through the raw data and try to make sense of what files were there. I don’t know of any specific software, so if anyone does, please reply.

    Edit: Another commenter mentioned some success with DMDE

Edit 2: Worth mentioning that this is also true of (quick) formats. As long as the format doesn’t zero out the entire medium, it just rewrites the file system metadata to say there are no files.

  • Outcide@lemmy.world

There’s an old saying: “Unix is user friendly, it’s just fussy about its friends.”

    • lando55@lemmy.world

      Unix is the kind of friend who won’t bat an eye about holding your beer while you go and do something incredibly stupid

  • bruhduh@lemmy.world

TestDisk and PhotoRec: use them. They even saved my data from a bricked Chinese USB flash drive, so they’ll save yours unless you ran dd if=/dev/zero over the card. Also, here’s a tip: don’t attempt to rebuild the partition first. Step one is to copy all the files off the microSD to another device with these programs; only after that should you try other approaches. Edit: I’ve seen from your other comments that your data was already overwritten. My condolences.

  • Bobby Turkalino@lemmy.yachts

    Everyone else is gonna be like “if you don’t have at least 3 backups of something blahblah” but you know, not everyone has the finances for that, so advice from a cheapskate computer nerd: when going through critical transfers/reformats/deletions like you were doing, ALWAYS try actually recovering stuff from the backup before you cross the point of no return. E.g. if the backup is a .zip, extract a few individual files from it and open them in their respective programs.
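That test-restore habit sketched with tar as a stand-in backup format (any archive works; the file names are invented):

```shell
# Make a backup, then prove it restores BEFORE deleting the originals.
mkdir -p photos restore-test
echo "important" > photos/note.txt
tar -czf backup.tar.gz photos
tar -xzf backup.tar.gz -C restore-test        # the actual test restore
cmp photos/note.txt restore-test/photos/note.txt && echo "backup verified"
```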

  • Geth@lemmy.dbzer0.com

I know I’m going to get downvoted for this, but this would have been almost impossible to fuck up with a GUI. Yet people insist that typing commands manually is superior. I’m sorry for your loss.

    • wargreymon2023

      Fair enough.

      CLI is not about ease to begin with, it is about versatility.

    • jkrtn@lemmy.ml

      Guardrails are absolutely not a reason why people prefer the CLI. We want the guardrails off so we can go faster.

      • Geth@lemmy.dbzer0.com

That’s on me, for sure, but I’ve never seen anyone be faster using a CLI than a GUI, especially for basic operations, which is what most of us do 95% of the time. I know there are specific cases where a command just does it better/easier, but for me that’s not the case for everyday stuff.

        • SayCyberOnceMore@feddit.uk

          But what about the movies where the actors are typing commands and a visual GUI is moving around and updating on the screen (and making sound effects too).

          Isn’t that the best of all worlds? /s

    • atzanteol@sh.itjust.works

There is something to be said about CLI applications being risky by default (“rm” doesn’t prompt to ask; rsync --delete will do exactly that). But I’ve definitely slipped on the mouse button while drag-and-dropping files in a GUI before. And it can be a right mess if you move a bunch of individual files rather than a sub-folder…
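One mitigation for the rsync case is to preview the damage first; a throwaway sketch (assumes rsync is installed, directories are invented):

```shell
# --dry-run lists what --delete would remove without touching anything.
mkdir -p src dst
echo keep   > src/a.txt
echo doomed > dst/old.txt
rsync -a --delete --dry-run -v src/ dst/
[ -f dst/old.txt ] && echo "dry run changed nothing"
```

Only after the listed deletions look right do you re-run without --dry-run.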

      • Midnight Wolf@lemmy.world

At least on Windows, you can Ctrl-Z that away and it’ll undo your mouse fumble. Explorer also highlights the files after a copy, so if that doesn’t work (and it was a copy action), you can just delete them immediately.

        I haven’t used *nix for daily stuff in years but I’m sure the same abilities are there, surely.

    • lhamil64@programming.dev

To play devil’s advocate, tab completion would also likely have caught this. OP could have typed /mnt/t<Tab> and it would autocomplete to temp, or <Tab><Tab> would show the matching options if it’s ambiguous.

  • glasgitarrewelt@feddit.de

    Sorry to hear, I feel you:

    I wanted to delete all .m3u-files in my music collection when I learned:

    find ./ -name "*.m3u" -delete -> this would have been the right way, all .m3u in the current folder would have been deleted.

    find ./ -delete -name "*.m3u" -> WRONG, this just deletes the current folder and everything in it.

Who would have known that the position of -delete actually matters.
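A safer workflow for expressions like this is to dry-run with -print before swapping in -delete, keeping the rest of the expression identical (throwaway files for illustration):

```shell
# Preview exactly what would be deleted, then make it destructive.
mkdir -p music/album
touch music/album/list.m3u music/album/song.flac
find music -name '*.m3u' -print     # dry run: lists only the playlist
find music -name '*.m3u' -delete    # same expression, now destructive
```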

    • Synapse@lemmy.world

I’ve made this sort of mistake too; luckily BTRFS snapshots are always there to save the day!

    • nabladabla

The first one would have deleted nothing, as it needs to match the whole name. I recommend running find with an explicit -print before replacing it in place with -delete or -exec. It’s good to remember that find has a complex, order-dependent language with -or and -and, but it’s maybe not the best idea to try to use those features.

    • blackbirdbiryani@lemmy.world

I use GNU find every day and still have to google the details. I only learnt about -delete the other day; good to know the position matters.

  • BCsven@lemmy.ca

Unless you did a full zeroing format, the info might still be available. There was an application that attempts to rebuild the partition/filesystem from leftover metadata or inode info; I forget the name, unfortunately. Normally the strings command will get your photos, but probably not if they were inside a docker image database.

      • BCsven@lemmy.ca

Those are good for sure. And maybe it was testdisk. There was one that just undid a deleted partition table; as long as no new data had been written, everything would be intact.

  • shadowbert@kbin.social

    My condolences :'(

I once lost a bunch of data because I accidentally left a / at the end of a path… rsync can be dangerous lol

    • blackbirdbiryani@lemmy.world

Rclone is superior IMHO; you have to explicitly name the output folder. I used to think that was a hassle, but in hindsight being explicit about the destination reduces mistakes.

      • shadowbert@kbin.social

Sometimes your hands are tied by the tools already on the server, but I’ll try to remember to check whether that’s available next time.

  • kandoh@reddthat.com

    The bells of the Gion monastery in India echo with the warning that all things are impermanent.

  • vsis@feddit.cl

    Sorry to read that.

I once dd’d an external drive instead of an SD card by mistake. I’ve never felt more stupid than I did that day.