I have a home server that I use for hosting files. I’m worried about it breaking and losing access to the files. So what method do you use to back up everything?

  • Anon819450514@lemmy.ca · 1 year ago

    Backblaze with a B2 account. $0.005 per GB: you pay for the storage you use, and you pay when you need to download your backup.

    On my TrueNAS server, it’s easy as pie to set up and easy as 🥧 to restore a backup when needed.

    • KairuByte@lemmy.world · 1 year ago

      If your data is replaceable, there’s not much point unless it’s a long wait or high cost to get it back. It’s why I don’t have many backups.

    • Difficult_Bit_1339@sh.itjust.works · 1 year ago

      In the 20 years that I’ve been running a home server, I’ve never had anything worse than a failed disk in the array, which didn’t cause any data loss.

      I do have backups, since it’s good practice, and also because it keeps me familiar with the software and processes as they change and update, so my skill set stays fresh for work purposes.

  • JubilantJaguar@lemmy.world · 1 year ago

    ITT: lots of the usual paranoid overkill. If you do rsync with the --backup switch to a remote box or a VPS, that will cover all bases in the real world. The probability of losing anything is close to 0.

    The more serious risk is discovering that something broke 3 weeks ago and the backups were not happening. So you need to make sure you are getting some kind of notification when the script completes successfully.
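
    A rough sketch of what I mean (hostname, paths and the mail address are just placeholders):

        #!/bin/sh
        # Mirror the data to a remote box; --backup keeps changed or deleted
        # files in a dated directory on the far end instead of discarding them.
        DEST="backup@vps.example.com:/backups/data"
        STAMP=$(date +%F)

        if rsync -a --delete --backup --backup-dir="../changed-$STAMP" /srv/data/ "$DEST/current/"; then
            echo "backup finished $(date)" | mail -s "backup OK" you@example.com
        else
            echo "backup FAILED $(date)" | mail -s "backup FAILED" you@example.com
        fi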

    • anteaters@feddit.de · 1 year ago

      While I don’t agree that using something like restic is overkill, you’re very right that monitoring the backup process is overlooked. So is testing recovery with the backup system of your choice.

      I let my Jenkins run the backup jobs, since I have it running anyway for development tasks. When a job fails it notifies me immediately via email, and I can also check manually in the web UI how the backup went.

  • satanmat@lemmy.world · 1 year ago

    3-2-1

    Three copies. The data on your server.

    1. Buy a giant external drive and back up to that.

    2. Off-site. Backblaze is very nice.

    How to get your data around? FreeFileSync is nice.

    Veeam Community Edition may help you too.

    • z3bra@lemmy.sdf.org · 1 year ago

      I’m not sure how you understood the 3-2-1 rule given how you explained it, even though you’re stating the right stuff (I’m confused by your numbered list…), so just for reference for people reading this, it means your backups need:

      • 3 copies of the data
      • on 2 different media
      • with 1 copy off-site
      • theragu40@lemmy.world · 1 year ago

        Huh. I always heard 3 copies, 2 locations, 1 of the locations offsite. Yours makes sense though.

  • cnk@kbin.dk · 1 year ago

    Cron jobs with rsync to a Synology NAS, and from there to Synology’s cloud backup.
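
    For example, the crontab entries can be as simple as this (times, paths and the NAS hostname are made up; the second hop to Synology’s cloud backup is configured on the NAS itself):

        # nightly rsync of the shares to the NAS
        30 2 * * *  rsync -a --delete /srv/files/ backup@synology.lan:/volume1/backup/files/
        45 2 * * *  rsync -a --delete /etc/ backup@synology.lan:/volume1/backup/etc/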

    • notleigh@aussie.zone · 1 year ago

      Same setup here. I’ve got a really basic script running nightly from cron. B2 is cheap as, and having an encrypted, versioned backup is great for peace of mind.

      At one point I was away from home and my (little RPi) server wasn’t accessible, but with the restic repo up on B2 I was able to easily pull down a file I urgently needed. It’s awesome.
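
      The nightly script doesn’t need to be more than something like this (bucket name and paths are placeholders; the B2 key ID, application key and repo password come from the environment):

          #!/bin/sh
          # restic encrypts and deduplicates; expects B2_ACCOUNT_ID, B2_ACCOUNT_KEY
          # and RESTIC_PASSWORD in the environment. The repo must have been
          # initialised once with `restic init`.
          export RESTIC_REPOSITORY="b2:my-backup-bucket:server"

          restic backup /home /etc /srv/files
          restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune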

  • Jason@lemmy.weiser.social · 1 year ago

    Proxmox Backup Server. It’s life-changing. I back up every night, and I can’t tell you the number of times I’ve completely messed something up only to revert to the nightly backup in a matter of minutes. You need a separate machine running it (something that kept me from doing it for the longest time), but it is 100% worth it.

    I back that up to Backblaze B2 (using Duplicati currently, but I’m going to switch to Kopia), but thankfully I haven’t had to use that, yet.

    • dustojnikhummer@lemmy.world · 1 year ago

      PBS backs up the host as well, right? Shame Veeam won’t add Proxmox support. I really only back up my VMs and some basic configs.

      • DemonSlayerB@lemmy.world · 1 year ago

        Veeam has been pretty good for my Hyper-V VMs, but I do wish I could find something a bit better. I’ve been hearing a lot about Proxmox lately and wonder if it’s worth switching to. I’m an MS guy myself, so I just used what I know.

      • Jason@lemmy.weiser.social · 1 year ago

        PBS only backs up the VMs and containers, not the host. That being said, the Proxmox host is super-easy to install and the VMs and containers all carry over, even if you, for example, botch an upgrade (ask me how I know…)

        • dustojnikhummer@lemmy.world · 1 year ago

          Then what’s the advantage over just setting up the built-in snapshot backup tool, which, unlike PBS, can natively back up onto an SMB network share?

          • Jason@lemmy.weiser.social · 1 year ago

            I’m not super familiar with how snapshots work, but that seems like a good solution. As I remember, what pushed me to PBS was the ability to make incremental backups to keep them from eating up storage space, which I’m not sure is possible with just the snapshots in Proxmox. I could be wrong, though.

  • mariom@lemmy.world · 1 year ago

    Autorestic, a nice wrapper for restic.

    Data goes from one server to a second server, and vice versa (different provider, different location), and also to Backblaze B2, which as far as I know is the cheapest S3-like storage.

    • BlueBockser@programming.dev · 1 year ago

      Wasabi might also be worth mentioning. A while back I compared S3-compatible storage providers and found them to be cheaper for volumes over 1 TB. They now seem to be slightly more expensive ($5.99 vs. $5), but they don’t charge for download traffic.

  • BigDev@lemmy.world · 1 year ago

    I am lucky enough to have a second physical location to store a second computer, with effectively free internet access (as long as the data volume is low, under about 1 TB/month).

    I use the ZFS file system for my storage pool, so backups are as easy as a few commands in a script, triggered every few hours, that takes a ZFS snapshot and tosses it to my second computer via SSH.
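
    The core of it is roughly this (pool, dataset and host names are placeholders; a real script would also prune old snapshots and switch to incremental sends after the first full one):

        #!/bin/sh
        # Snapshot the dataset and replicate it to the second machine over SSH.
        SNAP="tank/data@auto-$(date +%Y%m%d-%H%M)"

        zfs snapshot "$SNAP"
        zfs send "$SNAP" | ssh backup@offsite.example.com zfs receive -F backuppool/data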

  • Curious Canid@lemmy.ca · 1 year ago

    My server runs Plex and has almost 50 TB of video on it. After looking at all the commercial backup options I gave up on backing up that part of the data. :-(

    I do back up my personal data, which is less than a terabyte at this point. I worked out an arrangement with a friend who also runs a server: we each have a drive in the other’s server that we use for backup. Every night cron runs a simple rsync script to do an incremental backup of everything new to the other machine.

    This approach costs nothing beyond getting the drives, and we will still have our data even if one of the servers is physically destroyed and unrecoverable.
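
    The nightly job can be as simple as something along these lines (host, paths and the hard-link approach are only an illustration, not our exact script):

        #!/bin/sh
        # One dated directory per night; unchanged files are hard-linked against
        # the previous run, so each increment only stores what actually changed.
        TODAY=$(date +%F)
        DEST="friend.example.com:/mnt/backup-for-me"

        rsync -a --delete --link-dest=../latest /srv/personal/ "$DEST/$TODAY/"
        ssh friend.example.com "ln -sfn $TODAY /mnt/backup-for-me/latest"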

    • WxFisch@lemmy.world · 1 year ago

      I also have a decent amount of video data for Plex (not nearly 50 TB, but more than I want to pay to back up). I figure if worst comes to worst I can rip the DVDs/Blu-rays again (though I’d rather not), so I only back up the file storage on my NAS that my laptops and desktop back up to. It’s just not worth the cost to back up data that’s fairly easy to replace.

      • Curious Canid@lemmy.ca · 1 year ago

        Yeah, that was where I finally came out too. I still own the discs. My only worry is that some of my collection is beginning to age. I’ve had a few DVDs that were no longer readable.

    • lom@sh.itjust.works · 1 year ago

      Oh, that idea with the friend’s server is a good one. Mutual benefit at little extra cost.

      • Curious Canid@lemmy.ca · 1 year ago

        It’s the only “no cost” option I know of that provides an off-site backup. And once it occurred to me, it was really easy to set up.

  • Ferawyn@lemmy.world · 1 year ago

    Various methods for various types of files.

    Anything important is shared between my desktop PCs, servers and my phone through Syncthing. Those Syncthing folders are all also shared with two separate servers (in two separate locations) with hourly, daily, weekly and monthly volume snapshotting. Think your financial records, work files, anything you produce or write, your main music collection, etc. It’s also a great way to keep your music in sync between your desktop PC and your phone.

    Servers have their configuration files, /etc, /var/log, /root, etc. rsynced every 15 minutes to the same two backup servers, also onto snapshotted volumes. That way, should any one server burn down, I can rebuild it in a trivial amount of time. The same goes for user profiles, document directories, ProgramData, and anything non-synced on Windows PCs.

    Specific data sets, like database backups, repositories and such, are also rsynced regularly, some to snapshotted volumes, some to regular ones, depending on the size and volatility of the data.

    Bigger file shares, like movies, TV shows, etc., I don’t back up, but they’re stored on a distributed GlusterFS, so losing any one server doesn’t lose me everything just yet.

    Hardware will fail, sooner or later. You should see any one device as essentially disposable, and have anything of worth synced and archived automatically.