I am mainly hosting Jellyfin, Nextcloud, and Audiobookshelf. The files for these services are currently stored on a 2TB HDD, and I don’t want to lose them in case of a drive failure. I bought two 12TB HDDs because 2TB got tight and I thought I could add redundancy to prevent data loss from a drive failure. I thought I would go with RAID 1 (or another form of RAID?), but everyone on the internet says that RAID is not a backup. I am not sure if I need a backup; I just want to avoid losing my files when the disk fails.
How should I proceed? Should I use RAID 1, or rsync the files every, let’s say, week? I don’t want to run another machine, so I would hook up the rsync target drive to the same machine as the source drive! Rsyncing the files seems very cumbersome (even with a cron job).
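For reference, the cron + rsync approach I have in mind would be something like this (all paths here are made up):

```shell
#!/bin/sh
# Sketch of a weekly rsync mirror; source and target paths are examples.
mirror_backup() {
    # -a       archive mode: keeps permissions, timestamps, symlinks
    # --delete remove files on the target that no longer exist on the
    #          source, so the target stays an exact mirror
    rsync -a --delete "$1" "$2"
}

# Run weekly via cron, e.g. Sundays at 03:00 (crontab entry):
# 0 3 * * 0  /usr/local/bin/mirror_backup.sh /srv/data/ /mnt/backup/data/
```

The trailing slash on the source matters: `rsync -a /srv/data/ /mnt/backup/data/` copies the *contents* of the directory, while omitting it would create a nested `data/data/`.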
Thanks for the advice. Do you have suggestions on how to set up/handle the backup? E.g. manually connecting the drive via USB and cloning the files via rsync/borg, say every week or every time a threshold of changes has been reached? Or having a small extra machine with the backup hard drive and sending the files over the network?
I am also still a bit confused. I have 2x 12TB. Let’s say I have 6TB of files on my hosting drive. AFAICT I can have two backups/snapshots before the third backup needs to overwrite the first. Or am I missing something? Buying more drives for backup is not really doable; drives aren’t exactly cheap, and I can’t/don’t really want to afford buying more.
You can back up to an external USB drive (that’s what I do), or set up a small backup server (with RAID if you want).
If you use Borg, it will do the right thing out of the box with minimal configuration: compression, deduplication, encryption, and incremental backups.
The first backup will be a full one and take longer, but subsequent backups only store what changed and will be quite fast. Deduplication also means snapshots share unchanged data, so you can keep far more than two of them on a 12TB drive.
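A typical Borg workflow looks roughly like this — the repo path and archive name are just examples, and this assumes `borg` is installed:

```shell
# One-time: create an encrypted repository on the backup drive
borg init --encryption=repokey /mnt/backup/borg-repo

# Each backup run: create a compressed, deduplicated archive.
# '{now}' is expanded by borg to a timestamp, so archive names stay unique.
borg create --stats --compression zstd \
    /mnt/backup/borg-repo::'data-{now}' /srv/data

# Optional housekeeping: keep only the last 8 weekly archives
borg prune --keep-weekly 8 /mnt/backup/borg-repo
```

`--stats` prints how much new data the run actually added, which is usually a small fraction of the total after the first backup.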
Restoring is very straightforward, even if you only need a single file you deleted accidentally.
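For example, pulling back a single accidentally deleted file looks something like this (repo, archive, and file names are made up):

```shell
# List the archives in the repo to find the snapshot you want
borg list /mnt/backup/borg-repo

# borg extract restores into the current directory; paths inside the
# archive are stored without the leading slash
cd /tmp/restore
borg extract /mnt/backup/borg-repo::data-2024-01-07 srv/data/notes.txt
```

You can also `borg mount` an archive as a FUSE filesystem and just browse it to copy files out.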