I am using Duplicati and thinking of switching to Borg. What do you use and why?

  • Eduardo
    2 years ago

    What problem are you trying to solve? Please think about that, and about your backup strategy, before you decide on any specific tools.

    For example, here are several scenarios that I guard against in my backup strategy:

    • Accidentally delete a file, I want to recover it quickly (snapshots)
    • Entire drive goes kablooie, I want my system to continue running without downtime (RAID)
    • User data drive goes kablooie, I want to recover (many many options)
    • Root drive goes kablooie, I want to recover (baremetal recovery tools)
    • House burns down or computer is damaged/stolen (offsite backups)
  • @Holzkohlen@feddit.de
    2 years ago

    I just use a script on a systemd timer. Well, two scripts on two timers really: one runs daily, one weekly, for different data. It’s just a bunch of rsync commands copying folders to an HDD in my system, and I redirect the output into a simple log file, mainly to verify that it ran at all. I am a bit paranoid about that. I can also run it manually whenever I want. Oh, and some of the data I also rsync again to an SMB cloud drive from Hetzner. I do not keep multiple versions, and I delete remote files that have been deleted locally. It’s just a 1:1 copy.
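
    Roughly, that setup looks like this (a minimal sketch; the paths, folder names, and unit names are hypothetical, not the actual scripts):

        #!/usr/bin/env bash
        # /usr/local/bin/backup-daily.sh - mirror selected folders to a local HDD
        set -euo pipefail
        LOG=/var/log/backup-daily.log
        {
            date
            # -a preserves times/permissions; --delete makes it a true 1:1 copy
            rsync -a --delete /home/me/documents/ /mnt/backup-hdd/documents/
            rsync -a --delete /home/me/photos/    /mnt/backup-hdd/photos/
        } >>"$LOG" 2>&1

        # /etc/systemd/system/backup-daily.service
        #   [Unit]
        #   Description=Daily rsync backup
        #   [Service]
        #   Type=oneshot
        #   ExecStart=/usr/local/bin/backup-daily.sh

        # /etc/systemd/system/backup-daily.timer
        #   [Unit]
        #   Description=Run backup-daily.service once a day
        #   [Timer]
        #   OnCalendar=daily
        #   Persistent=true
        #   [Install]
        #   WantedBy=timers.target

        # enable with: systemctl enable --now backup-daily.timer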

  • flatbield
    2 years ago

    Just a reminder: consider and test your restore process as well. Backups without restore testing are kind of questionable. Also think about how the restore will go. Do you want to do a bare-metal restore, or will you just reinstall and restore certain things, for example? A lot of these backup methods will not get you a true bare-metal restore set, nor can file-system backups be “perfect” if they are done on a running system. Databases and things like ecryptfs mounts, for example, can be problematic. Nor do all tools necessarily back up the full structure of the file system.

    Not saying these are always issues, just be aware of them.

  • @I_Am_Jacks_____@beehaw.org
    2 years ago

    I’ve been using restic. It has built-in dedup & encryption and supports both local and remote storage. I’m using it to back up to a local restic-server (pointing to a USB drive) and Backblaze B2.

    Restores of single files or small sets of files are easy: restic -r $REPO mount /mnt, then browse through the filesystem view of your snapshots and copy files just like on any other filesystem.
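
    A minimal sketch of that workflow (repository paths and bucket names here are hypothetical):

        # one-time setup: create an encrypted repository (prompts for a password)
        restic -r /mnt/usb/repo init

        # take a snapshot; restic deduplicates against earlier snapshots
        restic -r /mnt/usb/repo backup ~/documents

        # list snapshots, then mount the whole history for browsing/restores
        restic -r /mnt/usb/repo snapshots
        restic -r /mnt/usb/repo mount /mnt/restore

        # the same commands work against remote backends, e.g. Backblaze B2:
        #   export B2_ACCOUNT_ID=... B2_ACCOUNT_KEY=...
        #   restic -r b2:my-bucket:my-machine backup ~/documents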

  • @flux@beehaw.org
    2 years ago

    Kopia has served me well. I back up to my local Ceph S3 storage and then keep a second clone of that on a RAID array.

    Kopia has good performance, and multiple hosts can back up to it concurrently while preserving deduplication – unlike borgbackup.
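
    For reference, that kind of setup looks roughly like this (bucket, endpoint, and credentials are placeholders, not the actual config):

        # create a repository on S3-compatible storage (e.g. Ceph RGW)
        kopia repository create s3 --bucket=backups \
            --endpoint=ceph.example.com \
            --access-key=KEY --secret-access-key=SECRET

        # snapshot a directory; other hosts can "kopia repository connect s3 ..."
        # to the same bucket and share the deduplicated chunks
        kopia snapshot create /home/me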

    • 𝜏au
      2 years ago

      I’ve been using Kopia on my desktop computer for a few years now to do cloud backups. It’s generally working well and I haven’t found anything else with the same combination of features yet.

      That said, kopia-ui is still a bit finicky and I’ve managed to bork a repo beyond repair a few times (e.g. once because my cloud provider account ran out of space, leading to some kind of inconsistent state) and there are some oddities, like the regular “periodic maintenance” (it’s a bit weird that it’s needed in the first place) randomly failing or taking forever.
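
      For what it’s worth, maintenance can also be checked and triggered by hand when the scheduled run misbehaves (stock kopia CLI, nothing repo-specific):

          # show the maintenance schedule and owner, then force a full run
          kopia maintenance info
          kopia maintenance run --full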

  • @CjkOvPDwQW@lemmy.pt
    2 years ago

    Using Borg Backup, just because there are some nice frontends for the GNOME ecosystem (when I am using GNOME, I love to use GNOME apps), and it has a nice CLI for scripting when I’m using something else (I use it on servers).
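
    The scripting side is bare-bones; something like this sketch (repo path and prune policy invented for illustration):

        # one-time: create an encrypted repository
        borg init --encryption=repokey /mnt/backup/borg-repo

        # per run: a new archive named by host and timestamp
        borg create --stats /mnt/backup/borg-repo::'{hostname}-{now}' ~/documents

        # keep a bounded history
        borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/borg-repo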

  • @TDCN@feddit.dk
    2 years ago

    Rsync is great, but if you want snapshots and file history, rsnapshot works pretty well. It’s based on rsync, but on every sync it hard-links unchanged files and only copies changed and new files. It saves space and remains transparent to the user. FreeFileSync is also amazing.
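
    A skeletal rsnapshot.conf as illustration (values are hypothetical, and the real file requires TABs, not spaces, between fields):

        snapshot_root   /mnt/backup/rsnapshot/
        retain  daily   7
        retain  weekly  4
        backup  /home/  localhost/
        # then schedule "rsnapshot daily" / "rsnapshot weekly" from cron or a timer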

    • *ira
      2 years ago

      Seconded. I use restic with remote blob storage and it works nicely.

  • flatbield
    2 years ago

    I am old school. I just use GNU Tar with the pax format and multiple external, detachable, encrypted hard drives. The reason is that it is simple and a well-known, very common tool with a standard archive format.
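
    Presumably something like this sketch (paths hypothetical); the pax format plus a couple of flags preserves ACLs and extended attributes that the default format drops:

        # full-fidelity archive of /home onto an external (encrypted) drive
        tar --format=pax --acls --xattrs -cpzf /mnt/ext-hdd/home.tar.gz /home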

    • @GnomeComedy@beehaw.org
      2 years ago

      I’m curious: how much data are you backing up with that method, and how frequently are you doing your backups? It doesn’t sound like it would scale well, but I’m also wondering if maybe this is perfect and I’ve just been overthinking it.

      • flatbield
        2 years ago

        There is not a size limit. A lot of these other methods actually use GNU Tar behind the scenes anyway. More than that, GNU Tar has been used for decades for this purpose. Pull out any Unix book from two decades ago and you will see “tar”, “cpio”, and “dump/restore” as the way. The newer tool out there is pax, and in fact GNU Tar supports the new “pax” format. Moreover, GNU Tar with the pax format can back up almost the full disk structure, including hard links, ACLs, and extended attributes, which a lot of tools do not. It is still useful to archive some things at a lower level, like your partition table and boot blocks, of course. You also have to decide what run level (such as rescue) you want to archive in, what services you should stop, and what needs a separate dump outside the file-system backup, depending on your system. Databases and things like ecryptfs take some special thought (though they do for any tool). It is also good to do test restores to verify your disaster plan.
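
        For the lower-level pieces, a hedged sketch (device names are examples only):

            # save the partition table in a form sfdisk can re-apply later
            sfdisk --dump /dev/sda > /mnt/ext-hdd/sda-partitions.txt
            # save the boot sector (MBR plus embedded boot code)
            dd if=/dev/sda of=/mnt/ext-hdd/sda-boot.bin bs=512 count=1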

        I use tar on many systems. My workstation is about 1TB of data. A backup takes about 11 hours, though I think it could be faster if I disabled compression (I currently use the standard gzip compression, which is not optimal). I think the process is CPU-bound by the compression at the moment. Going uncompressed, or using parallel gzip at level 2, is probably the fastest you can do and should really speed things up, by 4X or more. I have played with this some for my wife, and her raw backup is a lot faster now. My wife uses USB 3 external drives specifically plugged into USB 3 ports (the ones with the SS symbol and the blue interior), with a USB 3 rated cable. I use 6TB bare SATA drives that I insert into a hot-mount enclosure and store in storage boxes. My backup system can theoretically do incrementals too, but it has had some issues since I moved to BTRFS, so I do not use them at the moment; I always did before. I have an idea how to fix it, but I need to debug and test incrementals first.
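
        The parallel-gzip idea looks roughly like this (thread count and level are illustrative; recent GNU Tar accepts arguments in --use-compress-program):

            # replace CPU-bound gzip with pigz: low compression level, 8 threads
            tar --format=pax --acls --xattrs -cpf /mnt/ext-hdd/home.tar.gz \
                --use-compress-program="pigz -2 -p8" /home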

        How often? I back up monthly. When my incrementals were working I used to do it weekly, or whenever I got nervous. Another option for the BTRFS file systems would be to use their native backup tools. I am not sure, though; I like to use generic stuff. There is a lot to be said for generic.

        The big downside of tar is the mind-numbing man page; getting the options correct takes some real thought. You also have to be comfortable with the shell and Bash scripting. The big upside: you can customize exactly what you want.
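
        Two cheap sanity checks on the “test your restores” advice from earlier in the thread (again just a sketch):

            # compare archive contents against the live filesystem; run from /
            # since tar stores paths without the leading slash, and expect some
            # differences for files changed since the backup
            (cd / && tar --compare -zf /mnt/ext-hdd/home.tar.gz)

            # or do a scratch restore and inspect the result
            mkdir -p /tmp/restore-test
            tar -xpzf /mnt/ext-hdd/home.tar.gz -C /tmp/restore-test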

          • flatbield
            2 years ago

            Yes, I actually did not know how far back it goes, thanks. Wikipedia seems to say 1979. I know my system-administration book dated 1992 talks about it, and it was common then. I think my brother used to use it in the early 1980s for his job, and maybe I did too a few times. Wikipedia says GNU Tar is newer and traces back to 1987. The formats have changed some, and there are several. The pax format is much newer; I think it was standardized in 2001, but GNU Tar would have taken time to implement it. I do not know that date.

            People seem to forget that tar worked well back then and still does.

            • @davefischer@beehaw.org
              2 years ago

              I had the chance to play with late 70s Unix for a bit a few years ago. (Hardware on loan from a museum.) VERY minimal, but still recognizable. (Well, my Unix reflexes are old - I started in the mid 80s.)

              • flatbield
                2 years ago

                Interesting. About then I was using a VAX. Somehow I spent most of my time on other stuff until I switched to Linux around 2000.

                • @davefischer@beehaw.org
                  2 years ago

                  My first Unix was 4.3BSD on a VAX-11/750. (There was another 11/750 running VMS, but I didn’t like that nearly as much.)

  • @Klaymore@sh.itjust.works
    2 years ago

    I use NixOS, so all my system configuration is already saved in my NixOS configs, which I keep on GitHub. For dotfiles that aren’t managed by NixOS, I use Syncthing to sync them between my devices, but there is no real backup because I can just remake them if I need to. Things like my Neovim and VSCode configs are managed by my NixOS configs, so they’re backed up as well.

      • @Klaymore@sh.itjust.works
        2 years ago

        Yeah, I have a full impermanence setup using tmpfs, which is really nice. I did it like the setup on the NixOS wiki, and it’s been helpful for organizing my dotfiles and keeping track of all the random stuff that programs put everywhere.

        I actually have all my stuff in a separate /stuff folder kinda by accident so my /home only has dotfiles and things like that.

  • @isosphere@beehaw.org
    2 years ago

    I’m currently working on a disaster recovery plan using fsarchiver. I have very limited experience with it so far, but it had the features and social proof I was looking for.

    I have so far used it to create offline filesystem backups of two volumes; one was LUKS-encrypted (it has to be manually “opened” with cryptsetup first).

    It can back up live filesystems, which was important to me.

    It’s early days for my experience with this, but I’m sure others have used it and might chime in.
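
    The basic usage is roughly this (device names and options are a sketch from the docs, not the exact commands used):

        # open the LUKS volume first, then archive the filesystem inside it
        cryptsetup open /dev/sdb1 databackup
        fsarchiver savefs -z3 /mnt/ext/data.fsa /dev/mapper/databackup

        # -A allows archiving a mounted (live) filesystem
        fsarchiver savefs -A -z3 /mnt/ext/root.fsa /dev/sda2

        # restore later with something like:
        #   fsarchiver restfs /mnt/ext/data.fsa id=0,dest=/dev/mapper/databackup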

    • flatbield
      2 years ago

      Just one warning: if doing live backups, think about state and test your restores. I mention it because things like databases and ecryptfs will not archive properly while live. There are various ways around this, but consider it if you care about getting really good, complete backups taken at one point in time on live systems.
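
      One common workaround for databases is to dump them to a plain file before the file-level backup runs, so the archive captures a consistent snapshot (database names here are placeholders):

          # PostgreSQL: consistent logical dump
          pg_dump mydb > /var/backups/mydb.sql
          # MySQL/MariaDB: --single-transaction for a consistent InnoDB snapshot
          # mysqldump --single-transaction mydb > /var/backups/mydb.sql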