Hello all,

I recently bought an external 4 TB drive for backups and for keeping an image of another 2 TB drive (in case it fails). The drives are used for cold storage (backups). I would like a recommendation on which filesystem I should format it with. From the factory it comes with NTFS, and that is OK, but I wonder if it would be better with something like ext4. Being readable directly from Windows won’t be necessary (although useful), since I could just temporarily turn on SSH on the Linux machine (or a local VM) and start copying.

Edit: another reason for this post is an issue I had while backing up to an NTFS drive on Linux. I got filesystem corruption (thankfully fixed by chkdsk on a Windows machine), and I would like to avoid that in the future.

Edit 2: OK, I have decided I will go with ext4. Now I am making the image of the first 2 TB drive. Wish me luck!
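
For reference, a rough sketch of the imaging step (the device name and destination path are placeholders, adjust to your setup):

    # /dev/sdX = the 2 TB source drive, /mnt/backup = the mounted 4 TB drive (placeholders)
    sudo dd if=/dev/sdX of=/mnt/backup/2tb-drive.img bs=4M status=progress conv=fsync

    # alternative that tolerates read errors on an ailing drive:
    sudo ddrescue /dev/sdX /mnt/backup/2tb-drive.img /mnt/backup/2tb-drive.map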

    • Bogasse@lemmy.ml

      Although it depends on the backup format:

      • If you store compressed tarballs, they won’t be of any benefit.
      • If you copy whole directories as-is, filesystem-level compression and the ability to deduplicate data (e.g. with duperemove) are likely to save A LOT of storage (I’d bet on a 3x reduction); see the sketch below.
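
      As a rough sketch of the second case (assuming btrfs; the device, mount point and hash-file path are placeholders):

          # mount the backup filesystem with transparent zstd compression (btrfs)
          sudo mount -o compress=zstd /dev/sdX1 /mnt/backup

          # find duplicate extents across the copied directories and dedupe them in place
          sudo duperemove -dr --hashfile=/var/tmp/backup.hash /mnt/backup
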
  • friend_of_satan@lemmy.world

    zfs is made for data integrity. I wouldn’t use anything else for my backups. If a file gets corrupted, it will tell you which one when it hits a checksum error while reading it.
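
    For example (the pool name and device are placeholders), a periodic scrub re-reads every block, and zpool status -v lists any files hit by checksum errors:

        sudo zpool create backup /dev/sdX    # single-disk pool on the external drive
        sudo zpool scrub backup              # verify every block against its checksum
        zpool status -v backup               # reports errors and the affected file paths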

      • refalo@programming.dev

        If you’re also using raidz or mirroring in zfs, then yes, it can repair the corruption too. It can also do encryption and deduplication.
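
        Something like this, roughly (device names and the dataset name are placeholders; dedup eats a lot of RAM, so it’s optional):

            # two-disk mirror: corrupted blocks get repaired from the other copy
            sudo zpool create backup mirror /dev/sdX /dev/sdY

            # natively encrypted dataset for the backups
            sudo zfs create -o encryption=on -o keyformat=passphrase backup/cold

            # optional block-level deduplication
            sudo zfs set dedup=on backup/cold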

      • friend_of_satan@lemmy.world

        If there is a redundant copy of the block, it will auto-recover and just report what happened. Redundancy can come from multiple disks, or on a single disk by setting the “copies” property to more than 1 so each block is written to multiple places.
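
        On a single external drive that could look like this (the pool/dataset name is a placeholder); note that copies=2 halves usable space and only protects against bad sectors, not a dead drive:

            # write two copies of every block on the single backup disk
            sudo zfs set copies=2 backup/cold
            zfs get copies backup/cold    # confirm the property took effect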

  • tiny@midwest.social

    If your Linux distro is using btrfs, you can format it to btrfs and use btrfs send for backups. Otherwise the filesystem shouldn’t be too big of a deal, unless you want to restore files from a Windows machine. If that is the case, use NTFS.
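
    Roughly like this (the subvolume and mount paths are placeholders):

        # read-only snapshot of the data you want to back up
        sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-latest

        # stream the snapshot onto the external btrfs drive
        sudo btrfs send /home/.snapshots/home-latest | sudo btrfs receive /mnt/backup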

    • wallmenisOP

      I use Fedora 40 Kinoite, which uses btrfs, but I am not sure I trust it enough for this data. Also, I forgot to mention in the original post that I had some problems when overwriting files on NTFS, which caused corruption. Thankfully chkdsk on a Windows machine fixed that, but I wouldn’t like that to happen again when backing up from a Linux machine.

        • wallmenisOP

          Yeah, but I’d rather have something with a journaling system that might make recovery easier. I don’t have any issue with temporarily connecting the drive to my Pi and then moving the files via SFTP (or spinning up a VM via Hyper-V/WSL). Also, I don’t have much experience with CoW filesystems like zfs and btrfs, and I am scared to mess with them in case I cause data loss by accident. So ext4 it is…
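
          Roughly what I have in mind (device, hostname and paths are placeholders):

              # format the external drive with ext4 and mount it on the Pi
              sudo mkfs.ext4 -L coldbackup /dev/sdX1
              sudo mount /dev/sdX1 /mnt/backup

              # pull the data over SSH, preserving permissions and attributes
              rsync -aHAX --info=progress2 user@otherhost:/data/ /mnt/backup/data/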

  • kbal@fedia.io

    I’d use ext4 for that, personally. You might also consider using full-disk encryption (redhat example) if there’s going to be any data on there you wouldn’t want a burglar to have. Obviously it wouldn’t do much good if you don’t encrypt the other disk as well, but having a fresh one to try it out on makes things easier.
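
    As a rough sketch of ext4 on top of LUKS (the device and mapper names are placeholders):

        # encrypt the partition, then create ext4 inside the LUKS container
        sudo cryptsetup luksFormat /dev/sdX1
        sudo cryptsetup open /dev/sdX1 coldbackup
        sudo mkfs.ext4 /dev/mapper/coldbackup
        sudo mount /dev/mapper/coldbackup /mnt/backup

        # when finished: unmount and close the container
        sudo umount /mnt/backup
        sudo cryptsetup close coldbackup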