Hello all,
I recently bought an external 4 TB drive for backups and for keeping an image of another 2 TB drive (in case it fails). The drives are used for cold storage (backups). I would like a preference on the filesystem I should format it with. From the factory it comes with NTFS, and that is OK, but I wonder if it would be better with something like ext4. Being readable directly from Windows won’t be necessary (although useful), since I could just temporarily turn on SSH on the Linux machine (or a local VM) and start copying.
Edit: the reason for this post is also to address an issue I had while backing up to an NTFS drive on Linux. I had filesystem corruption (thankfully fixed by chkdsk on a Windows machine) and I would like to avoid that in the future.
Edit 2: OK, I have decided I will go with ext4. Now I am making the image of the first 2 TB drive. Wish me luck!
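For anyone curious, roughly what that looks like on my end, as a sketch only: the device paths below (/dev/sdX1 and /dev/sdY) are placeholders to double-check with lsblk first, since mkfs and dd are destructive, and everything runs as root.

```python
#!/usr/bin/env python3
"""Sketch of the plan above: format the new 4 TB drive as ext4, mount it,
then image the 2 TB drive onto it with dd. Device paths are placeholders."""
import subprocess
from pathlib import Path

NEW_PARTITION = "/dev/sdX1"    # hypothetical partition on the 4 TB backup drive
OLD_DRIVE = "/dev/sdY"         # hypothetical 2 TB drive to be imaged
MOUNT_POINT = "/mnt/backup"

def run(*args):
    subprocess.run(list(args), check=True)

run("mkfs.ext4", "-L", "coldbackup", NEW_PARTITION)   # format the new drive as ext4
Path(MOUNT_POINT).mkdir(parents=True, exist_ok=True)
run("mount", NEW_PARTITION, MOUNT_POINT)

# Image the whole 2 TB drive into a single file; conv=fsync flushes caches at the end.
run("dd", f"if={OLD_DRIVE}", f"of={MOUNT_POINT}/drive-2tb.img",
    "bs=4M", "status=progress", "conv=fsync")
```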
I recommend ext4. It’s extremely stable and easy to manage. Btrfs, ZFS, etc. are overkill for a pure data drive, IMO.
Although it depends on the backup format:
- If you store compressed tarballs, they won’t get any benefit from it.
- If you copy whole directories as-is, filesystem-level compression and the ability to deduplicate data (e.g. with duperemove) are likely to save A LOT of storage (I’d bet on a 3x reduction); rough sketch below.
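A rough sketch of what that dedupe pass could look like, assuming duperemove is installed and the drive uses a reflink-capable filesystem (Btrfs, or a recent XFS); the paths are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of a post-backup dedupe pass with duperemove. Paths are placeholders,
and this only helps on a reflink-capable filesystem (Btrfs or recent XFS) --
it is not useful on ext4 or NTFS."""
import subprocess

BACKUP_DIR = "/mnt/backup/snapshots"          # hypothetical backup tree
HASH_DB = "/mnt/backup/.duperemove.hash"      # cache so later runs are faster

# -d actually performs the dedupe, -r recurses into subdirectories.
subprocess.run(
    ["duperemove", "-dr", f"--hashfile={HASH_DB}", BACKUP_DIR],
    check=True,
)
```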
This! And I’d probably add par2 parity files - just in case some bitrot happens.
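For what it’s worth, a minimal sketch of adding that parity, assuming the par2 CLI is installed; the archive folder and the ~10% redundancy level are placeholders to adjust:

```python
#!/usr/bin/env python3
"""Sketch: create par2 parity for each backup archive, then verify it.
Assumes the par2 command-line tool is installed; the directory and the ~10%
redundancy level (-r10) are placeholders to tune for your own backups."""
import subprocess
from pathlib import Path

ARCHIVE_DIR = Path("/mnt/backup/archives")    # hypothetical archive folder

for archive in ARCHIVE_DIR.glob("*.tar.gz"):
    # Creates archive.tar.gz.par2 plus the recovery volumes next to the file.
    subprocess.run(["par2", "create", "-r10", str(archive)], check=True)
    # Sanity check: the parity set should verify cleanly right after creation.
    subprocess.run(["par2", "verify", f"{archive}.par2"], check=True)
```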
I can’t tell if this is actual advice or irony
ZFS is made for data integrity; I wouldn’t use anything else for my backups. If a file is corrupted, it will tell you which one when it hits a checksum error while reading it.
Can it recover the error?
If you’re also using raidz or mirroring in ZFS, then yes. It can also do encryption and deduplication.
If there is a redundant block, it will auto-recover and just report what happened. Redundancy can be set up with multiple disks, or by having a single disk write blocks to multiple places by setting the “copies” property to more than 1.
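A minimal sketch of that single-disk setup, assuming ZFS is installed; the pool name and /dev/sdX are placeholders, and zpool create will wipe the disk:

```python
#!/usr/bin/env python3
"""Sketch of the single-disk setup described above: one pool with copies=2 so
ZFS can self-heal single-block corruption, plus a scrub to walk every checksum.
Pool name and /dev/sdX are placeholders; run as root."""
import subprocess

DISK = "/dev/sdX"       # hypothetical 4 TB backup disk
POOL = "coldbackup"     # hypothetical pool name

def run(*args):
    subprocess.run(list(args), check=True)

# copies=2 stores every block twice (halving usable space); lz4 is cheap compression.
run("zpool", "create", "-O", "compression=lz4", "-O", "copies=2", POOL, DISK)

# Periodically: scrub reads everything and reports (and repairs) checksum errors.
run("zpool", "scrub", POOL)
run("zpool", "status", "-v", POOL)
```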
Btrfs
Agree, and it compresses automatically, which can be useful for some backups.
If your Linux distro is using Btrfs, you can format it to Btrfs and use btrfs send for backups. Otherwise, the filesystem shouldn’t be too big of a deal, unless you want to restore files from a Windows machine. If that is the case, use NTFS.
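Roughly what that could look like as a one-shot (non-incremental) send, assuming /home is a Btrfs subvolume and the backup drive is Btrfs-formatted and mounted at /mnt/backup (both placeholders):

```python
#!/usr/bin/env python3
"""Sketch of a full (non-incremental) btrfs send/receive backup. Assumes /home
is a Btrfs subvolume and the backup drive is Btrfs-formatted and mounted at
/mnt/backup; both paths are placeholders. Run as root."""
import subprocess
from datetime import date
from pathlib import Path

SOURCE_SUBVOL = "/home"                            # hypothetical source subvolume
SNAPSHOT = f"/home/.snapshots/home-{date.today()}"
DEST = "/mnt/backup"                               # backup drive mount point

# btrfs send needs a read-only (-r) snapshot as its source.
Path("/home/.snapshots").mkdir(exist_ok=True)
subprocess.run(
    ["btrfs", "subvolume", "snapshot", "-r", SOURCE_SUBVOL, SNAPSHOT],
    check=True,
)

# Stream the snapshot straight into btrfs receive on the backup drive.
send = subprocess.Popen(["btrfs", "send", SNAPSHOT], stdout=subprocess.PIPE)
subprocess.run(["btrfs", "receive", DEST], stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise SystemExit("btrfs send failed")
```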
I use Fedora 40 Kinoite, which uses Btrfs, but I am not sure I trust it enough for this data. Also, I forgot to mention in the original post that I had some problems when overwriting files on NTFS which caused corruption. Thankfully chkdsk on a Windows machine fixed that, but I wouldn’t like that to happen again when backing up from a Linux machine.
NTFS has never been well supported on Linux. Any native filesystem will be fine.
Are you sharing this drive with Windows machines? It may be better to go with exFAT or something more neutral in that case.
Yeah, but I’d rather have something with journaling that might make recovery easier. I don’t have any issue with temporarily connecting the drive to my Pi and then moving the files via SFTP (or spinning up a VM via Hyper-V/WSL). Also, I don’t have much experience with CoW filesystems like ZFS and Btrfs, and I’m scared to mess with them in case I cause data loss by accident. So ext4 it is…
I’d use ext4 for that, personally. You might also consider full-disk encryption (Red Hat example) if there’s going to be any data on there you wouldn’t want a burglar to have. Obviously it wouldn’t do much good if you don’t encrypt the other disk as well, but having a fresh drive to try it out on makes things easier.
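A minimal sketch of that, assuming cryptsetup is installed; /dev/sdX1 is a placeholder partition and luksFormat destroys whatever is on it, so verify the device first:

```python
#!/usr/bin/env python3
"""Sketch of the LUKS + ext4 idea above. cryptsetup prompts for a passphrase
interactively; /dev/sdX1 is a placeholder partition. Run as root."""
import subprocess

PARTITION = "/dev/sdX1"     # hypothetical partition on the backup drive
MAPPER = "backup"           # unlocked device appears as /dev/mapper/backup

def run(*args):
    subprocess.run(list(args), check=True)

run("cryptsetup", "luksFormat", PARTITION)        # set up LUKS encryption
run("cryptsetup", "open", PARTITION, MAPPER)      # unlock (prompts for passphrase)
run("mkfs.ext4", "-L", "coldbackup", f"/dev/mapper/{MAPPER}")  # ext4 inside LUKS
```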
There was just a similar post here. You may find interesting clues there as well.
Buck the trend, put APFS on those bad boys.
Not a bad idea.
Well, given the current state of the Open Source driver, I think it is a bad idea.
Although, I guess if you can tolerate closed source….
I was kidding…
Or kick it old school with ReiserFS.
ext4 is the Linux equivalent of NTFS.