It makes no sense for me anymore… I've found more downsides than real use in my RAID5 array.

My setup: 5 × 22TB disks in RAID 5

  • Data organising is estimated at 20 days!
  • RAID5 rebuild time is unknown; I've never had to do one (yet) :-)
  • Disks never sleep with Btrfs, and power here costs 0.30 per kWh
  • Constant noise of 5 disks clicking instead of only the one or two actually in use
  • Do I need 80TB of contiguous storage? Not really: 2,700 movies = 12TB, 8,000 TV show episodes = 14TB, most still in H.264, a few in H.265, and really very few in AV1 (fantastic, by the way) — quick math just below the list
  • I don't care about losing media; I can rebuild everything over 10Gb fiber, most of it automatically. Most of my private stuff is under an encrypted 3-2-1 backup anyway
  • High availability is not a concern; I'm the only one using this box. And even then, my homelab is best effort, not 24/7
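
A rough sanity check on capacity, treating the 80TB figure as the formatted size of the 5 × 22TB RAID5 and using the rough media sizes from the list above:

```python
# RAID5 usable space is (n - 1) disks' worth; one disk goes to parity.
disks, size_tb = 5, 22
usable_tb = (disks - 1) * size_tb   # 88TB raw, roughly 80TB once formatted

movies_tb, tv_tb = 12, 14           # rough sizes from the list above
used_tb = movies_tb + tv_tb

print(f"usable: {usable_tb} TB, used by media: {used_tb} TB "
      f"({used_tb / usable_tb:.0%} of the array)")
```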

I have another NAS, all SSD with 8 SSDs, but I hate the nature of RAID on SSDs: they die unexpectedly most of the time. I'd rather lose 4TB than put 30TB at risk if 2 or more SSDs decide to stop working.

So duplicating onto another disk as a mirror (rsync) is maybe better for me.
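
Roughly what I have in mind, as a sketch only: a one-way rsync mirror wrapped in Python so it can run from cron. The mount points are placeholders, not my actual paths.

```python
import subprocess

SRC = "/mnt/primary/"   # trailing slash: copy the contents, not the directory itself
DST = "/mnt/mirror/"    # second disk mounted here (placeholder path)

# -a        archive mode (recursion, permissions, times, symlinks)
# --delete  keep the mirror exact by removing files deleted from SRC
# Add --dry-run first to preview what would change.
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)
```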

  • truthfultemporarily@feddit.org · 1 day ago

    RAID5 has been dead in commercial contexts for around 10 years. The reason is that the resilver time is just too long. Now mostly you either use striped mirrors or do redundancy at the software level.

    • mbirth 🇬🇧@lemmy.ml · 1 day ago

      Now mostly you either use striped mirrors

      How is rebuilding an xx TB mirrored disk faster than rebuilding an xx TB disk that’s part of a RAID? Since most modern NASes use software RAID, it’s only a matter of tweaking a few parameters to speed up the rebuild process.
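
      For Linux mdraid, the parameters in question are the kernel's resync speed limits. A minimal sketch, assuming an md-based NAS and root access; the values are only illustrative, in KB/s per device:

      ```python
      from pathlib import Path

      # Kernel-wide md resync/rebuild throttles (KB/s per device).
      MIN = Path("/proc/sys/dev/raid/speed_limit_min")
      MAX = Path("/proc/sys/dev/raid/speed_limit_max")

      print("current min/max (KB/s):", MIN.read_text().strip(), MAX.read_text().strip())

      # Raise the floor so a rebuild doesn't crawl under normal I/O load;
      # lowering these instead is how you make a rebuild gentler on the disks.
      MIN.write_text("100000\n")   # ~100 MB/s minimum
      MAX.write_text("500000\n")   # ~500 MB/s ceiling
      ```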

      • MentalEdge@sopuli.xyz · 1 day ago (edited)

        Rebuilding parity requires processing power. Copying a mirror does not.

        There’s also the fact that a rebuild stresses the drives, increasing the chance of a cascade failure, where the rebuild after one drive failure reveals further drive failures.

        It all results in management overhead, which having to “just tweak some parameters” makes worse, not better.

        In comparison to simple mirroring and backing up offsite, RAID is a headache.

        The redundancy it provides is better achieved in other ways, and the storage pooling it provides is better achieved in other ways.

        • mbirth 🇬🇧@lemmy.ml · 1 day ago

          Rebuilding parity requires processing power.

          That shouldn’t be an issue with any NAS bought in the past decade.

          the rebuild stresses the drives

          You can tweak the parameters so the rebuild is done more slowly. Also, mirroring a disk stresses the (remaining) disk as well. (But to be fair, if that one fails, you’ll still be able to access the data on the other mirror pair(s).)

          It all results in management overhead

          I’m not seeing that. Tweaking parameters is not necessary unless you want to change the default behaviour. Default behaviour is fine in most cases.

          In comparison […] RAID is a headache.

          Speak for yourself. I rather enjoy the added storage capacity.

          • MentalEdge@sopuli.xyz · 1 day ago (edited)

            I rather enjoy the added storage capacity.

            So do I.

            It’s just that I use Btrfs, mergerfs, or LVM to pool storage. Not RAID.

            Making changes to my storage setup is far easier using these options, much more so than RAID.

            Mergerfs especially makes adding or removing capacity truly trivial, with the only lengthy processes involved being bog-standard file transfers.
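
            As a sketch of how little is involved (paths are placeholders, not my actual layout): pooling two already-mounted data disks is a single mergerfs mount, and growing the pool is just adding another branch to the list.

            ```python
            import subprocess

            branches = "/mnt/disk1:/mnt/disk2"   # colon-separated underlying filesystems
            pool = "/mnt/pool"                   # the combined view applications see

            # Minimal invocation: mergerfs <branches> <mountpoint>
            subprocess.run(["mergerfs", branches, pool], check=True)

            # Adding capacity later: mount the new disk, then remount the pool with
            # "/mnt/disk1:/mnt/disk2:/mnt/disk3" as the branch list.
            ```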

            Hard drive storage is pretty cheap. And the effort it takes to make changes to a RAID volume as my needs change over the years just isn’t worth the savings.

            • mbirth 🇬🇧@lemmy.ml · 1 day ago

              How often do you change your storage setup? I configured everything once, like 5 years ago, and haven’t touched it since. I can add larger disks in pairs and the Synology does some LVM/mdraid magic to add the newly available free space as RAID1, until I add a third larger disk and it remodels it to RAID5.

              How do you handle parity with MergerFS? Or are all your storage partitions mirrored?

              Hard drive storage is pretty cheap.

              Not really, especially if you’re looking for CMR drives. And any storage increase needs at least 2 disks, with basically no (ethical) way to get any money back for the old ones.

              • MentalEdge@sopuli.xyz · 1 day ago

                Every year or so.

                My NAS is self-built.

                I used to buy one more drive whenever my pools would start getting full. I’m now at a point where I can discard data about as fast as I get more to store, so I don’t expect to need new drives until one fails.

                I’ve re-arranged my volumes to increase or decrease parity many times after buying drives or instead of buying drives.

                Mergerfs makes access easy; the underlying drives are either with or without parity pairs, and I have things arranged so that critical files are always stored with mirroring, while non-critical files are not.

                • mbirth 🇬🇧@lemmy.ml · 1 day ago

                  Interesting! Thank you for that insight. I might adopt some methods for when I finally replace the Synology with a new NAS (which will definitely not be another Synology device!).

      • truthfultemporarily@feddit.org · 1 day ago

        It’s not faster, but you’re safer during it. If you have a RAID5, you cannot have a second disk fail during resilver. With striped mirrors, another disk failure has at most a 1/3 chance of destroying all data.
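
        A back-of-the-envelope check of that figure, assuming the second failure is equally likely to hit any surviving disk; data is only lost if it hits the dead disk's mirror partner:

        ```python
        from fractions import Fraction

        for pairs in (2, 3, 4):                # striped mirrors with 2, 3, or 4 pairs
            disks = 2 * pairs
            survivors = disks - 1              # disks left after the first failure
            p_fatal = Fraction(1, survivors)   # only the dead disk's partner is fatal
            print(f"{pairs} pairs ({disks} disks): "
                  f"P(second failure kills the pool) = {p_fatal}")

        # 2 pairs -> 1/3, 3 pairs -> 1/5, 4 pairs -> 1/7.
        # For comparison, in RAID5 any second failure during the rebuild is fatal.
        ```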

        • mbirth 🇬🇧@lemmy.ml · 1 day ago

          Agreed. However, mirroring the remaining disk onto a new one makes it more likely for that remaining disk to fail too, I guess?

          I think the more important rule would be to not buy two disks from the same batch. And then go with whatever tickles your fancy.

    • moonpiedumplings@programming.dev · 21 hours ago

      No, isn’t it only software RAID5 done via Btrfs?

      Btrfs + hardware RAID should work fine. The OS can’t tell the difference anyway.

      • tenchiken@anarchist.nexus · 14 hours ago

        Yeah, but that’s not how I interpreted it. OP might be using either, I suppose.

        Personally, hardware RAID irritates me since recovery scenarios are harder to recover from without spending $$$. I’ve had more luck with mdraid recovery than with several vendors’ hardware RAID.

        I do think Btrfs is cool, but like all things, it has caveats.

    • eagerbargain3@lemmy.world (OP) · 1 day ago

      Don’t need uptime… I tried TrueNAS SCALE on my UGREEN NAS with 64GB RAM (no ECC) and did not like it. ZFS is great, no question, but the learning curve and the risk of losing my whole array are too high (or running two RAIDZ2 arrays). A pure ext4 JBOD with replication once in a while is enough and more energy efficient for media.

      Anyway, I plan to keep re-encoding most of it to AV1 down the line, so storage needs will shrink over time (over a long period).

      • exu@feditown.com · 8 hours ago

        RAID (any form of it) is an uptime technology. If you don’t need uptime, you don’t need RAID.

  • 4grams@awful.systems · 1 day ago (edited)

    This is why I went back to a simple SnapRAID and mergerfs setup. It only spins up the disk it’s actually using; slow, but a lot more efficient. It’s also based on dead-simple ext4 drives, which are all still accessible even if the software fails; it’s all file-level. I’ve lost many drives over the years and have successfully rebuilt every time.
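
    The routine upkeep is basically two SnapRAID commands on a schedule; a rough sketch of what that looks like, assuming snapraid is installed and /etc/snapraid.conf already lists the data and parity disks:

    ```python
    import subprocess

    # Update parity to match the current contents of the data disks.
    subprocess.run(["snapraid", "sync"], check=True)

    # Check part of the array against parity to catch silent corruption.
    subprocess.run(["snapraid", "scrub"], check=True)

    # After swapping in a replacement disk, recovery is a separate manual step:
    #   snapraid -d <diskname> fix
    ```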

    The scale is about the same as yours: about 24TB made up of 4TB and 10TB disks. It’s unglamorous and old school, but it works and is reliable.