Hi all, so I have 6 NVMe drives in my server, which I use in pairs for download cache, appdata, and VMs. Would there be a benefit, or would it perform worse, if I combined all 6 drives into one ZFS pool to get bitrot protection on my VM drives and the increased storage space?

  • Red@reddthat.com · 3 points · 2 years ago

    Technically, you could get worse throughput because everything would act as “one drive” rather than 3 independent pools.
    But if you know your writes are intermittent and all 3 workloads aren’t hitting the disks at the same time, I think it would be fine.

    I’m a RAIDZ2 person when it comes to pools (2 parity drives) because I’m scared of losing data if a second drive dies during a rebuild, so even with your 6 drives I would only end up with about +1 drive’s worth of extra usable space compared to your current pairs (quick arithmetic sketch below).

    I would only do it if I needed the space and didn’t have any extra slots available for expansion.
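
    A rough back-of-the-envelope sketch of that capacity math (Python; it assumes all 6 drives are the same size, here a hypothetical 1 TB each, and that your current pairs are mirrors — adjust to your actual setup):

    # Rough usable-capacity comparison - 1 TB per drive is a placeholder, swap in your real sizes.
    DRIVE_TB = 1.0
    DRIVES = 6

    # Current layout assumed to be 3 mirrored pairs -> half the raw capacity is usable.
    mirrors_usable = (DRIVES / 2) * DRIVE_TB      # 3.0 TB

    # Single RAIDZ1 vdev: 1 drive's worth of parity.
    raidz1_usable = (DRIVES - 1) * DRIVE_TB       # 5.0 TB

    # Single RAIDZ2 vdev: 2 drives' worth of parity.
    raidz2_usable = (DRIVES - 2) * DRIVE_TB       # 4.0 TB

    print(f"mirrors: {mirrors_usable} TB, raidz1: {raidz1_usable} TB, raidz2: {raidz2_usable} TB")
    print(f"raidz2 gain over current mirrors: +{raidz2_usable - mirrors_usable} TB (~1 drive)")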

    • Nogami@lemmy.world · 2 points · 2 years ago

      I gotta disagree here. If you’re using 6 NVMe drives in one pool, the ZFS data is striped across all 6 drives, so you’ll get more performance, assuming the controller they’re attached to can take advantage of it and you have a fast enough CPU to handle the parity calculations.

      This is far more evident on spinning disks than NVMe drives, but there should still be some speedup. Doing it for the protection and having a single large storage space would also be significant benefits in my book.

      ZFS also has far superior data caching (the ARC), so commonly used data gets served from RAM rather than the drives.
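
      If you want to see how well that cache is actually doing, here’s a minimal sketch (Python, just parsing the kernel’s arcstats file — this assumes Linux with OpenZFS loaded, where that file lives under /proc):

      # Minimal ARC hit-ratio check - assumes Linux with OpenZFS (kstat file under /proc).
      ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

      stats = {}
      with open(ARCSTATS) as f:
          for line in f.readlines()[2:]:       # skip the two kstat header lines
              name, _type, value = line.split()
              stats[name] = int(value)

      hits, misses = stats["hits"], stats["misses"]
      print(f"ARC size: {stats['size'] / 2**30:.1f} GiB")
      print(f"ARC hit ratio: {hits / (hits + misses):.1%}")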

      • Red@reddthat.com · 1 point · 2 years ago

        Hmm, I was thinking that if they had 3 × 2 NVMe drives, they could have a total read/write bandwidth of 900 MB/s (assuming 300 MB/s throughput per pool, and assuming the 3 different RAIDs/pools are each completely independent).

        Whereas if they make it one ZFS pool, it would be 300 MB/s, because all 3 “apps” would be accessing the same “pool”.

        Though this may only be relevant to HDDs, not NVMe drives, as I haven’t had the privilege of running NVMe drives in ZFS/RAID/etc.

        • Nogami@lemmy.world · 2 points · 2 years ago

          If he had 6 x NVMe drives, and his controller and motherboard supported the full bandwidth, the pool should be capable of roughly 300 MB/s x 6 (1.8 GB/s), parity calculations notwithstanding, because the data is striped across all drives in the pool and it reads and writes to/from all of them at the same time.

          For example, if a big 1 TB file is split across 6 physical devices in ZFS with striping, the system will read pieces of that 1 TB from all NVMe drives in the pool at the same time, not from one drive at a time.

          Transfer rate is determined by the physical devices (and the hardware they’re attached to), not by the pool.
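
          Back-of-the-envelope version of that, as a toy model that ignores parity overhead and uses a placeholder 300 MB/s per drive (real NVMe drives are usually much faster):

          # Rough bandwidth ceilings - 300 MB/s per drive is just a placeholder figure.
          PER_DRIVE_MBPS = 300
          DRIVES = 6

          # One 6-wide striped pool: a single workload can pull from all drives at once.
          single_pool_ceiling = PER_DRIVE_MBPS * DRIVES    # 1800 MB/s (~1.8 GB/s)

          # Three independent 2-drive pools: each workload only ever sees its own pair.
          per_pair_ceiling = PER_DRIVE_MBPS * 2            # 600 MB/s per workload

          print(f"one 6-wide pool:    up to {single_pool_ceiling} MB/s for any single workload")
          print(f"three 2-wide pools: up to {per_pair_ceiling} MB/s per workload")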

          Here’s a little video about it - it isn’t using ZFS, but the idea of increasing transfer rates by RAIDing your NVMe drives together is the same. He’s getting speeds of over 21 GB/s for reads and almost 17 GB/s for writes (though his example doesn’t have parity protection, which helps boost his speeds).

          https://www.youtube.com/watch?v=DXT1IXFIFAI