Been building this server up for about 5 years, adding hard drives as needed.

Running unraid

E5-2698 v3
64gb ddr4 ecc
X99-E WS
P600 for transcoding
10gbit networking w/ 3gbit fibre WAN
15 HDDs of assorted sizes, totaling 148TB raw, 132TB usable

    • quafeinum@lemmy.ca · 2 years ago

      But I don’t want to spend my free time managing yet another server. Slap unraid on it and call it a day.

      • lightrush@lemmy.ca · 2 years ago

        I was referring to the actual storage system. Unraid’s funny JBOD vs. some easy-to-use, industry-standard solutions. Not the overall OS with any dancing bears it displays, or doesn’t. ☺️

        If you’re looking at the latter, I have no argument against installing something with an easy-to-use interface, like Unraid.

    • Hutch@lemmy.ca · 2 years ago

      Do you have any guides for setting this up and optimising it? I’d like my next build to use Debian (like my desktop and servers) instead of Unraid or Synology, both of which are lacking in different ways and ready for retirement.

      • lightrush@lemmy.ca · 2 years ago

        Guides, no, but there’s good documentation, e.g. LVMRAID and ZFS. Here’s an overview of ZFS.

        For storage arrays, I would use ZFS over LVMRAID for a few reasons, the most important being data integrity.
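
        One concrete data-integrity benefit is that ZFS checksums every block and can verify the whole pool on demand. A minimal sketch, assuming a pool named tank (placeholder name):

        # read back and verify every block against its checksum (hypothetical pool name)
        sudo zpool scrub tank
        # report scrub progress, plus any checksum errors and the files they affect
        sudo zpool status -v tank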

        For the system drive, i.e. where the OS is installed, LVMRAID might be simpler to use. There’s probably a wiki somewhere for installing Debian on ZFS, but LVMRAID has been a Linux staple for a while and it’s easy to install an OS onto, e.g. via the standard OS installers. You could install on plain LVM, then once you’re up and running, convert it to LVMRAID with a single command and a second SSD.

        The simplest possible scheme I can think of from a setup perspective is to use the Debian installer to put your OS on LVM. Once Debian is running, install a second SSD of the same size or larger, then use LVM’s lvconvert to convert to a RAID1. See “linear to raid1” in the LVMRAID man page (doc). Then for storage, install ZFS, create a zpool of the desired type from the available disks, and throw your data on it.
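
        For illustration, the linear-to-raid1 conversion could look roughly like this, assuming a volume group named vg0, a root LV named root, and the new SSD showing up as /dev/sdb (all placeholder names; adapt them from the man page and your own layout):

        sudo pvcreate /dev/sdb                        # prepare the new SSD as a physical volume
        sudo vgextend vg0 /dev/sdb                    # add it to the existing volume group
        sudo lvconvert --type raid1 -m 1 vg0/root     # convert the linear LV into a two-way mirror
        sudo lvs -a -o name,copy_percent,devices vg0  # watch the mirror sync up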

        Read the docs (RTFM), write down a planned list of steps, build the commands needed for each step from the docs (where commands are relevant), then try it on a machine without data.

        Here’s a sample command I’ve used to create one of my zpools:

        sudo zpool create -f -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa -O sync=disabled -O mountpoint=/media/storage-volume1 -O encryption=on storage-volume1 raidz /dev/disk/by-id/ata-W /dev/disk/by-id/ata-X /dev/disk/by-id/usb-Y /dev/disk/by-id/usb-Z
        

        It looks complicated but it’s rather straightforward when you read the doc.
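
        As a rough follow-up, once the pool exists you can sanity-check it and carve out datasets; the pool name here matches the command above, and the dataset name is just an example:

        sudo zpool status storage-volume1         # confirm the raidz vdev and that all disks are ONLINE
        sudo zfs create storage-volume1/media     # example dataset, inherits the pool's properties
        zfs list -o name,used,avail,mountpoint    # see what you ended up with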

        • Hutch@lemmy.ca · 2 years ago

          Sound advice. I tend to script everything via Ansible, and it sounds like beyond the initial OS install this is a good candidate for automation. I’m not sure I needed another excuse to go hardware shopping, yet here we are.

          • lightrush@lemmy.ca · 2 years ago

            You’re the Ansible now. [I’m the captain now.jpg]

            This is all automatable, of course. I’m using SaltStack, but the storage setup is no longer part of it. It used to be, but then I migrated from LVMRAID mirrors to RAIDZ and never updated the code. ZFS setup is just too easy; it’s one command, more or less. I just keep the exact command for each machine, with its exact drives, on file.
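
            If you did want to fold the storage setup back into config management, the usual trick is just an existence guard around that one command; a hedged sketch, reusing the placeholder names from above:

            # only create the pool if it doesn't already exist, so repeated runs are harmless
            if ! zpool list storage-volume1 >/dev/null 2>&1; then
                sudo zpool create -o ashift=12 -O mountpoint=/media/storage-volume1 \
                    storage-volume1 raidz /dev/disk/by-id/ata-W /dev/disk/by-id/ata-X
            fi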